Heat Flow Characteristics of Ferrofluid in Magnetic Field Patterns for Electric Vehicle Power Electronics Cooling
The ferrofluid is a kind of nanofluid that has magnetization properties in addition to excellent thermophysical properties, which has resulted in an effective performance trend in cooling applications. In the present study, experiments are conducted to investigate the heat flow characteristics of ferrofluid based on thermomagnetic convection under the influence of different magnetic field patterns. The temperature and heat dissipation characteristics are compared for ferrofluid under the influence of no-magnet, I, L, and T magnetic field patterns. The results reveal that the heat gets accumulated within ferrofluid near the heating part in the case of no magnet, whereas the heat flows through ferrofluid under the influence of different magnetic field patterns without any external force. Owing to the thermomagnetic convection characteristic of ferrofluid, the heat dissipates from the heating block and reaches the cooling block by following the path of the I magnetic field pattern. However, in the case of the L and T magnetic field patterns, the thermomagnetic convection characteristic of ferrofluid drives the heat from the heating block to the endpoint location of the pattern instead of the cooling block. The asymmetrical heat dissipation in the case of the L magnetic field pattern and the symmetrical heat dissipation in the case of the T magnetic field pattern are observed following the magnetization path of ferrofluid in the respective cases. The results confirm that the direction of heat flow could be controlled based on the type of magnetic field pattern and its path by utilizing the thermomagnetic behavior of ferrofluid. The proposed lab-scale experimental set-up and results database could be utilized to design an automatic energy transport system for the cooling of power conversion devices in electric vehicles.
Introduction
In the last few decades, the need for advanced cooling fluids which surpass the traditional fluids has been increasing to improve the efficiency and lifespan of electronic devices [1]. A nanofluid is a colloidal fluid comprised of nano-sized particles dispersed into base fluids. Owing to the Brownian motion of nanoparticles, the thermal conductivity of nanofluids is superior compared to base fluids [2,3]. The heat transfer performance of nanofluids improves with the uniform distribution of nanoparticles in base fluids, hence dispersants have been used to enhance the stability of nanofluids [4].
Patrizi et al. [5] have studied the performance of DC-DC converters under temperature variations by considering thermal cycling, temperature step, and high-temperature tests. The input ripple increases from 80 mV to 85 mV when the temperature approaches 80 °C, and the efficiency of the converter reduces by a maximum of 7% with an increase in temperature from 20 °C to 120 °C. A significant amount of heat loss occurs in high-flux-density power electronic devices, which results in an increase in operating temperature.
The higher temperature of such devices degrades their efficiency and operating life. Furthermore, the higher temperature of power electronics creates performance issues in the gate threshold voltage shift, a decrease in switching speed, a decrease in mobility, and a decrease in noise margin [6]. Thermal stresses are induced owing to the increase in temperature, which causes the failure of power electronics devices, and hence the effective thermal management of these devices is requisite for cost-effective and reliable energy conversion [7].
Garud et al. [8] have studied the heat transfer performance characteristics of single- and hybrid-particle nanofluids and concluded that Al2O3/Cu nanofluid shows the highest performance evaluation criteria of 1.12. Here, the performance evaluation criteria stand for a ratio of the Nusselt number of nanofluid over the base fluid to the friction factor of the nanofluid over the base fluid. Furthermore, Garud et al. have shown that Al2O3/Cu nanofluid with oblate spheroid-shaped nanoparticles depicts enhanced first- and second-law characteristics compared to those with spherical-, prolate spheroid-, blade-, cylinder-, platelet-, and brick-shaped nanoparticles [9]. Ghadiri et al. [10] have studied experimentally the cooling performance of a photovoltaic thermal (PVT) system with Fe3O4-water ferrofluid coolant under the influence of constant and alternating magnetic fields. An enhanced exergy of 48 W is extracted from the PVT system with 3% ferrofluid under an alternating magnetic field.
Nanofluids depict magnetic characteristics because of the dispersion of ferromagnetic nanoparticles into the base fluid, in which case the nanofluids are named ferrofluids. Flow control for ferrofluids could be achieved by using their magnetism properties, which were first discovered by NASA in a state of zero gravity [11]. The magnetization performance of ferrofluid changes with temperature, and it is strongest at saturation magnetization. As the temperature of the fluid increases, it gradually weakens, and when the intrinsic Curie temperature of the fluid is reached, a permanent loss of magnetization performance occurs [12]. Therefore, when a temperature field is generated due to heat transfer in the ferrofluid, a non-equilibrium of magnetization occurs inside because of the local temperature change of the ferrofluid. When the ferrofluid is exposed to a magnetic field environment, the fluid in a relatively low-temperature region is attracted toward the magnetic field, which results in fluid flow; this is called thermomagnetic convection. Thus, the ferrofluid can flow without a special transport device, such as a coolant pump.
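As a brief formal sketch of this mechanism (standard ferrohydrodynamics background, not stated explicitly in this paper), the driving body force behind thermomagnetic convection can be written as

$$\mathbf{f}_m = \mu_0\,(\mathbf{M}\cdot\nabla)\mathbf{H}, \qquad M(T) \approx M_0\left[1 - \beta\,(T - T_0)\right],$$

where $\mu_0$ is the vacuum permeability, $\mathbf{M}$ the fluid magnetization, $\mathbf{H}$ the applied field, and $\beta$ the pyromagnetic coefficient. Because $M$ decreases with temperature, cooler fluid is pulled more strongly toward high-field regions and displaces hotter fluid, producing circulation without any pump.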
Ferrofluid has the property of being magnetized in response to a magnetic field in addition to the properties of existing nanofluids, which has led to a sustained amount of research on heat transfer systems using the magnetization properties of ferrofluids [13]. Lian et al. [14] have conducted a study on the flow rate according to the thermal load and magnetic field distribution of a ferrofluid. The flow rate was measured through the visualization of the flow pattern by using micro-PIV (particle image velocimetry), and the temperature of the ferrofluid was measured according to the output of the heat source. Through the experiment, the correlation between the thermal load and the flow velocity of the ferrofluid was established, and as a result, it was confirmed that the flow velocity of the ferrofluid increases as the thermal load increases. Xuan et al. [15] have developed a lab-scale cooling device and have tested its cooling performance using ferrofluid under the influence of a magnetic field. A cooling capacity of 5 W is achieved by utilizing only the magnetization characteristics of the ferrofluid. Hejazian et al. [16] have proposed a numerical model to simulate the heat transfer performance of ferrofluid under the influence of a magnetic field and presented a consistent validation of the numerical model with the experimental results. Koji et al. [17] have analyzed the heat transfer performance according to the change in flow rate and under the application of a magnetic field. The results confirm that the slower the ferrofluid flow rate, the higher the heat transfer performance of the ferrofluid when forced convection is induced by the magnetic field. Seo et al. [18] have investigated the heat transfer and illuminance characteristics of a high-power LED cooling system with ferrofluid according to magnetic field intensity and the volume fraction of nanoparticles. Yamaguchi et al. [19] have investigated the heat transportation characteristics of a magnetically driven cooling device with ferrofluid as a coolant and depicted a thermal energy transportation of 35.8 W with a heat transfer distance of 5 m. Furthermore, M.S. Pattanaik et al. [20] have shown a similar ferrofluid-based magnetically driven cooling device, which depicts a heat transport distance of 8 m.
Sustained research has been conducted on ferrofluid as a cooling fluid by utilizing its thermomagnetic convection characteristics when exposed to a magnetic field environment, along with its improved heat transfer performance. By utilizing the thermomagnetic convection characteristics of the ferrofluid, it is possible to control the thermal flow direction of the ferrofluid through symmetrical and asymmetrical magnetic field patterns, and it is determined that this can be applied to a cooling system. However, most of the prior studies are focused on the unidirectional flow of thermomagnetic convection through a simple arrangement of permanent magnets or a low-power solenoid coil, which enables the magnetization characteristics of ferrofluid. Research related to the directional control of the heat flow using the thermomagnetic convection of ferrofluid under the application of symmetrical and asymmetrical magnetic fields has not yet been explored. In this study, in order to achieve effective cooling performance for a ferrofluid-based cooling device, the heat dissipation characteristics of ferrofluid under the influence of three magnetic field patterns (with symmetrical and asymmetrical magnet arrangements) are experimentally investigated. The database generated from the lab-scale experiments will be applied to design a cooling system with ferrofluid coolant for the thermal management of high-flux-density devices in electric vehicles.
Problem Conceptualization
The ferrofluid has the ability to react with an applied magnetic field owing to the magnetic property of the dispersed nanoparticles. In order to study the behavior of ferrofluid under the influence of a magnetic field, a ferrofluid-based heat transfer system has been designed in the present work. In the open literature, the concept of an automatic energy transport device has already been proposed, based on which the ferrofluid circulates in the direction of the applied magnetic field. Using this concept, the present system is fabricated such that ferrofluid is filled in a cavity whose one side is exposed to a heating device and whose other side is exposed to a cooling device. Different shapes of magnetic field patterns have been applied to this system. Thus, under the applied temperature difference and magnetic field, the behavior of ferrofluid is observed by conducting several experiments. The details of the experimental set-up, procedure, and parameters are explained in Sections 2.2 and 2.3.
Experimental Set-Up Description
Figure 1 shows the schematic diagram of the experimental set-up and the actual image. The experimental set-up comprises two acrylic plates at the top and bottom and two aluminum blocks, one with a ceramic heater and the other with a cooling fan. The experiments are performed in a constant-temperature and constant-humidity test chamber with an ambient temperature of 25 °C [21]. A small gap is created in the bottom acrylic plate to fill the ferrofluid such that the ferrofluid is sandwiched between the two acrylic plates. The top plate is provided with small holes to insert the thermocouples into the ferrofluid, and one large hole at a corner is provided to insert the ferrofluid. To ensure tight contact between both acrylic plates and to avoid leakage of ferrofluid, both plates are tightly sealed using silicone. The ceramic heater and cooling fan are attached to the respective aluminum blocks using heat-resistant silicone (LC179). A DC power supply is connected to the heater and cooling fan. The ceramic heater has a leakage current of less than 0.5 mA, and its voltage is fixed at 10 V through the DC power supply, considering the maximum operating temperature of the ferrofluid [22]. The heater is powered using the DC power supply with a voltage and current of 10 V and 0.75 A, respectively, which results in a power input of 7.5 W based on power = voltage × current at the heating block. The cooling block is provided with natural convection cooling and fan cooling. The cooling fan is operated with a voltage and current of 10 V and 0.07 A, which results in a power input of 0.7 W. The dimensions of each experimental component are given in Table 1 [14]. As shown in Figure 1, 33 T-type thermocouples are used to measure the ferrofluid temperature at different locations, and two T-type thermocouples (T3 and T33) are used to measure the temperatures of the heating and cooling aluminum blocks. All thermocouples are connected to a data logger (GL840, GL820, GRAPHTEC) for continuous monitoring of the temperature data. To create the various magnetic field patterns, cylindrical neodymium magnets (N35, EMAGNET) with dimensions of 10 mm × 10 mm are used. The magnetic flux density of each magnet is evaluated as 440 mT using a Gauss meter (K-6333A, EXSO Co. Ltd., Korea) with an accuracy of ±5% [23,24]. The magnets in different patterns are attached to the lower side of the bottom acrylic plate. The heat dissipation characteristics of the ferrofluid are compared for three magnetic field patterns, namely the I, L, and T magnetic field patterns. The schematic representation of the no-magnet case and the I, L, and T magnetic field patterns is shown in Figure 2.
The thermal imaging camera TE-V1 (sensitivity: <50 mK, Thermal Expert) is installed at the center-bottom side of the experimental set-up to capture the heat flow distribution in the ferrofluid at regular time intervals. By comparing the experimental results of the no-magnet case and the I magnetic field pattern, the basic heat flow characteristics according to the thermomagnetic convection of the ferrofluid are confirmed. Through the L magnetic field pattern, it is observed whether the heat flow is transferred along the direction of this pattern at the corner where the pattern bends at a right angle, and finally, through the T magnetic field pattern, it is confirmed whether heat is dispersed in opposite directions along the magnetic field pattern during the heat flow. Furthermore, the L and T magnetic field patterns are formed to analyze the asymmetrical and symmetrical heat distribution within the ferrofluid. The experiments are conducted for a time duration of 30 min. The specifications of the measuring devices are presented in Table 2. The ferrofluid used in the experiment is HC50 (Taiho, Tokyo, Japan), and its thermal properties are shown in Table 3 [25]. The HC50 ferrofluid comprises a dispersion of Fe3O4 nanoparticles in kerosene as a base fluid. The appearance of this ferrofluid is black in liquid form and dark brown when it is dried. When an external magnetic field is applied to HC50 ferrofluid, the nanoparticles are attracted toward the location of the applied magnetic force.
Experimental Procedure and Uncertainty Analysis
The heat is generated at the heater block by enabling the 10 V DC supply to the heater. At the start of the experiment, the heat generated at the heater is dissipated to the aluminum block through conduction. As time passes, the lower surface of the heater block gets heated, which then transfers the heat to the ferrofluid through convection. In the case of the no-magnet experiment, the magnetic field is disabled; hence, heat dissipation through the ferrofluid occurs based on asymmetrical convection. In the case of the I magnetic field pattern, the magnets are arranged in a straight path connecting the heating and cooling blocks. The magnets are arranged asymmetrically by connecting the heating block and one side's right-angle corner in the case of the L magnetic field pattern, and symmetrically by connecting the heating block and both sides' right-angle corners in the case of the T magnetic field pattern. All magnetic field patterns enable the magnetic field, which results in heat flow in the ferrofluid based on thermomagnetic convection. The enabled magnetic field creates non-equilibrium magnetization, which results in ferrofluid circulation based on the temperature difference. As the low-temperature ferrofluid responds to the magnetic field and concentrates on the magnet, the high-temperature ferrofluid is transported along the magnet pattern provided on the acrylic plate. The experiments are conducted in a sequence of no-magnet, I, L, and T magnetic field patterns, respectively. The temperatures of the heating and cooling blocks, the ambient, and various locations in the ferrofluid are measured at regular time steps using thermocouples and a data logger. Furthermore, the heat distribution in the ferrofluid for the no-magnet case and the different magnetic field patterns is visualized by capturing thermal images at regular time steps. The measured temperatures and captured thermal images are compared for the no-magnet case and the various magnetic field patterns to analyze the heat flow characteristics of the ferrofluid. The uncertainty associated with any parameter results in a deviation between its measured value and its actual value. Uncertainties in the measured parameters are produced by the inaccuracies of the measuring devices and errors in the measurements. Therefore, an uncertainty analysis has been performed to ensure the accuracy and reliability of the experimental results in the present study. The concept of linearized fraction approximation, as presented by Equation (1), is used to calculate the uncertainty in the measured parameters [26]. The measured parameter in the present experiments is temperature. The accuracies of the thermocouple and data logger are ±0.1 °C and ±0.25%, respectively. The uncertainty in the measured temperature is evaluated as ±0.97% for the conducted experiments.
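As an illustrative sketch (not the authors' code), the linearized fraction approximation combines independent fractional errors in quadrature. The reference temperature used below to convert the thermocouple's absolute accuracy into a fractional one is an assumed value, so the printed figure will match the paper's ±0.97% only if the actual measured temperatures are substituted.

```python
import math

def combined_fractional_uncertainty(fractional_errors):
    # Root-sum-square (linearized) combination of independent fractional errors
    return math.sqrt(sum(e ** 2 for e in fractional_errors))

# Device accuracies quoted in the text
thermocouple_abs = 0.1   # +/-0.1 degC (absolute)
logger_frac = 0.0025     # +/-0.25% (fractional)

# Assumed representative reading; the paper's +/-0.97% presumably uses
# its own measured temperatures here instead.
reference_temp = 25.0    # degC, assumption

u = combined_fractional_uncertainty([thermocouple_abs / reference_temp, logger_frac])
print(f"temperature uncertainty ~ +/-{u * 100:.2f}%")
```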
Results & Discussion
Figure 3 shows a comparison of the heat dissipation characteristics of ferrofluid under the no-magnet and I magnetic field patterns at various time steps. In the case of no magnet, the heat cannot dissipate within the ferrofluid in any direction due to the absence of a thermomagnetic convection effect, which results in an increase in temperature near the heating block as time passes, as shown in Figure 3a. The heat accumulates near the heating block, and the maximum temperature results around this location at the end of the experiment. The ferrofluid exhibits convection similar to a non-magnetic particle-based nanofluid in an environment where no external magnetic field is applied, and the temperature distribution of the ferrofluid in the heating part remains circular without heat transfer in a specific direction, even after 30 minutes have elapsed from the start of the experiment [27]. In the case of the I magnetic field pattern, the heat dissipates in the direction of the magnetic field owing to the thermomagnetic convection effect, as shown in Figure 3b. At the start of the experiment, the heat is generated near the heating block, and as time passes it dissipates in a straight path to the cooling block because the magnetic field pattern is enabled along the I path connecting the heating and cooling blocks. After a time duration of 10 min, the heat dissipation from the heating block to the cooling block increases along the straight path. The thermomagnetic convection flow in the ferrofluid, set by the temperature difference and the direction of the external magnetic field, thus controls the direction of heat transfer [28].
Figure 4 depicts the ferrofluid temperatures at different locations along the path of the I magnetic field pattern over the experimental duration. The temperatures of the heating and cooling blocks are presented as the T3 and T33 curves, whereas the temperatures between these two blocks along the straight path are presented as T8, T13, T18, T23, and T28. The highest temperature of 65.3 °C is measured at location T3, which is the heating block temperature. The temperature gradually decreases as the heat transfers along the path of the magnetic field pattern. The heat moves away from the heating block and is collected at the cooling block, which results in a final cooling block temperature at location T33 of 30.9 °C. The final cooling block temperature is 5.9 °C higher than the ambient temperature, which confirms that the heat is transferred due to the presence of the I magnetic field pattern as a result of thermomagnetic convection. In the case of the intermediate locations, the temperature decreases in the order of T8, T13, T18, T23, and T28, respectively, because the distance of these locations from the heating block increases in the same order. The magnetization performance of ferrofluid is strongest at saturation magnetization and changes with temperature. The ferrofluid loses its magnetization characteristics permanently when it approaches the Curie temperature [12]. With the increase in temperature, the magnetization characteristic of the ferrofluid degrades; therefore, the heat dissipation along the path of the I magnetic field pattern decreases. This results in less variation in the ferrofluid temperatures at the heating and cooling blocks, as well as at all locations within the I magnetic field pattern, at the end of the experiment, when the temperature is high. The locations T2 and T4 are near the heating block, where the maximum temperature approaches 37.35 °C because, unlike in the no-magnet case, the heat does not accumulate around the heating block. The heat dissipates along the path of the I magnetic field pattern, which results in less heat accumulation around the heating block. The locations T32 and T34 around the cooling block show temperatures of 27.7 °C and 28.5 °C, respectively, which are higher than the ambient temperature. This indicates that the heat follows the magnetic field pattern path and heats up the cooling block, with heat also dissipating around the cooling block. The temperature of the ferrofluid does not deviate by more than 0.5 °C from the ambient temperature at all locations except the heating and cooling blocks and the locations within the I magnetic field pattern path.
The temperature variation of the heating block with time for the no-magnet and I magnetic field patterns is compared in Figure 5. At the end of the experiment, the temperature of the heating block is measured as 87.4 °C in the case of no magnet and as 65.3 °C in the case of the I magnetic field pattern. The I magnetic field pattern shows a 22.1 °C lower temperature for the heating block compared to the no-magnet case. This indicates that the presence of a magnetic field governs the heat flow along the path of the magnetic field pattern due to the thermomagnetic convection characteristic of the ferrofluid. At the start of the experiment, the temperature field is not significantly formed, so the temperature of the heating block is measured similarly in both the no-magnet and I magnetic field pattern cases. In the case of the no-magnet experiment, the temperature of the heating block continues to rise until the experiment is finished. In the case of the I magnetic field pattern experiment, as the temperature of the heating part rises and the thermomagnetic convection becomes active, the heating part is cooled by the convection of the ferrofluid [12]. The heat flow occurs along the pattern path, and the rate of increase in the temperature of the heating block decreases. Beyond 20 min, the temperature of the heating block is maintained at a steady-state point in the case of the I magnetic field pattern. Figure 6 shows the comparison of heat dissipation in ferrofluid for the L and T magnetic field patterns at various time steps over the experimental duration. As shown in Figure 6a, the heat follows the L magnetic field pattern path and is collected at the right-angle corner instead of the cooling block. Despite the temperature difference between the heating and cooling blocks, the heat is dissipated in the direction of the L magnetic field pattern due to the dominance of the thermomagnetic convection effect generated in the ferrofluid by the L magnetic field pattern. As time passes, the dissipation of heat from the heating block to the corner increases along the path of the L magnetic field pattern. The heat is dissipated toward both right-angle corners in the case of the T magnetic field pattern, as shown in Figure 6b. In this case too, the heat dissipated from the heating block is collected at both corners instead of the cooling block because the thermomagnetic convection generated by the T magnetic field pattern dominates over the temperature difference. The heat distribution increases as time passes, and a symmetrical heat distribution results along the T magnetic field pattern path at each time step. Figure 7 shows the comparison of the steady-state temperatures of the heating block, cooling block, and endpoint location of the magnetic field patterns. In the case of the I magnetic field pattern, the pattern path connects the heating and cooling blocks, which results in the transfer of heat from the heating block to the cooling block. However, in the case of the L and T magnetic field patterns, the pattern paths connect to the right-angle corners, which results in the transfer of heat from the heating block to these corners instead of the cooling block. Therefore, the cooling block temperature in the case of the I magnetic field pattern and the endpoint locations (the right-angle corners) in the case of the L and T magnetic field patterns are depicted in Figure 7.
Furthermore, the cooling block is provided with natural convection cooling and with a fan in the case of the I magnetic field pattern. In the case of the no-magnet experiment, the temperature of the heating block is 87.4 °C, which is the highest temperature compared to all magnetic field patterns. The temperature of the cooling block in the case of no magnet is 25.6 °C, which is the lowest temperature compared to the magnetic field patterns. This results in a larger temperature difference between the heating and cooling blocks compared to the considered magnetic field patterns and hence confirms that the heat is not dissipating in the case of the no-magnet experiment. In the case of the I magnetic field pattern without a fan at the cooling block, the temperatures of the heating and cooling blocks are 66.7 °C and 35.3 °C, respectively. However, the temperatures of the heating and cooling blocks are measured as 65.3 °C and 30.9 °C when the cooling block is provided with a fan. The temperatures of the heating and cooling blocks are lower by 1.4 °C and 4.4 °C, respectively, in the case of the cooling block with a fan compared to that without a fan. This indicates that the ferrofluid has transferred heat to the cooling part, and the cooling performance can be improved by increasing the heat transfer coefficient [14,15]. In the case of the L magnetic field pattern, the temperature at location T16 is presented as the endpoint location of the pattern. In the case of the T magnetic field pattern, the average temperature of locations T16 and T20 is considered as the endpoint location of the pattern. In the case of the L magnetic field pattern experiment, the temperatures of the heating block and the endpoint location of the pattern are measured as 67.1 °C and 41.1 °C, respectively, whereas in the case of the T magnetic field pattern they are measured as 65.8 °C and 36.4 °C, lower by 1.3 °C and 4.7 °C, respectively, compared to the L magnetic field pattern. This confirms that the heat dissipation is better in the case of the T magnetic field pattern compared to the L magnetic field pattern. All magnetic field patterns show lower heating block temperatures and higher temperatures for the cooling block and the endpoint location of the pattern compared to the heating and cooling blocks' temperatures in the case of the no-magnet experiment. These results confirm that, by using the thermomagnetic convection effect of ferrofluid, the direction of heat flow could be controlled under the influence of different magnetic field patterns, with no external work applied to control the heat dissipation direction [14][15][16][17][18][19]. This concept could be used to design a ferrofluid-based cooling system to dissipate the heat from high-flux-density power electronics in electric vehicles. The heat from power electronics devices could be distributed to cooling parts using these magnetic patterns by arranging the magnets in symmetrical and asymmetrical paths. Apart from the magnetic force imposed by the magnetic field, no external pumping power is required for such a cooling system. For example, the heat from high-power-density LEDs or inverters in electric vehicles could be absorbed by ferrofluid as a primary coolant and, by using effective magnetic patterns, dumped into the cooling part with a secondary coolant (air, water, or any other conventional working fluid) without using any pumping source. Instead of a conventional working fluid, ferrofluid with improved thermophysical properties could be used to dissipate the heat from high-flux-density devices, which could result in improved heat dissipation. Furthermore, despite the high viscosity and density of ferrofluid, this system does not require any external force for ferrofluid circulation, which could reduce the pumping power cost.
Conclusions
This experimental study was conducted to control the heat flow direction under the influence of various magnetic field patterns by utilizing the thermomagnetic convection characteristics of ferrofluid. The following key findings have been drawn from the conducted study.
1. The temperatures of the heating and cooling blocks are evaluated as 87.4 °C and 25.6 °C, respectively, in the case of the no-magnet experiment. This indicates that the heat is not dissipated within the ferrofluid and accumulates near the heating block due to the absence of thermomagnetic convection;
2. In the case of the I magnetic field pattern, the heat flows from the heating block to the cooling block along the path of the magnetic field pattern. The thermomagnetic convection of the ferrofluid drives the heat in the presence of the magnetic field, and the heat dissipation rate increases as time passes;
3. The temperatures of the heating and cooling blocks are measured as 66.7 °C and 35.3 °C for the I magnetic field pattern without a cooling fan and as 65.3 °C and 30.9 °C for the I magnetic field pattern with a cooling fan;
4. When the fan is used with the cooling block in the case of the I magnetic field pattern, the temperature of the heating block is lower by 22.1 °C compared to the heating block temperature in the case of no magnet. Furthermore, the heating and cooling blocks' temperatures are lower by 1.4 °C and 4.4 °C, respectively, for the cooling block with a fan compared to that without a fan, which indicates that the thermomagnetic convection is sensitive to the temperature difference;
5. The heat from the heating block flows to one side's right-angle corner in the case of the L magnetic field pattern and flows symmetrically to both sides' right-angle corners in the case of the T magnetic field pattern. In both cases, the heat dissipates along the respective paths of the magnetic field patterns;
6. The temperatures of the heating block and the endpoint of the pattern are measured as 67.1 °C and 41.1 °C, respectively, in the case of the L magnetic field pattern and as 65.8 °C and 36.4 °C, respectively, in the case of the T magnetic field pattern. In the case of the T magnetic field pattern, the temperatures of the heating block and the endpoint of the pattern are lower by 1.3 °C and 4.7 °C, respectively, compared to the L magnetic field pattern, indicating a superior heat dissipation performance;
7. The direction and path of heat flow could be controlled using the magnetization properties of ferrofluid by enabling thermomagnetic convection through various magnetic field patterns. This concept and results database could serve as guidelines to design a ferrofluid-based cooling system with heat dissipation direction control characteristics. Such a system could be used in electric vehicles to dissipate the heat from high-flux-density power electronics devices.
Figure 2. Schematic of no-magnet and various magnetic field patterns.
Figure 3. Heat distribution in ferrofluid with time for (a) no-magnet and (b) I magnetic field pattern.
Figure 4. Temperature of ferrofluid with time for locations in I magnetic field pattern.
Figure 5. Temperature of heating block for no-magnet and I magnetic field pattern.
Figure 6. Heat distribution in ferrofluid with time for (a) L magnetic field pattern and (b) T magnetic field pattern.
Figure 7. Temperatures of heating block, cooling block, and endpoint location for no-magnet and various other magnetic field patterns.
Table 2. Specifications of measuring devices.
"year": 2022,
"sha1": "3f761fb643294949d6ef819cf6acdd23a39fea99",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/14/5/1063/pdf?version=1653384891",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "44ca53b204b757355b0a7574417d78baf68b3836",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
Platelet to lymphocyte ratio: can it be an early economical mortality predictor of AKI patients?
Background: Acute kidney injury (AKI) affects over 13 million individuals annually worldwide, resulting in 1.7 million deaths. The potential long-term progression to chronic kidney disease (CKD) and renal failure, as well as the acute use of health care resources associated with AKI, impose enormous costs on society. The platelet-to-lymphocyte ratio (PLR) has emerged as a useful, economical marker for detecting changes in platelet and lymphocyte counts owing to acute inflammatory and prothrombotic states. This study aimed to determine the PLR in patients with AKI and evaluate in-hospital mortality. Results: The median PLR was compared between the non-survivor and survivor groups, and the non-survivor group had a significantly higher PLR (p < 0.001). For further subgroup analysis, the PLR was stratified into three groups: ≤ 100, 101–200, and > 200. Significantly more patients died in the PLR group 101–200 than in the PLR group ≤ 100, while all of the patients in the PLR group greater than 200 died. The group with a PLR > 200 had a higher SOFA score > 10 (p = 0.006), a lower eGFR (p = 0.001), higher platelet counts (p = 0.001), and higher serum creatinine (p = 0.001), BUN (p < 0.001), and procalcitonin levels (p = 0.007). In multivariate logistic regression analysis to predict the mortality outcome, PLR (OR 1.051; 95% CI, 1.016–1.087; p = 0.004) was identified as one of the significant indicators predicting AKI mortality. Other statistically significant indicators included SOFA scores (OR 2.789; 95% CI, 1.478–5.260; p = 0.002), procalcitonin levels (OR 0.898; 95% CI, 0.818–0.987; p = 0.025), and duration of hospital stay (OR 0.494; 95% CI, 0.276–0.886; p = 0.017). The ROC curve for the PLR yielded an area under the curve of 0.803 [95% CI, 0.720–0.886; p < 0.001], with the optimal cutoff value for the PLR to determine prognosis being 107.905, with a sensitivity of 82.5% and a specificity of 51.2%. Conclusion: PLR plays a significant role in the early, short-term prediction of prognosis (survival or death) for patients with AKI in the ICU.
Background
Acute kidney injury (AKI) affects over 13 million individuals annually worldwide, resulting in 1.7 million deaths. It may be diagnosed in up to 20% of hospitalised patients and 30-60% of critically ill patients. In intensive care units (ICUs), it frequently causes dysfunction of other organs, including the liver, brain, and lungs. Even moderate AKI is associated with a 50% increased mortality risk. The potential long-term progression to chronic kidney disease (CKD) and renal failure, as well as the acute use of health care resources associated with AKI, impose enormous costs on society [1].
In patients with critical illness, systemic inflammation plays a significant role in disease progression and is frequently associated with sepsis, resulting in an increased mortality risk. Along with morphological and functional alterations in vascular endothelial cells and tubular epithelium, inflammation is a crucial factor in the initiation and progression of AKI in patients. Leukocytes, including lymphocytes, infiltrate the injured kidneys and the entire body via the circulatory system, inducing the production of inflammatory mediators including cytokines and chemokines, which damage multiple organs, including the kidneys. Platelet antithrombotic actions can lead to atherogenesis via the release of proinflammatory cytokines, whereas platelet attachment to endothelial cells can cause leukocyte transmigration and adhesion, especially under shear stress [2].
Inflammation-related measures that are predictive of the onset of AKI include the platelet-to-lymphocyte ratio (PLR) and the neutrophil-to-lymphocyte ratio (NLR), which are based on total blood counts [3]. The PLR has emerged as a useful marker for detecting changes in platelet and lymphocyte counts owing to acute inflammatory and prothrombotic states. PLR shifts are useful for assessing the severity of systemic inflammation and predicting infections and other comorbidities, as demonstrated by a number of extensive observational studies [4].
A positive monotonic association between a high PLR and an unfavourable prognosis has been reported for diseases such as hypertension, hepatocellular carcinoma, and myocardial infarction. Building on these findings, it is plausible to hypothesise that the PLR may influence the prognosis of AKI. To date, however, very few epidemiological studies have investigated the prognostic impact of the PLR in AKI patients [2].
The main objective of this study was to determine the ratio of platelets to lymphocytes in patients with acute kidney injury and to evaluate the patients' hospital outcomes and its correlation with other parameters, under the assumption that the PLR may be significant in inflammatory states such as AKI.
Methods
It was a prospective hospital-based observational study conducted at the Intensive Care Unit of the Department of Medicine at Silchar Medical College & Hospital from 1 June 2021 to 31 May 2022. The diagnosis of AKI was based on the KDIGO-AKI criteria. The primary outcome of the investigation was the in-hospital mortality rate of AKI patients. The PLR was computed for each individual patient and correlated with in-hospital mortality. A consecutive sampling method was used for the selection of study participants. The study was approved by the institution's ethical committee board after its thorough review. Informed consent was obtained from all participants, and the confidentiality of the data of all patients has been maintained.
Out of 956 patients admitted to the ICU of the Department of Medicine, 100 consecutive patients met the inclusion and exclusion criteria and were included in the study during the specified time period. All patients aged 18 years or older who stayed for 48 h or longer were included in the study. Patients with a medical record of AKI before admission and those who underwent renal replacement therapy (RRT) on the day of or before their hospital admission were excluded; those who did not achieve serum creatinine levels below 4.0 mg/dL by the end of day 7 of their stay were classified as having ESRD, thus ruling out false-positive cases of chronic kidney disease rather than including them as non-recovered AKI. The patients' eGFR was computed using the 2021 CKD-EPI creatinine equation.
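For reference, a sketch of the 2021 CKD-EPI creatinine equation (race-free) as commonly published is given below; the constants are quoted from memory of the published formula and should be checked against the original reference before any clinical use.

```python
def egfr_ckd_epi_2021(scr_mg_dl, age_years, is_female):
    """2021 CKD-EPI creatinine equation (race-free), mL/min/1.73 m^2."""
    kappa = 0.7 if is_female else 0.9
    alpha = -0.241 if is_female else -0.302
    ratio = scr_mg_dl / kappa
    egfr = (142.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.200
            * 0.9938 ** age_years)
    if is_female:
        egfr *= 1.012
    return egfr

# Example: a 50-year-old man with serum creatinine 1.2 mg/dL -> ~74
print(round(egfr_ckd_epi_2021(1.2, 50, False), 1))
```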
Microsoft Excel 2013 and IBM SPSS 20.0 were utilized for statistical analysis. Continuous variables were expressed as mean (SD) or median (IQR), as appropriate for parametric and non-parametric data, respectively. Student's t-test and analysis of variance were used for parametric data, and the Mann-Whitney U or Kruskal-Wallis test was used for non-parametric data, as appropriate. Categorical data were expressed as proportions and compared using the chi-square or Fisher exact test as appropriate. A p-value of < 0.05 was deemed statistically significant, and the appropriate tests for significance were applied based on whether the data were normally distributed; if the study population did not exhibit a normal distribution, non-parametric tests were performed. Multivariate logistic regression analysis was performed for PLR, MV, eGFR, serum procalcitonin levels, SOFA score, haemodialysis, and duration of hospital stay. The receiver operating characteristic curve was also plotted for PLR against other indicators such as NLR and the BUN-creatinine ratio.
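The cutoff-selection step reported later corresponds to a standard Youden-index calculation. The sketch below uses synthetic data (not the study's patient records) purely to illustrate the procedure with scikit-learn: compute PLR from platelet and absolute lymphocyte counts, build the ROC curve against mortality, and pick the threshold maximising sensitivity + specificity - 1.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 100
platelets = rng.normal(250e3, 60e3, n)                 # cells/uL, synthetic
lymphocytes = rng.normal(1.8e3, 0.6e3, n).clip(200)    # cells/uL, synthetic
plr = platelets / lymphocytes
died = (plr + rng.normal(0, 40, n)) > 150              # synthetic outcome

fpr, tpr, thresholds = roc_curve(died, plr)
auc = roc_auc_score(died, plr)
best = np.argmax(tpr - fpr)                            # Youden's J
print(f"AUROC={auc:.3f}, cutoff={thresholds[best]:.1f}, "
      f"sensitivity={tpr[best]:.1%}, specificity={1 - fpr[best]:.1%}")
```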
Results
In our investigation, the mean age was 50.57 (16.87) years. Sixty-five percent of the 100 patients were male, while the remaining 35% were female (male-to-female ratio = 1.85:1). Table 1 displays the other baseline characteristics of the AKI patients. The proportions of patients in stages 1, 2, and 3 of AKI were 65%, 13%, and 22%, respectively. Pre-renal and intrinsic-renal insults were the leading causes of AKI, accounting for 45% and 55% of cases, respectively. Non-survivors tended to be younger and had a higher prevalence of diabetes, while the prevalence of hypertension and cardiac disease was comparable. The non-survivors required RRT and vasopressors at significantly higher rates than the survivors. In addition, they had higher SOFA scores, reduced eGFR levels, higher serum bilirubin levels, and shorter hospital stays. Notably, neither the prevalence of hypertension, cardiovascular disease, or diabetes nor the WBC and platelet counts differed significantly between the two groups of patients.
As the PLR in our study population was not normally distributed, the median PLR was compared between the non-survivor and survivor groups, and the non-survivor group was found to have a significantly higher PLR (p < 0.001).
After performing logistic regression, we plotted the ROC curve for the PLR and obtained an area under the receiver operating characteristic curve (AUROC) of 0.803 [95% CI, 0.720-0.886; p < 0.001], with the optimal cutoff value for the PLR to determine prognosis being 107.905, with a sensitivity of 82.5% and a specificity of 51.2%. The ROC curve is illustrated in Fig. 1.
Discussion
Our study revealed a correlation between a high PLR and mortality, with the PLR serving as an early predictor of mortality in AKI patients admitted to the ICU. In a massive cohort of cancer patients, Proctor et al. discovered a correlation between the PLR and overall survival. Using a similar PLR criterion to our study, they demonstrated a positive correlation between PLR and mortality (PLR < 150, HR 1; PLR 150-300, HR 1.19, P < 0.001; PLR > 300, HR 1.71, P < 0.001) [5].
In contrast to our findings, Zheng CF et al. demonstrated a U-shaped correlation between the PLR and 30-day and 90-day mortality. Both low and high PLRs were associated with elevated mortality rates [2].
Shen Y et al. demonstrated that the OR for PLRs > 200 was statistically significant (OR 1.0002; 95% CI, 1.00001 to 1.0004) following adjustment for covariates such as the SOFA score, with higher mortality [6]. In our study also, the association between PLR > 200 and mortality was statistically significant (OR = 1.051; 95% CI = 1.016-1.087; p = 0.004). But in contrast to their study, our study found a statistically significant association between PLR > 200 and a higher SOFA score > 10 (p = 0.006).
Chen Y et al. showed the prognostic value of the PLR for patients with septic AKI, with an optimal cutoff value of 120, a sensitivity of 70.7%, and a specificity of 65.4% [7]. Meanwhile, in our study, we found a comparatively lower cutoff value of 107.905, along with a better sensitivity of 82.5% but a lower specificity of 51.2%.
Yaprak et al. evaluated the correlation between the PLR and mortality in a small cohort of patients with end-stage kidney disease and demonstrated that the PLR could independently predict all-cause mortality in this population. This disparity is primarily due to the insufficient number of patients with low PLRs [8]. AKI and CKD contribute to local and systemic inflammation. In addition, numerous observational studies have reported elevated levels of inflammatory mediators including blood cells, endothelial cell components, platelets, lymphocytes, macrophages, mast cells, and fibroblasts, as well as negative outcomes for these conditions [9]. According to Yanfei Shen et al., the PLR was associated with a higher risk of mortality in sepsis patients, consistent with our observation of higher SOFA scores accompanying higher PLRs [6].
Balta et al. demonstrated that in ESRD, the PLR predicts inflammation more accurately than the neutrophil-to-lymphocyte ratio. On the basis of the relationship between PLR-related inflammation and disease severity, we hypothesised that extremely elevated PLRs may predict the same adverse outcomes as other inflammatory biomarkers in AKI populations as well [10].
In addition, Kweon et al. examined median PLRs in a healthy Korean population and proposed that PLR cutoff values for illness assessment be individually determined based on age [11]. However, in our study, age did not differ statistically significantly across the PLR categories.
PLR is a strong predictive factor in pancreatic cancer patients, according to a previous study by Smith et al. In the current study, it was similarly proposed that the PLR could be helpful for predicting the early progression to septic AKI [12].
However, AKI in the ICU is associated with a high mortality rate; it appears that other factors also contribute to poor outcomes.For instance, blood pressure, renal function, urine output, and additional clinical indicators may all influence the outcome of AKI.
Nevertheless, the strengths and limitations of the study were as follows. PLR can be useful for predicting the progression of AKI. The study found a significant association between a higher PLR (> 200) and a higher SOFA score (> 10), thus identifying morbid individuals at an early stage. It can be a useful marker to anticipate the probable outcome of the patients, as there was a significant correlation between PLR and mortality. The study yielded a lower cutoff value of PLR, increasing its sensitivity to identify at-risk patients beforehand. Since this was a single-center study, differing conclusions could be drawn if patient data from other institutions were included. Therefore, subject selection bias cannot be ignored, necessitating prospective multicenter research. Due to a lack of data on kidney function prior to 3 months before patient arrival, we were unable to investigate the prevalence of CKD among patients with AKI or determine the significance of CKD in relation to the PLR and mortality. Patients cannot be evaluated for PLR until they are admitted to the ICU. In addition, a single PLR measurement does not completely reflect inflammation, which is best evaluated by assessing additional inflammatory mediators simultaneously or by subsequent repeat measurements. Preliminary findings suggest that the PLR could be a risk adjustment instrument with implications for AKI prognosis. To establish PLR as a predictive marker, researchers must validate its clinical utility. In statistical studies, the cutoff value must be determined in one patient cohort and evaluated in another, and the number of patients in each cohort must be taken into account. Due to a lack of pertinent data, we did not analyse the effect of sepsis and shock, both of which may worsen patient morbidity and predict more substantial mortality among patients with AKI, on the relationship between PLR and outcomes.
Conclusion
All of the aforementioned evidence demonstrates that the PLR plays a significant role in the early, short-term prediction of prognosis (survival or death) for patients with AKI in the ICU. Despite the drawbacks, the evidence suggests that PLR can provide valuable information to clinicians who encounter multisystem manifestations of acute kidney injury, which are reflected by changes in platelet, lymphocyte, neutrophil, or monocyte counts. Interpretation of the PLR in conjunction with complementary hematologic indices is recommended for more accurate prediction of related comorbidities, and it can be used as an early, potentially valuable, and cost-effective clinical marker.
Abbreviations: PLR, platelet-to-lymphocyte ratio; NLR, neutrophil-to-lymphocyte ratio.
Table 1. Baseline information of patients with AKI at day 1 of ICU admission. Numerals in bold denote statistical significance. Abbreviations: PR, pulse rate; MV, mechanical ventilation; MAP, mean arterial pressure; SOFA, Sequential Organ Failure Assessment; RRT, renal replacement therapy; NLR, neutrophil-to-lymphocyte ratio; eGFR, estimated glomerular filtration rate; WBC, white blood cell count; PLT, platelet count; ALC, absolute lymphocyte count; BUN, blood urea nitrogen; NT-proBNP, N-terminal pro-brain natriuretic peptide.
Table 2. Baseline information of variables in patients with AKI across different groups of PLR on day 1 of ICU admission. For normally distributed variables, the mean (standard deviation) is given; for non-normally distributed variables, the median (interquartile range) is given. Numerals in bold denote statistical significance. Abbreviations: PR, pulse rate; MV, mechanical ventilation; MAP, mean arterial pressure; SOFA, Sequential Organ Failure Assessment; RRT, renal replacement therapy; NLR, neutrophil-to-lymphocyte ratio; eGFR, estimated glomerular filtration rate; WBC, white blood cell count; PLT, platelet count; ALC, absolute lymphocyte count; BUN, blood urea nitrogen; NT-proBNP, N-terminal pro-brain natriuretic peptide.
Table 3. Multivariate logistic regression analysis for PLR to predict AKI mortality in ICU patients. Numerals in bold denote statistical significance. Abbreviations: PLR, platelet-to-lymphocyte ratio; SOFA, Sequential Organ Failure Assessment; PCT, procalcitonin; MV, mechanical ventilation; eGFR, estimated glomerular filtration rate.
Fig. 1. Illustration of the ROC curve.
"year": 2024,
"sha1": "d97e8414390d3a92dc261256957f62ecc08c81a4",
"oa_license": "CCBY",
"oa_url": "https://ejim.springeropen.com/counter/pdf/10.1186/s43162-023-00267-4",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "51f22b106334333a0e5277e6553e32a930ac338b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Statistical analysis of Galactic globular cluster type properties
The analysis of pseudo-colour diagrams, the so-called chromosome maps, of Galactic globular clusters (GCs) makes it possible to classify them into type I and type II clusters. Type II GCs are characterized by an above-average complexity of their chromosome maps, and some of them are known to display star-to-star variations of slow neutron-capture reaction elements, including iron. This is at the basis of the hypothesis that type II GCs may have an extragalactic origin and were subsequently accreted by the Milky Way. We performed a Principal Component Analysis to explore possible correlations among various GC parameters in the light of this new classification. The analysis revealed that cluster type correlates mainly with relative age. The cause of this relation was further investigated, finding that more metal-rich type II clusters also appear to be younger and more distant from the Galactic centre. A depletion of type II clusters for positive values of the Galactic coordinate Z was also observed, with no type II clusters detected above Z ~ 2 kpc. Type II cluster orbits also have larger eccentricities than type I ones.
INTRODUCTION
Seen as a single system, the Galactic globular clusters (GCs) were classically used to establish the non-preferential position of the Sun with respect to our Galaxy (Shapley 1918). Being the oldest astronomical objects for which reliable ages can be estimated, they were also used to put strong constraints on cosmological theories (see e.g. Marín-Franch et al. 2009; Charbonnel 2016 and references therein). In the Milky Way (MW) they populate a spheroid that extends out to various tens of kiloparsecs from the center of our Galaxy. A multitude of studies have focused on them in order to find hints on MW formation processes (Zinn 1985; 2001segc.book..223H).
For the first time, Zinn (1985) provided evidence that pointed to the existence of distinct sub-populations of GCs inside the MW. In his work, a halo population was detected in the Galaxy along with a disk population. GCs associated with the halo population are characterized by lower metallicity and a small net rotation around the Galactic center. On the other hand, the disk population is more metal rich and has been found to have a velocity close to that of the local standard of rest. At that time, this was used to promote the view that at least part of the MW GCs have extragalactic origins and were later incorporated into the MW via merging processes.
Since then, an increasing number of refinements in the number and components of distinct GC populations in the MW have been proposed. Ibata et al. (1995) discovered the Sagittarius dwarf galaxy. Its chemical and kinematical patterns were detected over a large area of the sky, supporting the idea that this satellite of the MW is subject to tidal stripping. The authors also suggested that some MW GCs were originally part of this galaxy. The existence of other nearby, stripped dwarf galaxies was proposed, along with a list of GCs likely associated with them (see, for example, Marín-Franch et al. 2009, hereafter MF09, and references therein).
More recently, in the first decade of this century, the advent of deep, high-resolution photometry of Galactic GCs led to the detection of distinct and detached sequences at different stellar evolutionary stages in the Color-Magnitude Diagrams (CMDs) of these objects. This result, coupled with a growing body of spectroscopic evidence, was interpreted in terms of multiple stellar populations (MPs; Gratton et al. 2012; Bastian & Lardo 2018). It is worth mentioning that distinct sequences of stars that appear detached in the CMD put strong constraints on models of the star formation processes in GCs. This evidence, in fact, rules out a continuous star formation process and suggests that bursts of star formation occurred in these systems (hereafter Paper I; but see also Bastian et al. 2013 and Hopkins 2014).
At first, the MPs phenomenon was detected in the CMDs of single GCs. But with the completion of a large photometric survey, namely the Hubble Space Telescope (HST) UV Legacy Survey of Galactic Globular Clusters (PI: Piotto, Paper I), it has been confirmed that this phenomenon is a common characteristic of virtually all Galactic GCs (Paper I; Milone et al. 2017, hereafter Paper IX).
Even more interesting is that, with the introduction of the chromosome maps (Paper IX), the MW GC population has been once again subdivided. These two-color diagrams, in fact, fully exploit the potential of the UV observations taken in the context of the HST UV Legacy Survey of Galactic GCs. Specifically, they maximize the color difference between the various stellar populations inside each GC, optimizing their detection and characterization (Paper I; Paper IX). Indeed, in Paper IX it was shown that the chromosome maps of the majority of the observed GCs show the presence of two major distinct groups of stars (the 1G and 2G populations). Samples of stars taken from these two groups display chemical abundance differences in light elements, for example the well-known Na-O anticorrelation. These clusters were named type I clusters.
The remaining GCs show more complex chromosome maps: other groups of stars can be detected, in addition to the two main groups present in type I clusters, all with their own Na-O anticorrelation. These additional groups of stars also have different Ba and Fe abundances, and more in general a different abundance of elements produced via slow neutron-capture reactions (Paper IX). In this respect, it is worth mentioning that the presence of small intrinsic iron spreads, at least for some GCs, is still debated, as these can be artificially introduced by the method used to derive the atmospheric parameters of stars (Mucciarelli et al. 2015; Lardo et al. 2016; but see also Lee 2016). Nardiello et al. (2018) have reclassified NGC 7078 as type II; we thus added it to the list of type II GCs.
In this work we performed a principal component analysis (PCA) in order to investigate the correlation of cluster type with other GC parameters. Noteworthy previous applications of this technique to GCs are by Djorgovski & Meylan (1994), who identified a correlation between cluster luminosities and concentration, and by Recio-Blanco et al. (2006), who examined the effect of various parameters on the morphology of the horizontal branch, revealing the influence of cluster mass and thus providing hints of self-enrichment in Galactic globular clusters. Another example is Carretta et al. (2010), who studied the effect of the detailed chemical composition of the distinct stellar populations in GCs. The structure of this paper is as follows: in Section 2 we present the details of the PCA and briefly discuss the results. In Section 3 possible relations among various cluster parameters are presented in more detail. In particular, we reviewed the age-metallicity and age-Galactocentric distance relations (Section 3.1); the metallicity vs. Galactocentric distance relation (Section 3.2); the relation between orbital parameters and cluster type (Section 3.3); the mass distribution (Section 3.4); and finally the spatial distribution of GCs (Section 3.5).
Table 1. GC properties considered for the PCA. They will be identified in the following with the identification number provided in the first column. In the third column we report the sources from where values were taken. References are: i, Paper IX; ii, Catelan (2009); iii, Zorotovic et al. (2010); iv, Sollima et al. (2010); v, van den Bergh (2011); vi, Kunder et al. (2013); vii, Carretta et al. (2009); viii, MF09; ix, Dinescu et al. (1999); x, Gnedin & Ostriker (1997); xi, Harris (1996, 2010 edition).
2 VARIABLES OF THE PROBLEM AND PRINCIPAL COMPONENT ANALYSIS
We investigated a large sample of variables through principal component analysis. In particular, we focus on 11 quantities, reported in Table 1: cluster type, Oosterhoff type, metallicity, relative age, orbit total energy, orbit total angular momentum, orbit eccentricity, orbit inclination with respect to the Galactic plane, cluster masses, core radii and tidal radii. This sample of GC properties was chosen in order to have a complete view of the relations affecting the cluster type. In particular, these quantities were collected from several catalogues that provide information for different GC samples. Our final sample consists of 25 GCs, 7 of them of type II and the rest of type I. The 7 type II clusters are NGC 362, NGC 1851, NGC 5139, NGC 6656, NGC 6934, NGC 7078, and NGC 7089. The full list of parameters of our sample is given in Table 2.
The resulting principal component values, along with the fraction of total variance explained by each component, are listed in Table 3. The first four components explain respectively 26%, 19%, 16% and 13% of the total variance of the sample, for a total of 74%. Table 4 lists the correlation coefficients between the principal components and the 11 quantities considered in the analysis.
Table 2. The considered sample of Galactic GCs. For each cluster the values of the parameters used in the PCA are given. Labels in the first row identify the parameters, according to the designations given in Table 1. They are respectively: cluster type (1), Oosterhoff type (2), metallicity (3), relative age (4), total energy of the orbit (5), total angular momentum (6), orbit eccentricity (7), orbit inclination with respect to the Galactic plane (8), cluster mass (9), core radius (10), tidal radius (11).
[Table 2 header row: parameters 1-11, with total energy in units of 10² km² s⁻² and angular momentum in kpc km s⁻¹; the table body did not survive extraction.]
Figure 1 shows the eigenvector projections of the 11 quantities considered in the planes defined by the first and the second components (upper-left panel), the first and the third components (lower panel) and the third and the second components (upper-right panel).
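To make the procedure concrete, the following is a minimal sketch of the kind of PCA described in this section, assuming the 25 × 11 parameter matrix of Table 2 is available; the input array is a random stand-in, not the paper's actual data or code.

```python
# Minimal PCA sketch for 25 clusters x 11 parameters.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.normal(size=(25, 11))  # stand-in: replace with the Table 2 values

# Quantities have very different units (e.g. 10^2 km^2 s^-2 vs kpc),
# so the PCA must be run on z-scored variables.
z = StandardScaler().fit_transform(data)

pca = PCA()
scores = pca.fit_transform(z)

# Fraction of total variance explained by each component (Table 3 analogue);
# the paper reports ~26%, 19%, 16%, 13% for the first four components.
print(pca.explained_variance_ratio_[:4])

# Correlation of each original quantity with each component (Table 4
# analogue): for standardized inputs this is eigenvector_jk * sqrt(lambda_k).
corr = pca.components_.T * np.sqrt(pca.explained_variance_)
print(np.round(corr[:, :4], 2))
```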
The first component correlates mainly with orbital parameters like total energy, total angular momentum and inclination (5, 6 and 8 respectively); it also correlates with tidal radius (11), which points to a connection between this parameter and the orbital ones. To a lesser extent it also correlates with Oosterhoff type (2) and metallicity (3), confirming the well-known relation between metallicity and Oosterhoff type.
The second component shows correlations with relative age (4), Oosterhoff type (2), orbit total energy (5) and eccentricity (7). A mild correlation between this component and cluster type (1) can also be observed. In particular, the alignment of the eigenvectors associated with cluster type and relative age in the upper-left panel of Figure 1 calls for further investigation of its origin. We discuss this result in the following section.
The third component mainly correlates with GC mass (9) and cluster type (1), pointing to a correlation between these two parameters.
The fourth component mainly correlates with the orbit eccentricity and with the metallicity. This relation was discussed in Dinescu et al. (1999). This component also mildly correlates with Oosterhoff type and core radius (2 and 10 respectively), both in the same sense as eccentricity.
Since the first component mainly correlates with orbital parameters and tidal radius, we conclude that it describes general properties of the whole GC sample. But our interest is to investigate how the new classification into type I and type II clusters affects the sample, so we concentrated on the second and third components, which have the strongest correlations with cluster type. The relations between these components and the other parameters are shown in the upper-right panel of Figure 1. A possible relation between cluster type (1) and relative age (4) can be observed: the two associated vectors point in opposite directions and are almost parallel.
3 RELATIONS STUDIED
The PCA performed in the previous section highlighted some possible correlations between various GC properties. In particular, we are interested in studying those that involve cluster type. For this reason, we examined some of them in more detail, this time without the use of PCA, but quantifying the statistical significance of each one. Our choice is also motivated by the fact that the sample of clusters we could use in the PCA is more limited than the entire sample used in this paper, because some of the parameters used in the PCA are not available for all clusters.
For clarity, we report in Table 5 the Spearman rank coefficients (r_S) of the various correlations we investigated. In the first column we report the coefficients for type I clusters.
The coefficients for type II ones are in the second column and, in the last column, we provide r_S for the sample of type II clusters excluding NGC 6388. This cluster is among the type II clusters that are not present in the sample used for the PCA in Section 2, because we could not find some of the parameters used in the analysis. Its metallicity and position in the Galaxy make it rather peculiar among this group of GCs (see also Paper IX).
3.1 Age-metallicity relation
The strong anti-correlation found between cluster type and age deserves further investigation. We thus considered, for this purpose, the sample of relative ages and metallicities of MF09.
One of the main results of MF09 is that the old population of globular clusters shows a small relative age dispersion of about 0.05, with no detectable trend of age with metallicity. On the other hand, young clusters show a trend with metallicity, the more metal-rich clusters being younger. This trend can be clearly seen in Figure 10 of MF09, which we have reproduced in Figure 2, highlighting the type I and type II clusters present in their sample with filled black dots and filled red symbols, respectively. In particular, we marked the position of NGC 6388 with a red star. Of the 64 GCs considered in MF09, 45 are of type I while 11 are of type II. Interestingly enough, all type II clusters except NGC 6388 show a clear age-metallicity relation.
MF09 noted that the age-metallicity trend of young clusters is followed by GCs believed to be associated with accreted dwarf galaxies, namely Sagittarius (Ibata et al. 1995; Dinescu et al. 2000; Bellazzini et al. 2003), Monoceros (Crane et al. 2003; Frinchaboy et al. 2004) and the stellar overdensity in Canis Major (Martin et al. 2004). Figure 2 shows that NGC 6388 is the only type II cluster that does not follow the age-metallicity trend traced by the others. It may be worth noting that NGC 6388 is also the type II cluster with the smallest distance to the Galactic center and the highest metallicity. It is the only type II cluster that can be strictly ascribed to the bulge population. Apart from NGC 6388, all the other type II clusters seem to follow a well defined trend. In particular, for the age-metallicity relation, the r_S calculated for all type II clusters is −0.71. The probability of randomly extracting eleven points from the sample of Paper IX with an equally strong or stronger anticorrelation (i.e. r_S ≤ −0.71) is ∼4.5%. If NGC 6388 is removed from the sample of type II clusters, r_S = −0.96, and the probability of randomly extracting 10 clusters with r_S ≤ −0.96 is less than 0.1%.
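As an illustration of the statistic being quoted, here is a minimal sketch of the Spearman coefficient and the random-extraction probability, with stand-in arrays in place of the MF09 ages and metallicities; all variable names and values are ours, not the paper's.

```python
# Spearman r_S for type II clusters plus a Monte Carlo estimate of the
# probability of drawing 11 random clusters with an equally strong or
# stronger anticorrelation. Stand-in data replaces the MF09 values.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
age = rng.uniform(0.75, 1.05, size=64)   # stand-in relative ages
feh = rng.uniform(-2.3, 0.0, size=64)    # stand-in metallicities
is_type2 = np.zeros(64, dtype=bool)
is_type2[rng.choice(64, size=11, replace=False)] = True

r_obs, _ = spearmanr(age[is_type2], feh[is_type2])  # paper reports -0.71

n_trials, hits = 100_000, 0
for _ in range(n_trials):
    idx = rng.choice(64, size=11, replace=False)
    r, _ = spearmanr(age[idx], feh[idx])
    if r <= r_obs:
        hits += 1
print(r_obs, hits / n_trials)  # paper reports ~4.5% for r_S <= -0.71
```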
On the other hand, the small r_S calculated for type I clusters (0.08) is consistent with the hypothesis that all GCs of the old population (as defined by MF09) are preferentially type I clusters. With a mean relative age of 0.97 (median 0.99 and standard deviation of 0.08), they are slightly biased towards older ages than type II GCs, which have a mean relative age of 0.89 and standard deviation of 0.08. This result possibly explains the relation between age and cluster type highlighted by the PCA.
The lower panel of Figure 2 is even more interesting. Here we plot the relative age against the Galactocentric distance (r_GC). MF09 noted an increase in age dispersion with distance for the young group of GCs. Figure 2 suggests that this trend may be, at least in part, the consequence of the previously observed age-metallicity relations for these GCs.
Figure 2 (caption, partial): data are taken from Table 1 of MF09. The displayed metallicities are in the Carretta & Gratton (1997) scale, while distances from the Galactic center come from Harris (1996, 2010 edition). Empty circles represent GCs for which cluster type has not been determined in Paper IX, black points type I clusters and red dots type II ones. The red star is associated with NGC 6388. In both panels, the regression lines are shown in black and red for type I and type II GCs respectively.
Focusing on type II GCs (red dots), what Figure 2 shows is an age-r_GC relation for these clusters, rather than an increase in age spread. We calculated r_S for both type I and type II clusters in the plane defined by age and r_GC and found r_S = −0.32 for type I clusters and r_S = −0.64 for type II ones (see Table 5). The probability of obtaining the same or stronger correlation (r_S ≤ −0.64) by randomly extracting eleven points from the combined sample of type I and type II clusters of MF09 is ∼17%.
To further analyze the possible age-r_GC relation found for type II clusters, we also considered each single cartesian Galactocentric coordinate (X, Y, Z), defined in the usual way and measured in kiloparsec. We studied the correlation between age and |X − 8|, |Y| and |Z| respectively for each sample, in analogy to what has been done before. We report the results in Table 5. For type II clusters we find r_S = −0.63 for X, r_S = −0.40 for Y and r_S = −0.75 for Z. It thus appears that the possible age-r_GC relation for type II clusters is more significant for the Z Galactic coordinate.
Table 6. Oosterhoff types for type II clusters. Galactocentric distances are also reported. The reference legend is: 1, Catelan (2009); 2, Zorotovic et al. (2010); 3, Sollima et al. (2010); 4, van den Bergh (2011); 5, Kunder et al. (2013); 6, Sollima et al. (2014).
3.2 [Fe/H] vs Galactocentric distance for the type I and type II clusters
In this section we explore the distribution of values in the metallicity-r_GC plane, shown in Figure 3. What is interesting is the relatively low scatter of the halo type II GCs in this plane compared with the large dispersion of type I ones. In order to better quantify this result, we considered the sample of type I GCs with [Fe/H] < −1 (35 type I GCs satisfy this condition). For the [Fe/H] vs. Galactocentric distance relation, we obtain r_S = −0.39 for them and r_S = 0.61 for type II GCs with [Fe/H] < −1.
The probability of obtaining a sample that displays an equal or higher |r_S| by randomly extracting ten points from the combined samples of type I and type II GCs with [Fe/H] ≤ −1 (45 GCs in total) is less than 1.5%.
Such a level of correlation might indicate the existence of a trend for the majority of type II GCs, with metal-rich ones located, nowadays, at larger r_GC than those with lower [Fe/H] values. Specifically, the slope of the regression line (solid red line in Figure 3) is 0.033 ± 0.005. For type I GCs with [Fe/H] ≤ −1, the slope of the regression line is instead −0.037 ± 0.001. In order to further explore this possibility, we investigated the spatial distribution of the Oosterhoff type of these clusters. We recall that this quantity correlates with the metallicity of GCs (Arp 1955). In Table 6 we have listed the Oosterhoff type as well as the Galactocentric distance for type II clusters. The resulting distribution seems to support the presence of such a trend.
As previously done for the age-r_GC relation, we studied the possible relations existing between metallicity and each single Galactic cartesian coordinate. The r_S calculated in each case are reported in Table 5. Type II GCs with [Fe/H] ≤ −1 show higher values of correlation (r_S ≈ 0.70) in the X and Z coordinates than in Y (r_S = 0.27).
3.3 Orbital parameters and cluster type
Figure 4 shows the orbit inclination distributions (left panel) and the orbital eccentricity cumulative distribution (right panel). Data are from Dinescu et al. (1999). While the distribution of the orbit inclinations for type II clusters (in red) is flat, these GCs appear to have preferentially high-eccentricity orbits. This selection effect is particularly interesting, also in the light of the evidence provided in Section 2. The PCA, in fact, indicates a possible correlation between cluster type and orbit eccentricity.
3.4 Cluster mass vs. cluster type
The PCA indicates that 16% of the variability in the considered sample is explained by the third principal component. This component mainly correlates with cluster mass (see the lower-left panel of Figure 1). This is also evident in Figure 5, which shows that type II clusters (in red) have, on average, higher masses than type I clusters.
3.5 Spatial distribution of cluster types
In Section 3.1 and Section 3.2 we noted that possible relations involving age or metallicity and the spatial distribution of type II GCs may exist. In particular, the r_S values indicate that both relations are somewhat stronger in the Z coordinate.
The spatial distributions in the Galactic coordinates X, Y and Z are shown in Figure 6. The black histograms show type I GCs while the red histograms show type II ones. It can be noted that, in X and Y, the two samples seem to be equally distributed around 0. In the Z coordinate, instead, type II GCs seem to be preferentially located at negative values, in contrast with the apparent symmetry of the type I distribution. Specifically, no type II GCs are found above Z ∼ 2 kpc. Calculating the median of each sample and the associated dispersion, we measured the separation between the two distributions in each projection. In units of σ, the two medians are separated by 0.17 in the X coordinate, 0.26 in Y and 0.81 in Z. We then calculated the probability of obtaining such values of separation between medians by randomly extracting 11 points from the joint sample of type I and type II GCs. The resulting probabilities are ∼65% for X, ∼30% for Y and <0.3% for Z, which points to a likely bias in the Z values of the type II sample.
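The median-separation test can be sketched as follows; since the text does not spell out which dispersion enters the σ normalization, the joint-sample standard deviation used here is an assumption, and all data are stand-ins.

```python
# Median separation (in units of sigma) between type I and type II Z values,
# with a Monte Carlo estimate of how often 11 randomly drawn clusters show
# an equal or larger separation. Stand-in data only.
import numpy as np

rng = np.random.default_rng(0)
z_type1 = rng.normal(0.0, 3.0, size=53)   # stand-in type I Z values (kpc)
z_type2 = rng.normal(-1.5, 2.0, size=11)  # stand-in type II Z values (kpc)

def sep_in_sigma(a, b, sigma):
    return abs(np.median(a) - np.median(b)) / sigma

joint = np.concatenate([z_type1, z_type2])
sigma = joint.std()  # assumed convention for the sigma normalization
obs = sep_in_sigma(z_type1, z_type2, sigma)

n_trials, hits = 100_000, 0
for _ in range(n_trials):
    pick = rng.permutation(len(joint))
    if sep_in_sigma(joint[pick[11:]], joint[pick[:11]], sigma) >= obs:
        hits += 1
print(obs, hits / n_trials)  # paper reports < 0.3% for the Z coordinate
```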
4 SUMMARY AND CONCLUSION
Chromosome maps (Milone et al. 2015, Paper IX) showed that virtually all Galactic GCs host multiple stellar populations. Chromosome maps also allowed us to identify a family of Galactic GCs that shows an above-average complexity in their stellar populations. These clusters were classified as type II clusters, the remaining ones being labelled as type I clusters (Paper IX).
In this paper we studied if and how cluster type correlates with GC properties. We performed a principal component analysis based on 11 quantities: cluster type, Oosterhoff type, metallicity, relative age, orbit total energy, orbit total angular momentum, orbit eccentricity, orbit inclination with respect to the Galactic plane, cluster masses, core radii and tidal radii. The main sources of variance in the considered sample turn out to be the orbital parameters and, to a lesser extent, the relation between cluster type and relative age. Below we summarize the main results.
− In the age-metallicity plane, type II clusters define a clear trend, with more metal-rich clusters being younger. NGC 6388 is an exception, but it is also the only type II cluster located in the bulge.
− There are hints of a possible relation between age and Galactocentric distance for type II GCs, with younger clusters being located outwards.
− In the [Fe/H]-r_GC plane, halo type II clusters show a trend, with more metal-rich clusters being located outwards. This trend is consistent with the radial distribution of their Oosterhoff type and apparently inconsistent with the trend found for type I GCs.
[Figure caption residue: orbit eccentricity cumulative distribution (values from Dinescu et al. 1999; red histogram, type II GCs; black, type I clusters; gray line, total sample). Right panel: mass cumulative distribution, same color-code; masses from Gnedin & Ostriker (1997).]
− The orbits of type II clusters are more eccentric, on average, than those of type I clusters. A large dispersion of the orbital inclinations with respect to the Galactic plane is also observed and is similar for both groups.
− Type II clusters are, on average, more massive than type I ones. There are, in any case, type I clusters with masses comparable to those of type II clusters.
− Nowadays, type II GCs are preferentially located below the plane of the Galaxy. Specifically, no type II GC has been observed above Z ∼ 2 kpc. This suggests two further investigations: (i) it will be of primary interest to search for additional type II clusters, especially at Z > 0 kpc; (ii) it would be quite informative to perform detailed simulations of the kinematics of type II GCs.
For the first point, we suggest tracing the path towards a photometric survey aimed at enlarging the sample of Galactic GCs for which chromosome maps can be used to discern their type. Extending the survey to the full GC sample of the MW will improve the statistical significance of the present results.
For the second point, we find it of particular interest to establish, in the light of this new classification, whether some remnants of a common dynamical pattern can be detected among the known type II clusters. In fact, the high incidence of negative values of Z for type II GCs could indicate a common extragalactic origin for these systems. | 2020-03-31T01:00:57.907Z | 2020-03-28T00:00:00.000 | {
"year": 2020,
"sha1": "b4bde38b49dd71513a3812d44a09536a716c44e9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2003.12762",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b4bde38b49dd71513a3812d44a09536a716c44e9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
212629843 | pes2o/s2orc | v3-fos-license | More than half of hypoxemia cases occurred during the recovery period after completion of esophagogastroduodenoscopy with planned moderate sedation
Guidelines advise precautionary measures for possible adverse events that may occur due to sedation during endoscopic procedures. To avoid complications, intraprocedural and postprocedural monitoring during recovery is considered important. However, since few studies have reported on hypoxemia during the recovery period, findings for specific monitoring methods are insufficient. The aim of this retrospective study was to determine the incidence of hypoxemia during the recovery period using continuous central monitoring by pulse oximetry and to characterize the hypoxemia cases. Among the 4065 consecutive esophagogastroduodenoscopy (EGD) procedures under planned moderate sedation, 84 (2.1%) procedures developed unexpected hypoxemia (SpO2 ≤ 90%). Hypoxemia was observed during the procedure, at the end of the procedure, and during the recovery period in 21, 17, and 46 (1.1%) procedures, respectively. More than half of the hypoxemia cases occurred during the recovery period. Many hypoxemia cases involved neither serious co-morbid illness nor low body mass index, both of which have been reported as risk factors for hypoxemia. The lack of risk factors is no guarantee that hypoxemia will not occur. Therefore, continuous monitoring by pulse oximetry is all the more important during the recovery period and is recommended in all EGD procedures under planned moderate sedation.
Since the total number of endoscopic procedures included many that were conducted without using sedatives or analgesics, the frequency of adverse events among procedures conducted using sedatives is unknown. The aim of this study was to determine the incidence of adverse events during the recovery period and to assess the effectiveness of continuous monitoring by conducting a retrospective analysis of the data.
Results
Of the 7332 EGD procedures conducted between April 1, 2015 and December 31, 2016, there were 4065 consecutive outpatient EGD procedures conducted under sedation in 2890 unique patients (Table 1). The age range of the patients for the 4065 EGD procedures was between 15 and 92 years (median 57 years), and the ratio of men to women was 2483 to 1582. As for the reason for seeking consultation, 1688 EGD procedures were conducted due to some kind of symptom or disorder, and 2377 EGD procedures were conducted as part of the medical check-up systems established in Japan, where the incidence of gastric cancer is high, as both population-based and opportunistic screening. In terms of the sedative used, diazepam and midazolam were used in 4049 (99.6%) and 16 (0.4%) EGD procedures, respectively, and pentazocine was added in eight procedures (0.2%) (Table 2).
Of these procedures, there were 84 (2.1%) procedures in 72 unique patients (2.5%) that developed unexpected hypoxemia (SpO2 of ≤90%) between the intraprocedural and recovery periods (Table 1). The age of the patients at the time of these 84 hypoxemia cases ranged between 38 and 88 years (median 71.5 years). The ratio of men to women for the procedures was 1:1. Eastern Cooperative Oncology Group Performance Status (ECOG-PS) Grades were 0/1/2/3/4 in 33/40/10/1/0 patients, respectively. The mean height was 157.9 cm (135.6 to 186.7 cm), mean body weight was 64.1 kg (32.0 to 97.7 kg), and mean body mass index (BMI) was 25.6 (14.1 to 35.9) kgm−2 (data were confirmed for 74 of the 84 hypoxemia cases). The numbers of patients with co-morbid illnesses that could affect cardiorespiratory dynamics or the metabolism of the sedatives were as follows: 11 respiratory disease patients (chronic respiratory failure, chronic obstructive pulmonary disease, asthma, interstitial pneumonia, and pulmonary emphysema); five cardiovascular disease patients (ischemic heart disease); five kidney disease patients (renal dysfunction, chronic kidney disease, and chronic kidney failure on maintenance hemodialysis); and 24 liver disease patients (viral chronic hepatitis, autoimmune hepatitis, non-alcoholic fatty liver disease, alcoholic liver disease, and compensated and uncompensated liver cirrhosis); 48 of the 84 hypoxemia cases had none of these illnesses. The American Society of Anesthesiologists Physical Status (ASA-PS) Classes were I/II/III/IV/V in 28/43/13/0/0 procedures, respectively (Table 3). The sedative used in 82 of the 84 hypoxemia cases was diazepam [mean dose 6.30 mg; men 6.95 (5 to 7.5) mg, women 5.93 (2.5 to 7.5) mg], and two procedures used midazolam [women only, mean dose 5.00 (4.0 to 6.0) mg]. None of the cases received analgesics such as pentazocine. The mean duration of the EGD procedure was 7 minutes and 41 seconds (5 to 35 min).
Hypoxemia developed intraprocedurally, at the end of the procedure, and during the recovery period in 21, 17, and 46 procedures, respectively (Fig. 1). More than half of the cases occurred during the recovery period, accounting for 1.1% of the 4065 EGD procedures that were performed under sedation. Among the 46 procedures that developed hypoxemia during the recovery period, the mean time at which hypoxemia occurred was 12 minutes and 53 seconds after the end of the procedure. Thirty-nine cases (84.8%) occurred within 20 minutes (17, 15, 5, and 2 cases occurred between 1 to 5 minutes, 6 to 10 minutes, 11 to 15 minutes, and 16 to 20 minutes, respectively), whereas seven cases (15.2%) occurred between 21 and 60 minutes (Fig. 2). The mean recovery period was 1 hour, 9 minutes, and 45 seconds (minimum 40 minutes and maximum 2 hours and 20 minutes) (Table 4). In Japan, an annual examination is not uncommon. When the scope of examination of the EGD records at our institution was expanded to July 31, 2017, the same sedative at the same dose had been administered again in 41 of the 84 procedures that developed hypoxemia. Of these, the records showed that hypoxemia did not develop in 29 unique patients (Table 1).
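The tabulation above can be reproduced with a short script; the onset times below are stand-ins generated to match the reported bin counts, not the actual patient data.

```python
# Bin recovery-period hypoxemia onset times (minutes after the procedure)
# and report incidences; stand-in data matching the reported counts.
import numpy as np

rng = np.random.default_rng(0)
onset_min = np.concatenate([
    rng.uniform(lo, hi, size=n)
    for lo, hi, n in [(1, 5, 17), (6, 10, 15), (11, 15, 5),
                      (16, 20, 2), (21, 60, 7)]
])

counts, _ = np.histogram(onset_min, bins=[0, 5, 10, 15, 20, 60])
print(dict(zip(["1-5", "6-10", "11-15", "16-20", "21-60"], counts)))
print("within 20 min: {:.1%}".format(counts[:4].sum() / len(onset_min)))  # ~84.8%
print("recovery-period incidence: {:.1%}".format(46 / 4065))              # ~1.1%
```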
Oxygen was administered in all hypoxemia cases. Of the 46 hypoxemia cases that occurred during the recovery period, none complained of coughing up sputum or needed sputum suction during recovery. There were no serious adverse events that required unplanned tracheal intubation or advanced life support (cardiopulmonary resuscitation in cardiac arrest and related conditions). There were also no cases that required hospitalization or resulted in subsequent complications. There were also no cases that required flumazenil, an antagonist agent. In all hypoxemia cases, hypoxemia was detected intraprocedurally, at the end of the procedure, or during the recovery period by an audible alarm that was triggered due to a drop in SpO2, and not through visual observation of apnea by healthcare personnel (Table 4). There were no cases of tachycardia or bradycardia that required treatment intervention.
Discussion
The effectiveness of sedation during endoscopic procedures is recognized, and sedation is widely used according to the guidelines in the USA, Europe, and Japan. However, there have been reports of adverse events related to sedation, and therefore, countermeasures are needed. A survey of adverse events in Japan [7] found that, of the 472 adverse events related to preparations such as sedation, pharyngeal anesthesia, and intestinal irrigation, 219 (46.4%) were related to sedatives and analgesics. Of the 175 events recorded in the case records, the most frequently mentioned events were respiratory depression and arrest (99 events), followed by hypoxemia (22 events). Four of these adverse events resulted in death. Based on the total of 17,087,111 endoscopic procedures examined in the above study, the frequency of preparation-associated adverse events was 0.0028%. However, since the total number of endoscopic procedures included many that were conducted without using sedatives or analgesics, the frequency of adverse events among procedures conducted using sedatives is unknown. In the present study, of the 4065 EGD procedures that were conducted under planned moderate sedation, 84 developed hypoxemia over the entire process from the intraprocedural period to the recovery period, accounting for a high proportion, at 2.1%.
Three-quarters of the 84 procedures that developed hypoxemia did so either at the end of the procedure or during the recovery period. Endoscopists might be surprised at this observation. Coughing and aspiration upon insertion of the endoscope during a procedure may at times induce hypoxemia; however, these events are temporary. In addition, the stimulation caused by insertion of the endoscope promotes arousal and makes respiratory depression less likely, especially under moderate sedation. On the other hand, removal of the endoscope may provide relief from stimulation, deepening the sedation and facilitating the development of hypoxemia after the end of the procedure. Being aware of the timing of onset of hypoxemia is important in managing respiratory depression during EGD procedures under moderate sedation. During the procedure, experienced healthcare personnel such as physicians and nurses are by the patient's side, and they can monitor the patients during non-complicated procedures such as EGD. However, during the recovery period, if monitoring equipment is not used, patient observation may be intermittent even if designated healthcare personnel are allocated to monitor the patient in recovery. Hypoxemia due to respiratory depression may go undetected between observations until a critical situation such as respiratory arrest occurs. The sedative used in most procedures in the present study was diazepam. Many randomized, controlled trials (RCTs) [8-12] have demonstrated that there is no difference in side effects such as cardiorespiratory depression between diazepam and midazolam when used during EGD procedures. When the recovery times of diazepam and midazolam were compared, midazolam was found to have a faster recovery [10]. However, since another study [11] found that diazepam has a faster recovery if recovery occurs within 1 hour, there is not yet a unified view. It is a mistake to assume that the frequency and trends of hypoxemia cases during the recovery period in this study can be specifically attributed to diazepam. Rather, one can say that it is a common phenomenon associated with benzodiazepines, including midazolam, and requires preventive measures against adverse effects.
There are differences in recovery monitoring methods among the USA, European, and Japanese guidelines. The JGES guidelines in 2015 [3] and the ASGE guidelines in 2003 [1] do not specify the details of how to conduct the monitoring. The ESGE guidelines in 2008 [2] specify that "Close monitoring of the patient by qualified personnel should be continued, irrespective of the substance used, and using a pulse oximeter if thought desirable, until the patient has completely recovered." Even though the use is restrictive, the guidelines recommend monitoring with a pulse oximeter. However, the cited literature is itself part of another set of guidelines [13], and so the evidence is poor. The updated version of the ASGE guidelines in 2014 [5] states that minimal monitoring requirements include electronic assessment of pulse oximetry combined with visual monitoring of the patient's level of consciousness and discomfort. However, these guidelines state that the monitoring should be performed at regular intervals rather than continuously during the procedure, during initial recovery, and just before discharge. On the other hand, the ASGE training guidelines [14] outline in detail the monitoring method for propofol when used for deep sedation as follows: "Continuous monitoring of pulse oximetry, vital signs, respiratory function, and consciousness is appropriate and should be documented at regular intervals until patients have returned to or approached their baseline status." A PubMed search using the keywords "endoscopy", "sedation", and "hypoxemia (or hypoxia) in recovery" retrieved 47 studies, and there was only one study [15] that reported using a pulse oximeter during the recovery period. This study involved 30 patients aged 60 years or older who underwent ERCP using midazolam. The patients were monitored before, during, and for 2 hours after the procedure using pulse oximeters to examine the changes in oxygen saturation over time. According to this study, patients were most hypoxic in the first 30 minutes after the procedure; however, this was an RCT with a small sample size that was separated into two groups, with one group receiving flumazenil and the other group receiving normal saline, and it was not a study of hypoxemia cases. As noted above, monitoring methods during the recovery period differ between the different guidelines, and this is most likely due to the lack of evidence. While the data used in the present study were retrospective in nature, they are highly significant as real-world data due to the large sample size. It is important to emphasize that this study targeted non-propofol cases in which a moderate level and not a deep level of sedation was planned. Even in cases where moderate sedation is planned using non-propofol sedatives, continuous monitoring is needed during the recovery period, considering the frequency and onset timing of respiratory depression in this period.
Regarding the prognosis of the hypoxemia cases, there were no serious adverse events that required advanced life support or hospitalization under monitoring using pulse oximetry without standard periodic rounds by dedicated healthcare personnel in the recovery period. A review [16] of specific monitoring methods conducted by the American Gastroenterological Association (AGA) in 2008 states that "Measurement of oxygen saturation is supplemental to clinical observation of the patient." The review gave the following reasons for this recommendation: (i) measurement of oxygen saturation is relatively insensitive to the earliest signs of hypoventilation; and (ii) an adequate signal cannot be detected during hypothermia, low cardiac output, and motion (e.g. tremor). It also mentioned that the ability of oximetry to reduce the incidence of cardiopulmonary complications remains unproven. While monitoring with pulse oximetry may be a complementary requirement, as noted in the review, it is not a sufficient requirement. However, it is important to note that a large number of procedures (4065) were monitored using pulse oximetry during the recovery period in the present study, and in the 84 cases that developed hypoxemia, measures were taken before serious adverse events requiring care such as unplanned tracheal intubation or advanced life support occurred. In addition, the onset of hypoxemia in all 84 cases was identified by the audible alarm detecting a decline in SpO2. The subjects of this study were outpatients who underwent EGD with planned moderate sedation, and they were a group of patients with relatively good performance status. Even the 84 cases that developed hypoxemia were all categorized as ASA-PS Class III or lower, with the majority (84.5%) in Classes I and II. If the patient undergoing an EGD procedure with planned moderate sedation is categorized as ASA-PS Class II or lower, monitoring by pulse oximetry may sufficiently meet the conditions for preventing possible serious adverse events. For cases that are ASA-PS Class III or higher, careful consideration must be given not only to the monitoring methods during the recovery period, but also to the actual procedure itself and the use of sedatives.
Two studies [17,18] examined the relationships between ASA-PS classification and adverse events related to endoscopic procedures, and both studies found that the risk increases as the ASA-PS class increases. In the present study, 71 (84.5%) of the 84 patients that developed hypoxemia were ASA-PS Classes I and II, indicating that the majority of cases did not have serious co-morbid illnesses. The numbers of cases with individual co-morbid illnesses that could affect the cardiorespiratory dynamics or drug metabolism of the sedatives were as follows: 11 respiratory disease patients; five cardiovascular disease patients; five kidney disease patients; and 24 liver disease patients. It must also be emphasized that there were 48 patients (57.1%) that did not have any of the above-mentioned illnesses (Table 3). A BMI < 18.5 kgm−2 is also considered to be a risk factor [18]; however, 74 of the 84 examined patients had a mean BMI of 25.6 (14.1 to 35.9) kgm−2, and there were only three patients with a BMI < 18.5 kgm−2. Like emaciation, obesity is thought to pose a major perioperative airway challenge in association with obstructive sleep apnea syndrome, but only 11 patients had a BMI > 30 kgm−2. According to reports to date, ASA-PS Classes III and IV and BMI < 18.5 kgm−2 were considered risk factors related to the onset of hypoxemia. However, they are not definitive factors that can predict the onset of hypoxemia. Therefore, stratifying patients according to these factors as a preventive measure for hypoxemia does not ensure safety. These factors should be regarded more as useful factors in determining the adequacy of sedative use. ECOG-PS assesses performance status, and 73 (86.9%) of the 84 patients were categorized as Grades 0 and 1. The performance status of the cases that developed hypoxemia was by no means poor. There is a need to fully recognize that the majority of the cases that developed hypoxemia had good performance status and no risk factors such as co-morbid illnesses, ASA-PS Class III/IV, or low body weight. Furthermore, when the scope of the examination of the 84 cases that showed onset of hypoxemia was expanded to include EGD records at our institution up to July 31, 2017, the records showed that, of the 41 patients that had been administered the same type and dose of sedative, 29 did not develop hypoxemia (Table 1). Once again, this fact highlights the difficulty in predicting the onset of hypoxemia from patient background characteristics. The one case that developed hypoxemia after a long postprocedural gap (60 minutes after the end of the EGD procedure) was categorized as ASA-PS Class I without any co-morbid illness, and the BMI was 27.8 kgm−2. There was also a record of this case being given the same type and dose of sedative without developing hypoxemia. When the 84 cases that developed hypoxemia were divided into three groups depending on the onset of hypoxemia (intraprocedural, at the end of the procedure, and during recovery), and one-way analysis of variance was performed to determine whether there were significant differences in age, BMI, ASA-PS, and ECOG-PS, the data did not show differences among the parameters (P = 0.098, 0.098, 0.350, and 0.083, respectively). In other words, it is difficult to predict the timing of onset of hypoxemia based on these patient background parameters.
Several limitations of the present study must be acknowledged. First, this was a retrospective, cross-sectional study. Second, since cases were included in the analysis from a single institution, selection bias may have affected the results, especially in regards to patient characteristics and sedative choice. Third, since records of EGD procedures under sedation were retrospectively extracted from the electronic medical records, missing height and body weight data in some cases were an inevitable limitation.
In conclusion, more than half of the hypoxemia cases during EGD procedures with planned moderate sedation occurred during the recovery period, and the frequency of occurrence (1.1%) was high. A lack of risk factors such as serious co-morbid illness, low body mass index, and a history of sedative-related hypoxemia is no guarantee that hypoxemia will not occur. Therefore, continuous monitoring of oxygen saturation by pulse oximetry with an audible alarm is all the more important during the recovery period and is recommended in all procedures.
Methods
This retrospective study was approved by the institutional review board of Saiseikai Kanazawa Hospital (H28-25) and conducted in accordance with the ethical standards described in the latest revision of the Declaration of Helsinki. Informed consent for participation was obtained in the form of an opt-out in-hospital notice.
For endoscopic procedures under planned moderate sedation, our institution has long conducted intraprocedural monitoring using pulse oximetry with an audible alarm to monitor SpO2 and pulse rate. However, since April 1, 2015, in addition to intraprocedural monitoring, SpO2 and pulse rate have been continuously monitored through a postprocedural central monitoring system even after the patients are transferred to recovery beds. This system ensures that immediate action can be taken if patients develop hypoxemia, bradycardia, or tachycardia. When hypoxemia is seen, oxygen is immediately administered upon the instructions of the attending physician or endoscopist.
All outpatient cases that underwent EGD procedures with sedation at our institution between April 1, 2015 and December 31, 2016 were included in the analysis. Hospitalized patients were excluded because their postprocedural follow-up took place when they returned to their hospital beds. Records of EGD procedures under sedation were retrospectively extracted from the electronic medical records and were evaluated based on age, sex, and sedatives and analgesics used. From these records, the cases that developed hypoxemia were extracted. Patients who had been receiving oxygen due to conditions such as respiratory failure before the procedures were excluded. As patient background characteristics, age, sex, height, body weight, ECOG-PS, and co-morbid illnesses (respiratory, cardiovascular, kidney, and liver diseases) were extracted. All co-morbid illnesses were assessed based on the ASA-PS classification system. The following parameters were also examined: type and dose of sedatives and analgesics used; the starting and ending times of EGD procedures; starting and ending times of oxygen administration; dose and method of oxygen administration; end time of recovery care; treatments other than oxygen supplementation, such as antagonist agents like flumazenil and tracheal intubation; and outcomes (remission, shift in treatment such as hospitalization, death, etc.). As for hypoxemia cases, EGD procedures conducted between January 1, 2017 and July 31, 2017 were also examined.
Moderate sedation as defined by the ASA [4] was planned for EGD, and actual practices were carried out in compliance with the guidelines issued by the JSA [19]. According to these guidelines, midazolam should ideally be administered under the supervision of an anesthesiologist. As it mandates an intravenous (IV) line, midazolam is not considered a first-line agent at our institution. Therefore, in principle, diazepam is used for EGD procedures with planned moderate sedation without an IV line, and the standard doses for men and women are 7.5 mg and 6.5 mg, respectively. These standard doses correspond with the initial dose of 5-10 mg indicated in the Multisociety Sedation Curriculum for Gastrointestinal Endoscopy (MSCGE) in 2012 [20]. It is intravenously administered by a nurse under the supervision of an endoscopist who is not an anesthesiologist. Sedatives are withheld or the dose is reduced in elderly patients, extremely emaciated patients, patients with liver, kidney, or respiratory failure, and in hemodynamically unstable patients. In patients who experienced extreme pain in previous procedures, the dose of diazepam is increased or midazolam is used, and in addition to sedatives, pentazocine, an analgesic, may be administered. When midazolam is used, an IV line needs to be secured, and the initial diluted dose is slowly given intravenously. Additional doses are administered until moderate sedation is achieved. When using sedatives, arrangements to secure the airway of patients in preparation for potential respiratory arrest are always in place, and flumazenil is readily available. EGD procedures are performed by two people, the endoscopist and an assisting nurse. We do not follow the guidelines established by the ESGE in 2008 [2], which state that one person in addition to the endoscopist and the assisting nurse, who is not involved in the intervention, must be present. However, all patients are monitored for oxygen saturation and pulse rate using pulse oximetry with an audible alarm during and at the end of the procedure [MUE200 (Olympus Medical Systems, Tokyo, Japan), Vismo (Nihon Kohden, Tokyo, Japan), or Moneo BP-88 (COLIN, Tokyo, Japan)]. When the endoscopic procedure is completed, patients are transferred to a recovery bed placed in a curtained-off area inside the endoscopy room for monitoring. Continuous central monitoring during the recovery period is conducted with five Nellcor N-BSJP monitors and a SAT-MeSSAGE wireless module (Covidien Japan, Tokyo, Japan). The audible alarms are set to be triggered intraprocedurally and postprocedurally, including during the recovery period, when SpO2 drops below 90% or when pulse rates rise above 100 bpm or fall below 50 bpm. There are no dedicated healthcare personnel to monitor the recovering patients. While patients who develop hypoxemia and high-risk patients are appropriately observed by the healthcare personnel in the endoscopy room, there are no standard periodic rounds or monitoring conducted at regular intervals for all cases. When an alarm indicating SpO2 < 90% or pulse irregularities sounds, a nurse immediately assesses the condition of the patient.
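For clarity, the alarm conditions described above reduce to a simple threshold check; the sketch below is only an illustration of that logic, not the monitors' actual firmware.

```python
# Illustration of the alarm thresholds: SpO2 < 90%, pulse > 100 bpm,
# or pulse < 50 bpm triggers the audible alarm.
def alarm_triggered(spo2: float, pulse_bpm: float) -> bool:
    return spo2 < 90 or pulse_bpm > 100 or pulse_bpm < 50

assert alarm_triggered(89, 72)      # hypoxemia
assert alarm_triggered(97, 104)     # tachycardia
assert alarm_triggered(97, 48)      # bradycardia
assert not alarm_triggered(96, 68)  # within limits
```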
When respiratory depression due to oversedation is observed, the nurse prompts the patient to wake up through stimuli such as calling the patient's name and shaking the body, and places the patient in a lateral position to relieve airway obstruction caused by glossoptosis. In addition, if SpO2 does not increase, or soon decreases again, the nurse treats the event as hypoxemia by administering oxygen after informing the endoscopist or the attending physician, and reinforces monitoring. SpO2 reductions due to sensor disengagement or temporary coughing are excluded by this observation.
Cases that developed hypoxemia were divided into three groups depending on the onset of hypoxemia (intraprocedural, at the end of the procedure, and during recovery), and one-way analysis of variance was performed to determine the predictive factors associated with the timing of the onset of hypoxemia.
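A minimal sketch of the one-way analysis of variance described above, assuming the per-group BMI values are available; the numbers here are random stand-ins, so the printed P value will not reproduce the reported 0.098.

```python
# One-way ANOVA of BMI across the three hypoxemia-onset groups
# (intraprocedural, end of procedure, recovery). Stand-in data only.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
bmi_intra = rng.normal(25.6, 3.5, size=21)
bmi_end = rng.normal(25.6, 3.5, size=17)
bmi_recovery = rng.normal(25.6, 3.5, size=46)

f_stat, p_value = f_oneway(bmi_intra, bmi_end, bmi_recovery)
print(f_stat, p_value)  # paper reports P = 0.098 for BMI (no group difference)
```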
Data availability
The datasets generated and/or analyzed during the current study are not publicly available because they include personal information, but they are partially available from the corresponding author on reasonable request. | 2020-03-09T14:04:24.943Z | 2020-03-09T00:00:00.000 | {
"year": 2020,
"sha1": "2242ca21e686437e511626bbc52bfa131e7c480c",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-61120-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ecc20cf1fc2311a500d2121eb5f8d64c39af17da",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253546466 | pes2o/s2orc | v3-fos-license | Digital Media Exposure and Health Beliefs Influencing Influenza Vaccination Intentions: An Empirical Research in China
The purpose of this study was to investigate whether and how digital media exposure influences people's intention to receive influenza vaccination. Through an anonymous online survey, we collected data on Chinese people's exposure to influenza and influenza vaccine information on digital media platforms and their attitudes toward influenza vaccines (N = 600). The structural equation model analysis results strongly support the research hypotheses and the proposed model. The findings reveal three major themes: (1) digital media exposure significantly influences the perceived susceptibility to and severity of influenza. (2) Digital media exposure helps people understand the vaccine's benefits, reduces the perceived barriers to vaccination, and ultimately improves the intention to vaccinate. (3) Users who receive cues to action from digital media tend to have more positive vaccination intentions. These findings explain how digital media exposure influences influenza vaccination intention and may provide insights into vaccine promotion efforts across countries. The research shows that digital media exposure contributes to getting vaccinated against influenza.
Introduction
Influenza is a common respiratory disease that can spread from person to person and affects people of all ages, creating a significant global disease burden [1]. Prevention and control measures, especially influenza vaccination, can effectively prevent the impact of influenza on people's health [2]. In China, influenza vaccination rates are low, except in some cities where the local government funds influenza vaccination programs, because the vaccine is not included in the national immunization program and people need to pay for it themselves [3,4]. However, China has a large population base. Considering the importance of the influenza vaccine in preventing influenza, how to increase the intention to receive influenza vaccination has become a vital issue affecting people's health [5].
Exposure to information about vaccines affects people's willingness to vaccinate [6-8], as has been demonstrated in research on willingness to receive the HPV and COVID-19 vaccines [9,10]. Although information exposure on social media can increase public knowledge about the HPV and COVID-19 vaccines [11], exposure to negative information can also distort people's risk perceptions and lead to vaccine hesitancy, thus hindering vaccination [7,12-14]. The influenza vaccine is no exception, and existing studies have come to different conclusions about how exposure affects people's willingness to get an influenza vaccination. Some studies have found that people with more knowledge about influenza and its vaccination are more likely to get the flu vaccine [15-17]. In contrast, others have come to the opposite conclusion, suggesting that exposure to social media messages about immunization side effects, for example, increases vaccine hesitancy [18].
With the popularity of the Internet in China, the way people obtain information has changed dramatically, and digital media such as smartphones have become an important channel for people to get health-related information [10,19]. Digital media are gaining popularity not only among young people but also among the elderly [20]. Because of the critical influence of media messages on influenza vaccination, it is necessary to understand the impact of digital media exposure on people's willingness to receive influenza vaccination. China's most commonly used digital media platforms include Sina Weibo, Douban, QQ, Tieba, Zhihu, etc. [21].
The three-stage model of health promotion proposed by Street theoretically explains how media information affects people's health behaviors. Specifically, the information that people are exposed to through various information sources shapes their attitudes towards health issues, influencing their health behaviors and outcomes [22]. However, people's health decisions are influenced by psychological factors such as their beliefs, attitudes, and intentions [23,24]. The Health Belief Model (HBM) contains five constructs: perceived susceptibility, perceived severity, perceived benefits, perceived barriers, and cues to action. It can be used to study people's beliefs, attitudes, and other psychological factors in the decision-making process of influenza vaccination [25]. Furthermore, related studies have shown that perceived susceptibility, perceived severity, perceived benefits, and cues to action in the HBM positively influence influenza vaccination intentions, while perceived barriers negatively influence the intention to vaccinate [1,6,26]. Therefore, based on the three-stage model, this study explores the influence process of "digital media information exposure → attitudes toward influenza vaccination → intention to vaccinate." In the attitude stage, the variables reflecting health beliefs in the HBM are introduced as mediators. Thus, a theoretical model is constructed to investigate the influence of the Chinese public's digital media exposure on influenza vaccination intentions.
Based on the theoretical model, this study proposes the following five hypotheses:
H1. Digital media exposure positively affects perceived susceptibility and further positively influences vaccination intention mediated by perceived susceptibility.
H2. Digital media exposure positively affects perceived severity and further positively influences vaccination intention mediated by perceived severity.
H3. Digital media exposure positively affects perceived benefits and further positively influences vaccination intentions mediated by perceived benefits.
H4. Digital media exposure negatively affects perceived barriers and negatively influences vaccination intention mediated by perceived barriers.
H5. Digital media exposure positively affects cues to action and positively influences vaccination intentions mediated by cues to action.
Research and Design
This study investigates whether the Chinese public's exposure to digital media information can influence the intention to vaccinate against influenza through the mediating variable of health beliefs. Therefore, this study builds a structural equation model to test the validity of the above hypotheses and to further explore the mechanism by which digital media information exposure influences Chinese people's intention to vaccinate against influenza.
Data Collection
This study obtained research samples through an online questionnaire survey. The questionnaire included basic demographic questions, the respondents' digital media exposure, their perceptions of the influenza vaccine (benefits, barriers, etc.) formed through digital media exposure, and their intention to receive influenza vaccination. More than 800 questionnaires were initially collected, and invalid questionnaires were eliminated through reliability tests. Subsequently, based on the age data from the 2010 official Chinese census [27], the questionnaires were sampled according to the age of the respondents so that the age distribution of the questionnaires matched the actual age distribution of the Chinese population, and 600 valid questionnaires were finally obtained. The gender and age distribution of the valid questionnaire sample was relatively even.
More than half of the respondents had received a college education or above (375, 62.50%), 86 had only received a high school education (14.33%), 45 had only received a primary/junior high education (7.50%), and 94 had vocational school as their highest education (15.67%). The questionnaire's content did not involve medical or ethical issues; the respondents were informed, and their consent was obtained before the beginning of the questionnaire.
Data Analysis
SPSS 25.0 was used to perform the reliability test of the questionnaire [28], and AMOS 24.0 was used to build and validate the structural equation model [29]. According to hypotheses H1-H5, 7 latent variables are set in this paper's model (Table 1), and the research hypotheses are tested by analyzing the causal relationships between the latent variables. The selection of latent variables and the hypothesized relationships between them mainly refer to the Health Belief Model [30] and related studies. The seven latent variables are digital media exposure, perceived susceptibility, perceived severity, perceived benefits, perceived barriers, cues to action, and influenza vaccination intention. Each latent variable was measured by 2-6 observed variables, expressed as specific question items in the questionnaire. The model was evaluated, and hypotheses were tested, by analyzing the factor loadings (coefficients between observed and latent variables), path coefficients (coefficients between latent variables), and goodness of fit (coefficients evaluating the overall fit of the model). A code sketch of this model specification is given after Table 1.
Table 1. Constructs and items included in the questionnaire.
Digital media exposure:
- I have been exposed to information about the influenza vaccine on news websites/apps (Toutiao, Tengxun News, People's Daily, etc.). [21]
- I have been exposed to information about the influenza vaccine on SNS websites/apps (WeChat, Weibo, Xiaohongshu, etc.). [21]
- I have been exposed to information about the influenza vaccine on online community websites/apps (Douban, Zhihu, Tianya, etc.). [21]
- I have been exposed to information about the influenza vaccine on online video websites/short video apps (Bilibili, Douyin, Kuaishou, etc.). [21]

Perceived susceptibility:
- I am worried about catching the flu in the fall and winter. [1]
- I think I am more likely to get the flu than other people. [26]

Perceived severity:
- The flu is a severe illness to me. [1]
- If I get the flu, it will seriously affect my daily life, work, or study. [26]
- If I get the flu accidentally, it will be a health threat to my whole family. [1]

Perceived benefits:
- Vaccination is the most effective way to prevent influenza. [26]
- Vaccination can alleviate symptoms after infection, even if it cannot wholly prevent infection with the influenza virus. [6]
- Vaccination can avoid the possible loss of work, time, energy, and money caused by influenza. [31]
- Vaccination against influenza can avoid the risk of my family catching influenza because of me. [1]
- Vaccination can reduce my fear of catching influenza and provides significant psychological comfort. [1]

Perceived barriers:
- The flu virus keeps mutating, and I doubt the effectiveness of the influenza vaccine. [1]
- Healthy people can also prevent influenza with proper daily protection, and vaccination is not necessary. [32]
- Getting an influenza vaccine is not conducive to establishing immunity and resistance to influenza. [26]
- Influenza is not a severe and life-threatening illness, and patients usually recover within one to two weeks. [26]
- I don't know whether I need to be vaccinated against influenza. [31]
- I don't know how to apply for influenza vaccination. [31]

Cues to action:
- I have received messages from social media urging everyone to get an influenza vaccine. [1]
- Bad news about the flu epidemic on social media has also influenced my influenza vaccination decision. [15]

Vaccination intentions:
- If conditions permit, I am willing to be vaccinated against influenza. [31]
- If conditions permit, I will make a plan for influenza vaccination in the future. [31]
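For readers who wish to reproduce this kind of analysis outside SPSS/AMOS, the following is a minimal sketch of the hypothesized model in Python using the open-source semopy package. This is our illustration, not the authors' workflow; the item column names (dme1, sus1, vi2, etc.) and the CSV file are hypothetical placeholders for the questionnaire items.

```python
# Minimal SEM sketch with semopy (lavaan-style syntax); hypothetical data.
import pandas as pd
from semopy import Model

desc = """
DME =~ dme1 + dme2 + dme3 + dme4
SUS =~ sus1 + sus2
SEV =~ sev1 + sev2 + sev3
BEN =~ ben1 + ben2 + ben3 + ben4 + ben5
BAR =~ bar1 + bar2 + bar3 + bar4 + bar5 + bar6
CUE =~ cue1 + cue2
VI  =~ vi1 + vi2
SUS ~ DME
SEV ~ DME
BEN ~ DME
BAR ~ DME
CUE ~ DME
VI ~ SUS + SEV + BEN + BAR + CUE
"""

# The ten structural paths above correspond to hypotheses H1-H5
# (digital media exposure -> each mediator -> vaccination intentions).
data = pd.read_csv("questionnaire.csv")  # hypothetical file of item responses
model = Model(desc)
model.fit(data)
print(model.inspect())  # loadings and path coefficients with test statistics
```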
Results
After the model was constructed, the valid samples were imported into AMOS 24.0 for model verification. Figure 1 shows the calculation results of the model. According to the model diagram and the software output, the results were analyzed from three aspects: (1) factor loading analysis: the mean value and standard deviation of each observed variable and the loading coefficient of each observed variable on its latent variable; (2) the reliability and validity of the measurement model; and (3) hypothesis testing based on the path coefficients and the overall model fit.
Factor Loading Analysis & The Measurement Model
Confirmatory factor analysis (CFA) assessed the reliability and validity of the constructs. As shown in Table 2, Cronbach's alpha ranged from 0.71 to 0.89, greater than the threshold value of 0.70; thus, all constructs have acceptable reliability. Furthermore, convergent validity was tested by examining the factor loadings, composite reliability, and average variance extracted (AVE). Table 2 shows that all factor loadings reached the benchmark: most exceeded 0.7, and the small number above 0.5 is still acceptable [33]. Moreover, every AVE is greater than the benchmark value of 0.5, and every composite reliability is greater than the 0.7 benchmark [33]. The results indicate that all constructs have good convergent validity [34]. In addition, the model fit indicators (χ² = 1010.14, df = 382, χ²/df = 2.64, RMR = 0.04, GFI = 0.93, AGFI = 0.91; NFI = 0.903, IFI = 0.931, CFI = 0.940; RMSEA = 0.06) reflect a good fit between the measurement model and the dataset. Therefore, the reliability and validity of this study were supported. Figure 1 presents the results of the hypothesis testing. The estimated parameters include path coefficients (β) and critical ratios (t values). Figure 1 shows that digital media exposure positively affects perceived susceptibility, which in turn positively affects vaccination intentions (H1: β = 0.78, t = 5.21; β = 0.13, t = 4.02), thus supporting H1. Digital media exposure positively affects perceived severity with a high path coefficient, and perceived severity also positively affects vaccination intentions (H2: β = 0.88, t = 7.87; β = 0.20, t = 5.11); thus, H2 is supported. Digital media exposure positively affects perceived benefits, and perceived benefits positively affect vaccination intentions (H3: β = 0.59, t = 3.32; β = 0.41, t = 10.52), thus supporting H3. In addition, digital media exposure negatively affects perceived barriers, and perceived barriers negatively affect vaccination intentions (H4: β = −0.26, t = −4.03; β = −0.27, t = −5.37), thus supporting H4. Digital media exposure positively affects cues to action, and cues to action positively affect vaccination intentions (H5: β = 0.74, t = 5.17; β = 0.60, t = 4.78), so H5 is also supported. All t values exceed 3.29 in absolute value, which means the coefficients of all ten paths are significant at the 0.001 level. Hence, all five hypotheses in this study are supported.
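The reliability and validity statistics reported above follow standard formulas. Here is a brief sketch (our illustration, with placeholder data) of how Cronbach's alpha, composite reliability, and AVE are computed for one construct.

```python
# Reliability/validity metrics for one construct; placeholder data only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def ave(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return (loadings**2).mean()

rng = np.random.default_rng(0)
items = rng.normal(size=(600, 4))              # placeholder responses, n = 600
loadings = np.array([0.78, 0.81, 0.74, 0.70])  # placeholder standardized loadings
print(cronbach_alpha(items), composite_reliability(loadings), ave(loadings))
```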
Discussion
This study examined the association between information exposure on digital media and influenza vaccination intentions. Based on the HBM and the three-stage model, the theoretical model was constructed by introducing perceived susceptibility, perceived severity, perceived benefits, perceived barriers, and cues to action as mediating variables representing health beliefs [24]. This paper systematically evaluates how digital media exposure affects influenza vaccination intentions through these mediating variables. The results show that all five hypotheses proposed in the theoretical model are supported. The theoretical model thus explains the influence of Chinese people's digital media exposure on their intention to vaccinate against influenza.
Among the mediators, digital media exposure has the strongest influence on perceived susceptibility and perceived severity. However, the path coefficients from these two mediating variables to vaccination intentions are not high. The reason may be that although some people become aware of the seriousness of influenza and the possible threat of not being vaccinated through information exposure, they lack specific cues to action [25].
Consistent with other research results, digital media exposure has a more significant impact on perceived benefits, which in turn has a stronger effect on vaccination intentions [1]. Users who perceive the benefits of influenza vaccination through digital media exposure are more inclined to be vaccinated. Digital media exposure can also reduce the public's perceived barriers, and thereby improve vaccination intentions, by popularizing the necessity of vaccination.
This suggests that when using digital media platforms to promote vaccination, it is necessary, on the one hand, to strengthen the public's perception of benefits, in particular their recognition of the effectiveness of influenza vaccines. On the other hand, it is necessary to release authoritative information about vaccine effectiveness, who needs to be vaccinated, and the necessity of vaccination, in order to reduce the public's perceived barriers and promote vaccination [32].
The research also shows that the path coefficients along 'information → cues to action → vaccination intentions' are very high, indicating the importance of cues to action as a mediator in promoting vaccination. Digital media information can serve as an essential cue to action, thus influencing the public's intention to vaccinate. Previous studies have also confirmed that cues to action can directly affect users' influenza vaccination intentions [26]. Therefore, digital media platforms should be fully utilized in vaccine promotion to deliver cues to vaccination action, and content advocating influenza vaccination should be disseminated in forms suited to digital media platforms.
Vaccination is one of the most critical components of public health programs. It plays an essential role in curbing the prevalence of infectious diseases [35], and one of the main challenges facing public health systems is ensuring adequate vaccination coverage [36]. Therefore, this study also offers lessons for public health agencies at the international and national levels seeking to improve vaccination rates and promote public health. Given the critical role of digital media in vaccination, its active role cannot be ignored at the level of communication strategy when promoting vaccination campaigns, including influenza vaccination [7,10]. In particular, with the influence of traditional interpersonal communication weakened by the COVID-19 pandemic, it is all the more important to attend to the vital role of digital media in public health crises [19]. However, it should be noted that differences in the credibility of different media can affect the public's perceived risk [37]. Therefore, relevant public health agencies should attend to their own credibility when using digital media to promote vaccination campaigns.
Previous studies have explored the influence of traditional media, such as newspapers and television, on vaccination intentions [15,16]. With the penetration of digital media into our lives, we have extended the research scope to digital media, which is more in line with the needs of vaccine promotion in the current information society. Previous studies have also focused on the effect of website information content on vaccination intentions [38]; this study explored the impact on vaccination intentions from the users' perspective across a broader scope of digital media exposure.
However, the study's limitations must be noted. Firstly, the questionnaire sample collected in this study has a generally high level of education; whether the conclusions drawn here also apply to people with a low level of education remains to be discussed. Secondly, the study examined perceived susceptibility, perceived severity, perceived benefits, perceived barriers, and cues to action as mediating factors, but there may be other mediators between digital media exposure and influenza vaccination intentions, such as health self-efficacy [21]. These factors are expected to be examined in subsequent studies.
Conclusions
Given China's large population base, it is unrealistic to implement full-scale free influenza vaccination in China [39]. With the increasing influence of digital media on people's lives, it is necessary to explore how digital media affects the public's willingness to vaccinate. This study found that exposure to information about the influenza vaccine in digital media can influence people's health beliefs and their intention to receive influenza vaccination. It also validates the potential of digital media for vaccine promotion. At the same time, this study can inspire public health institutions to innovate their vaccine communication strategies and contributes experience to the construction of public health safety in various countries.
Data Availability Statement:
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Conformal Field Theory and Doplicher-Roberts Reconstruction
Abstract. After a brief review of recent rigorous results concerning the representation theory of rational chiral conformal field theories (RCQFTs) we focus on pairs (A, F) of conformal field theories, where F has a finite group G of global symmetries and A is the fixpoint theory. The comparison of the representation categories of A and F is strongly intertwined with various issues related to braided tensor categories. We explain that, given the representation category of A, the representation category of F can be computed (up to equivalence) by a purely categorical construction. The latter is of considerable independent interest since it amounts to a Galois theory for braided tensor categories. We emphasize the characterization of modular categories as braided tensor categories with trivial center and we state a double commutant theorem for subcategories of modular categories. The latter implies that a modular category M which has a replete full modular subcategory M₁ factorizes as M ≃ M₁ ⊗_ℂ M₂, where M₂ = M ∩ M₁′ is another modular subcategory. On the other hand, the representation category of A is not determined completely by that of F and we identify the needed additional data in terms of soliton representations. We comment on 'holomorphic orbifold' theories, i.e. the case where F has trivial representation theory, and close with some open problems.
We point out that our approach permits the proof of many conjectures and heuristic results on 'simple current extensions' and 'holomorphic orbifold models' in the physics literature on conformal field theory.

(Financially supported by the European Union through the TMR Networks 'Noncommutative Geometry' and 'Orbits, Crystals and Representation Theory'.)
Introduction
As is well known and will be reviewed briefly in the next section, quantum field theories in Minkowski space of not too low dimension give rise to representation categories which are symmetric C * -tensor categories with duals and simple unit. (The minimum number of space dimensions for this to be true depends on the class of representations under consideration.) As Doplicher and Roberts have shown, such categories are representation categories of compact groups [17] and every QFT is the fixpoint theory under a compact group action [18] of a theory admitting only the vacuum representation [9]. Thus the theory of (localized) representations of QFTs in higher dimensional spacetimes is essentially closed.
Though this is still far from being the case for low dimensional theories there has been considerable recent progress, of which we will review two aspects. The first of these concerns the general representation theory of rational chiral conformal theories, which have been shown [33] to give rise to unitary modular categories in perfect concordance with the physical expectations. See also [46] for a more selfcontained and (somewhat) more accessible review. In this contribution we restrict ourselves to stating the main results insofar as they serve to motivate the subsequent considerations which form the core of this paper.
We will then study pairs (F, A) of quantum field theories in low dimensions, mostly rational conformal, where A is the fixpoint theory of F w.r.t. the action of a finite group G of global symmetries. This scenario may seem quite special, as in fact it is, but it is justified by several arguments. First of all, as already alluded to, the fixpoint situation is the generic one in high dimensions. Whereas this is definitely not true in the case at hand, every attempt at classifying rational conformal field theories (or at least modular categories) will most likely make use of constructions which produce new conformal field theories from given ones. (Besides those we focus on there are, of course, other such procedures like the 'coset construction'.) The converse of the passage to G-fixpoints is provided by the construction [18] of Doplicher and Roberts, which in the case of abelian groups has appeared in the CQFT literature as the 'simple current extension'. Such extensions are of considerable relevance in the classification of 'modular invariants', i.e. the construction of two-dimensional CQFTs out of chiral ones. It is therefore very satisfactory that we are able to provide rigorous proofs for many results in this area.
Finally the analysis of quantum field theories related by finite groups leads to many mathematical results which can be phrased in a purely categorical manner. As such they have applications to other areas of mathematics like subfactor theory or low-dimensional topology.
2 'Many' Spacetime Dimensions: Symmetric Categories

2.1 Global Symmetry Groups in ≥ 2 + 1 Spacetime Dimensions. In this section we consider quantum field theories in Minkowski space with d = s + 1 dimensions where the number s of space dimensions is at least two. (See [27,32] for more details.) We denote by K the set of double cones. Let O → F(O), O ∈ K be a net (inclusion preserving assignment) of von Neumann algebras on a Hilbert space H satisfying irreducibility, locality (F(O₁) ⊂ F(O₂)′ whenever O₁ and O₂ are spacelike separated) and covariance w.r.t. a positive energy representation of the Poincaré group with invariant vacuum vector Ω. We sharpen the locality requirement by imposing Haag duality:

F(O′)′ = F(O)  ∀O ∈ K.

We assume that there is a compact group G with a strongly continuous faithful unitary representation U on H commuting with the representation of the Poincaré group, leaving Ω invariant and implementing global symmetries of F:

α_g(F(O)) = F(O)  ∀g ∈ G, O ∈ K,  where α_g = Ad U(g).

Consider the subnet A(O) = F(O)^G together with its vacuum representation π₀ on the subspace H₀ of G-invariant vectors. π₀ can be shown to satisfy Haag duality [15]. The Hilbert space H decomposes as

H ≅ ⊕_{ξ∈Ĝ} H_ξ ⊗ K_ξ,  dim K_ξ = d(ξ),

where Ĝ is the set of isomorphism classes of irreducible representations of G, and the group G and the C*-algebra A = (∪_{O∈K} A(O))⁻ (the 'quasi-local algebra') act on H as follows:

U(g) ≅ ⊕_{ξ∈Ĝ} 1_{H_ξ} ⊗ U_ξ(g),  a ≅ ⊕_{ξ∈Ĝ} π_ξ(a) ⊗ 1_{K_ξ}.

The representations π_ξ of A on H_ξ are irreducible and satisfy [15]

π_ξ ↾ A(O′) ≅ π₀ ↾ A(O′)  ∀O ∈ K,   (2.1)

where π₀ is the representation of A on H₀. These observations motivate the analysis of the positive energy representations satisfying the 'DHR criterion' (2.1) for any irreducible local net A of algebras satisfying Haag duality and Poincaré covariance. For the purposes of the development of the theory another category is much more convenient.

Definition 2.2 Let A be as above. Then DHR(A) denotes the category of localized transportable morphisms, i.e. bounded unital *-algebra endomorphisms ρ of the quasi-local algebra A such that ρ ↾ A(O′) = id for some O ∈ K and such that for every Õ ∈ K there is ρ_Õ localized in Õ such that ρ and ρ_Õ are inner equivalent. The morphisms are the intertwiners in A.
Applying the general formalism to fixpoint nets as above one obtains: Proposition 2.4 [15] Let A be a fixpoint net as above. Then DHR f (A) contains a full monoidal subcategory S which is equivalent (as a symmetric monoidal category) to the category G − mod of finite dimensional continuous unitary representations of G. For a simple object ρ ξ ∈ S the dimension d(ρ ξ ) coincides with the dimension d(ξ) of the associated representation U ξ of G. Now the question arises under which circumstances one obtains all DHR representations of A in this way.
Proposition 2.5 [51] Assume F has trivial representation category DHR(F ) (in the sense of quasi-trivial one-cohomology). Then A has no irreducible DHR representations of infinite statistics and DHR(A) ≃ G − mod.
It is thus natural to conjecture that every net A satisfying the above axioms is the fixpoint net under the action of a compact group G of a net F with trivial representation structure.
The Reconstruction Theory of Doplicher and Roberts.
Theorem 2.6 [17] Let S be a symmetric C * -tensor category with conjugates and simple unit such that every simple object has twist +1. Then there is a compact group G, unique up to isomorphism, such that one has an equivalence S ≃ G − mod of symmetric tensor * -categories with conjugates.
Remark 2.7 1. If there are objects with twist −1 then there is a compact group G together with a central element k of order two such that S ≃ G − mod as a tensor category and the twist of a simple object equals the value of k in the corresponding irreducible representation of G.
2. Most categories in this paper will be closed w.r.t. direct sums and subobjects (i.e. all idempotents split). Yet, in order not to have to require this everywhere, all equivalences of ((braided/symmetric) monoidal) categories in this paper will be understood as equivalences of the respective categories after completion w.r.t. direct sums and subobjects. See, e.g., [25] for these constructions and note that equivalence of the completed categories is equivalent to Morita equivalence [25]. We believe that this is the appropriate notion of equivalence for semisimple k-linear categories. By the coherence theorem for braided tensor categories [30] we may and do assume that all tensor categories are strict. (In fact, most of the categories under consideration here are so by construction.)

Theorem 2.8 [18] Let A be as above. Then there exist a Hilbert space H ⊃ H₀, a net F of field algebras on H extending A, and a compact group G acting on F, such that

• the group G corresponding to the symmetric tensor category DHR_f(A) is unitarily and faithfully represented on H, implementing global symmetries of F,
• the reducible representation of A on H contains every irreducible DHR sector π_ξ of A (of finite dimension d(π_ξ)) with multiplicity d(π_ξ),
• the charged (non-G-invariant) fields intertwine the vacuum and the DHR sectors.
The net F, which we denote F = A ⋊ DHR_f(A), is unique up to unitary equivalence. (One may also consider the crossed product A ⋊ S with a full monoidal subcategory S of DHR_f(A).) It is natural to ask whether there is a converse to Prop. 2.5 to the effect that F = A ⋊ DHR_f(A) has trivial representation theory. A first result was proved independently in [8] and [39]:

Theorem 2.9 [8,39] Assume that A has finitely many unitary equivalence classes of irreducible DHR representations of finite statistics, all with twist +1. Then the local net F = A ⋊ DHR_f(A) has no non-trivial DHR representations of finite statistics.
This result has the obvious weakness of being restricted to theories with finite representation theory. On the positive side, we do not need to make assumptions on potential representations of A with infinite statistics. For most purposes of the present paper this result is sufficient, but we cite the following recent result.
Theorem 2.10 [9] Assume A lives on a separable Hilbert space and all DHR representations are direct sums of irreducible DHR representations with finite statistics and twist +1. Then F = A ⋊ DHR(A) has no non-trivial sectors of finite or infinite statistics.
In 1 + 1-dimensional Minkowski space or on R (i.e. no time: '1 + 0 dimensions') the DHR analysis must be modified [20] since there one can only prove that DHR(A) is braided. We will therefore give a brief discussion of some pertinent results on braided tensor categories. (See [30] or [31] for the basic definitions.)
Categorical Interlude 1: Braided Tensor Categories and Their Center. Throughout we denote morphisms in a category by small Latin letters and objects by capital Latin or, in the quantum field context, by small Greek letters. We often write XY instead of X ⊗ Y.

Definition 3.1 A TC* is a C*-tensor category [35] with simple unit and conjugates (and therefore finite dimensional hom-spaces). A BTC* is a TC* with unitary braiding. An STC* is a symmetric BTC*.
A TC* (more generally, a semisimple spherical category) will be called finite dimensional if the set Γ of isomorphism classes of simple objects is finite. Then its dimension is defined by

dim C = Σ_{i∈Γ} d(X_i)²,

where the X_i, i ∈ Γ are representers for these classes. If C is braided then there is another numerical invariant, which we call the Gauss sum, defined by

Δ(C) = Σ_{i∈Γ} ω(X_i) d(X_i)².

The dimensions in a TC* (not necessarily braided!) are quantized [35] in the same way as the square roots of indices in subfactor theory:

d(X) ∈ {2 cos(π/n), n = 3, 4, 5, ...} ∪ [2, ∞).

The twist ω(ρ) of a simple object may a priori take any value in the circle group T.
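As a concrete numerical illustration (ours, not from the text), the following snippet evaluates these two invariants for the well-known Ising modular category; note that |Δ(C)| = √(dim C) there, as is to be expected for a modular category.

```python
# Dimension and Gauss sum of the Ising modular category: three simple
# objects with dimensions (1, sqrt(2), 1) and twists (1, exp(i*pi/8), -1).
import numpy as np

d = np.array([1.0, np.sqrt(2.0), 1.0])                 # quantum dimensions
omega = np.array([1.0, np.exp(1j * np.pi / 8), -1.0])  # twists

dim_C = np.sum(d**2)            # dim C = sum of d(X_i)^2 = 4
gauss = np.sum(omega * d**2)    # Gauss sum Delta(C) = 2 exp(i*pi/8)
print(dim_C, gauss, abs(gauss)) # |Delta(C)| = sqrt(dim C) = 2 here
```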
In a finite dimensional TC*, every d(ρ) is a totally real algebraic integer and ω(ρ) is a root of unity. In the braided case there is no known replacement for Thm. 2.6. The deviation of a braided category C from being symmetric is measured by the monodromies

c_M(X, Y) = c(Y, X) ∘ c(X, Y) ∈ End(X ⊗ Y).

If C has conjugates (in the sense of Thm. 2.3) and the unit 1 is simple then

S′(X, Y) = (Tr_X ⊗ Tr_Y)(c(Y, X) ∘ c(X, Y))

defines a number which depends only on the isomorphism classes of X, Y. These numbers, for irreducible X, Y, were called statistics characters in [49]. (They also give the invariant for the Hopf link with the two components colored by X, Y.) Picking arbitrary representers X_i, i ∈ Γ we define the matrix S′_{i,j} = S′(X_i, X_j), i, j ∈ Γ. The matrix of statistics characters is of particular interest if the category is finite dimensional.
Then, as proved independently by Rehren [49] and Turaev [54], if S′ is invertible then the matrices

S = (dim C)^{−1/2} S′,   T = (Δ(C)/|Δ(C)|)^{−1/3} Diag(ω(X_i))

are unitary and satisfy the relations

S² = C,  (ST)³ = S²,  TC = CT,

where C_{ij} = δ_{i,j̄} is the charge conjugation matrix (which satisfies C² = 1). (Whereas the dimension of a TC* is always non-zero, this is not true in general. Yet, when S′ is invertible then dim C ≠ 0, cf. [54].) Since these relations give a presentation of the modular group SL(2, Z) we obtain a finite dimensional unitary representation of the latter, which motivated the terminology 'modular category' [54]. Furthermore, the 'fusion coefficients' N^k_{ij} = dim Hom(X_i X_j, X_k) are given by the Verlinde relation [56]

N^k_{ij} = Σ_{m∈Γ} S_{im} S_{jm} S*_{km} / S_{0m}.

The assumption that S′ is invertible is not very conceptual and therefore unsatisfactory. A better understanding of its significance is obtained from the following considerations.
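The following sketch (our illustration) verifies these relations and the Verlinde formula numerically for the standard modular data of the quantum double of Z/2, the 'toric code', whose four simple objects 1, e, m, f all have dimension one.

```python
# Sanity check of the modular relations and the Verlinde formula for the
# modular data of D(Z/2): twists (1, 1, 1, -1), Gauss sum phase equal to 1.
import numpy as np

S = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=complex)
T = np.diag([1, 1, 1, -1]).astype(complex)
C = S @ S                                     # charge conjugation matrix

assert np.allclose(S @ S.conj().T, np.eye(4))            # S is unitary
assert np.allclose(np.linalg.matrix_power(S @ T, 3), C)  # (ST)^3 = S^2
assert np.allclose(C @ C, np.eye(4))                     # C^2 = 1

# Verlinde relation: fusion multiplicities N_ij^k from S
N = np.einsum('im,jm,km,m->ijk', S, S, S.conj(), 1.0 / S[0])
print(np.round(N.real))   # nonnegative integers: the Z/2 x Z/2 fusion rules
```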
Definition 3.2 Let C be a braided monoidal category and K a full subcategory. Then the relative commutant C ∩ K′ of K in C is the full subcategory defined by

Obj (C ∩ K′) = {X ∈ C | c(Y, X) ∘ c(X, Y) = id_{X⊗Y} ∀Y ∈ K}.

(C ∩ K′ is automatically monoidal and replete.) The center of a braided monoidal category C is Z(C) = C ∩ C′.

Remark 3.3 1. If there is no danger of confusion about the ambient category C we will occasionally write K′ instead of C ∩ K′.
2. Z(C) is a symmetric tensor category for every C. C is symmetric iff Z(C) = C.
3. The objects of the center have previously been called degenerate (Rehren), transparent (Bruguières) and pseudotrivial (Sawin). Yet, calling them central seems the best motivated terminology since the above definition is the correct analogue for braided tensor categories of the center of a monoid, as can be seen appealing to the theory of n-categories.
4. We say a semisimple category (thus in particular a BTC*) has trivial center, denoted symbolically Z(C) = 1, if every object of Z(C) is a direct sum of copies of the monoidal unit 1 or if, equivalently, every simple object in Z(C) is isomorphic to 1.
5. Note that the center of a braided tensor category as given in Defin. 3.2 must not be confused with another notion of center [29,36] which is defined for all tensor categories (not necessarily braided) and which in a sense generalizes the quantum double of Hopf algebras. See also Subsect. 5.2.
Proposition 3.4 [49] Let C be a BTC* with finitely many classes of simple objects. Then the following are equivalent:

(i) The matrix S′ of statistics characters is invertible (i.e. C is modular).
(ii) The center Z(C) is trivial.

Remark 3.5 The direction (i) ⇒ (ii) is obvious, and (ii) ⇒ (i) has been generalized by Bruguières [6] to a class of categories without *-operation, in fact over arbitrary fields. He proves that a 'pre-modular' category [4] is modular iff its dimension is non-zero (which is automatic for *-categories) and its center is trivial. This provides a very satisfactory characterization of modular categories and we see that modular categories are related to symmetric categories like factors to commutative von Neumann algebras. Recalling that finite dimensional symmetric BTC*s are representation categories of finite groups by the DR duality theorem, one might say that modular categories (Z(C) = 1) differ from finite groups (Z(C) = C) by the change of a single symbol in the respective definitions!

3.2 General Low Dimensional Superselection Theory. As already mentioned, in low dimensions the category DHR(A) is only braided. As a consequence the proofs [49,26] of the existence of conjugate (dual) representations have to proceed in a fashion completely different from [16,II]. More importantly, Thm. 2.6 and, a fortiori, Thm. 2.8 are no longer applicable. (There is a weak substitute for the DR field net, cf. [21,26] for the reduced field bundle, which however is not very useful in practice.) The facts expounded in the Categorical Interlude imply that every low dimensional QFT whose DHR category has finitely many simple objects and trivial center gives rise to a unitary representation of SL(2, Z). This is consistent with the physics literature on rational conformal models but at first sight rather surprising in non-conformal models. (Note, however, that Haag dual theories which are massive in a certain strong sense have trivial DHR representation theory [38], implying that for them the question concerning the rôle of SL(2, Z) does not arise.) What remains is the issue of triviality of the center of DHR_f(A), which does not obviously follow from the axioms. A first result in this direction was the following, which proves a conjecture in [49].
Theorem 3.6 [39] Let A be a Haag dual theory on 1+1 dimensional Minkowski space or on R. Assume that DHR f (A) is finite and that all objects in Z(DHR f (A)) are even, i.e. bosonic. Then F = A ⋊ Z(DHR f (A)) is local and Haag dual and DHR f (F ) has trivial center, thus is modular.
In other terms, every rational QFT whose representation category has nontrivial center is the fixpoint theory of a theory with modular representation category under the action of a finite group of global symmetries. In the next subsection we will cite results according to which a large class of models automatically has a modular representation category. For these models the above theorem is empty, but the analysis of [39] is still relevant for the study of F = A⋊ S where S ⊂ DHR f (A) is any full symmetric subcategory, not necessarily contained in Z(DHR f (A)).
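In concrete models the triviality condition of Prop. 3.4 can be tested mechanically once the statistics characters are known: by Rehren's characterization [49], a simple object is central iff its row of S′ equals the corresponding products of dimensions. The following sketch (our illustration, reusing the toric-code data from the snippet above) shows the bookkeeping.

```python
# Testing Prop. 3.4: a simple object X_i is central iff
# S'(X_i, X_j) = d(X_i) d(X_j) for all j.  Data: toric code.
import numpy as np

d = np.ones(4)                              # dimensions of 1, e, m, f
S_prime = np.array([[1,  1,  1,  1],        # statistics characters,
                    [1,  1, -1, -1],        # S' = sqrt(dim C) * S
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=float)

central = [i for i in range(4) if np.allclose(S_prime[i], d[i] * d)]
print(central)  # -> [0]: only the unit is central, so Z(C) is trivial;
                # equivalently S' is invertible and the category is modular
```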
Completely Rational Chiral Conformal Field Theories.
In this section we consider chiral conformal field theories, i.e. quantum field theories on the circle. We refer to [46] for a more complete and fairly self-contained account. Let I be the set of intervals on S¹, i.e. connected open non-dense subsets of S¹. For every I ∈ I we are given a von Neumann algebra A(I) on a fixed Hilbert space H₀, and the assignment I → A(I) is subject to isotony, locality, Möbius covariance with positive energy and a unique invariant vacuum vector Ω. For consequences of these axioms see, e.g., [24]. We limit ourselves to pointing out some facts:

• Reeh-Schlieder property: A(I)Ω = A(I)′Ω = H₀ ∀I ∈ I.
• Type: The von Neumann algebra A(I) is a factor of type III₁ for every I ∈ I.
• Haag duality: A(I)′ = A(I′) ∀I ∈ I.
• The modular groups and conjugations associated with (A(I), Ω) have a geometric meaning, cf. [7,24].

Now one studies coherent representations π = {π_I, I ∈ I} of A on Hilbert spaces H, where π_I is a representation of A(I) on H such that

π_J ↾ A(I) = π_I  whenever I ⊂ J.

One can construct [21] a unital C*-algebra C*(A), the global algebra of A, such that the coherent representations of A are in one-to-one correspondence with the representations of C*(A). We therefore simply speak of representations. A representation is covariant if there is a positive energy representation U_π of the universal covering group of the Möbius group PSU(1,1) on H such that

U_π(g) π_I(x) U_π(g)* = π_{gI}(α_g(x))  ∀x ∈ A(I), I ∈ I.

A representation is locally normal iff each π_I is strongly continuous.
In order to obtain further results we introduce additional axioms. We say that A satisfies strong additivity if A(I) ∨ A(J) = A(K) whenever the intervals I, J are obtained by removing an interior point from the interval K. We say that A satisfies the split property if for all I, J ∈ I with disjoint closures there is an isomorphism η : A(I) ∨ A(J) → A(I) ⊗ A(J) of von Neumann algebras satisfying η(xy) = x ⊗ y ∀x ∈ A(I), y ∈ A(J).

Remark 3.9 By Möbius covariance strong additivity holds in general if it holds for one pair I, J of adjacent intervals. Strong additivity has been verified in all known rational models. Furthermore, every CQFT can be extended canonically to one satisfying strong additivity. If the split property holds then H₀ is separable, and thanks to the Reeh-Schlieder theorem A(I) ∨ A(J) and A(I) ⊗ A(J) are actually unitarily equivalent. The split property follows if Tr e^{−βL₀} < ∞ for all β > 0, which is satisfied in all reasonable models.
Lemma 3.10 [33] Let A be a CQFT satisfying strong additivity and the split property. Let I_k ∈ I, k = 1, ..., n be intervals with mutually disjoint closures and denote E = ∪_k I_k. Then A(E) ⊂ A(E′)′ is an irreducible inclusion (of type III₁ factors) and the index [A(E′)′ : A(E)] depends only on the number n but not on the choice of the intervals. Let μ_n be the index for the n-interval inclusion. These numbers are related by

μ_n = μ₂^{n−1}.

(In particular μ₁ = 1, which is just Haag duality.) Thus every CQFT satisfying strong additivity and the split property comes along with a numerical invariant μ₂ ∈ [1, ∞] whose meaning is elucidated by the main result of [33] stated below.
Definition 3.11 A chiral CQFT is completely rational if it satisfies (a) strong additivity, (b) the split property and (c) µ 2 < ∞.
All known classes of rational CQFTs are completely rational in the above sense, see [57,58] for the WZW models connected to loop groups and [59,43] for orbifold models. Very strong results on both the structure and representation theory of completely rational theories can be proved. (All representations are understood to be non-degenerate.)

Theorem 3.12 [33,46] Let A be a completely rational CQFT. Then
• Every representation of C*(A) on a separable Hilbert space is locally normal and completely reducible, i.e. the direct sum of irreducible representations.
• Every irreducible separable representation has finite statistical dimension d_π ≡ [π(A(I′))′ : π(A(I))]^{1/2} (independent of I ∈ I). It therefore [26] has a conjugate representation π̄ and is automatically Möbius covariant with positive energy.
• For a representation π the following are equivalent: (a) π is Möbius covariant with positive energy, (b) π is locally normal, (c) π is a direct sum of separable representations.
• There are only finitely many equivalence classes of irreducible representations, and μ₂(A) = Σ_i d(π_i)² = dim Rep(A).
Furthermore, there is a non-degenerate braiding. Rep(A) thus is a unitary modular category in the sense of Turaev [54].
Remark 3.13 1. In the way of structure theoretical results we mention that for completely rational theories the subfactors A(E) ⊂ A(E ′ ) ′ , E = ∪ n i=1 I i can be analyzed quite explicitly, generalizing some of the results of [58]. Yet [33] by no means supersedes the ingenious computation in [58] in the case of loop group models.
2. In view of the above results we do not need to worry about representations with infinite statistics when dealing with completely rational CQFTs. From now on we will write Rep(A) instead of DHR(A) since the (separable) representation theory can be developed without any selection criterion [46]. Some of our results hold for low-dimensional theories without the assumption of complete rationality. For this we refer to [43].
Pairs of Quantum Field Theories Related by a Symmetry Group.
In the rest of this paper we will be concerned with pairs (F, A) of quantum field theories in one or two dimensions where F has a compact group G of global symmetries (acting non-trivially for g ≠ e) and A = F^G ↾ H₀. We assume that both A and F satisfy Haag duality. Then there is a full symmetric subcategory S ⊂ Rep(A) such that S ≃ G − mod and F ≅ A ⋊ S. This situation is summarized in the quadruple (F, G; A, S). Our aim will be to compute the representation category of F from that of A and vice versa. The nicest case clearly is the one where both A and F are completely rational CQFTs (then G must be finite), but some of our results hold in larger generality.

Theorem 4.1 [33,43] Let F have a finite group G of global symmetries and let A = F^G. Then A is completely rational iff F is, and in that case

μ₂(A) = |G|² · μ₂(F).   (4.1)

Remark 4.2 That fixpoint nets inherit the split property from field nets is classical [14], and that F satisfies strong additivity if A does is almost trivial. The converses of these two implications are non-trivial and require the full force of complete rationality. The implication 'F completely rational ⇒ A satisfies strong additivity' is proved in [59], and 'A completely rational ⇒ F satisfies the split property' will be proved below. The computation of the invariant μ₂(A) is done already in [33].
Remark 4.3 The completely different structure (symmetric instead of modular) of the representation categories in ≥ 2 + 1 dimensions is reflected in the replacement of |G|² in (4.1) by |G|.
In Subsect. 4.3 we will show that the representation category of F depends only on Rep(A) and the symmetric subcategory S. More precisely, let A₁, A₂ be QFTs such that Rep(A₁) ≃ Rep(A₂) and let S_i ⊂ Rep(A_i), i = 1, 2 be replete full symmetric subcategories which correspond to each other under this equivalence. Then Rep(A₁ ⋊ S₁) ≃ Rep(A₂ ⋊ S₂).
The most natural way to prove such a result clearly is to construct a braided tensor category from Rep(A) and S and to prove that it is equivalent to Rep(A ⋊ S) independently of the fine structure of A. The next categorical interlude will provide such a construction.
Categorical Interlude 2: Galois Extensions of Braided Tensor Categories. The following result realizes a conjecture in [39].
Theorem 4.4 [4,41] Let C be a BTC * . Let S ⊂ C be a replete full monoidal subcategory which is symmetric (with the braiding of C). Then there exists a TC * C ⋊ S together with a tensor functor F : C → C ⋊ S such that • F is faithful and injective on the objects, thus an embedding.
• F is dominant, i.e. for every simple object X ∈ C ⋊ S there is Y ∈ C such that X is a subobject of F(Y).
• F trivializes S, i.e. X ∈ S ⇒ F(X) ≅ 1 ⊕ ... ⊕ 1, where 1 appears with multiplicity d(X) (which is in N by [17]).
• The pair (C ⋊ S, F) is the universal solution for the above problem, i.e. if F′ : C → E has the same properties then F′ factorizes through F.

Remark 4.5 This result was arrived at independently by the author [41] and (somewhat earlier) by Bruguières [4]. The above statement incorporates some results of [4]. The construction in [4], relying on Deligne's duality theorem [10] instead of the one of [18], is slightly more general, but one must assume that the objects in S have integer dimension since this is no longer automatic if there is no positivity. On the other hand, in [4] S is assumed finite dimensional (thus G is finite) and to be contained in Z(C), restrictions which are absent in [41]. Applications of the above construction to quantum groups and invariants of 3-manifolds are found in [4] and [52], the latter reference considering also relations with products of braided categories and of TQFTs.

Remark 4.6 By the universal property C ⋊ S is unique up to equivalence. The existence is proved by explicit construction. Essentially, one adds morphisms to C which trivialize the objects in S. (Then one completes such that all idempotents split, but this is of minor importance.) Here essential use is made of the fact that there is a compact (respectively finite) group G such that S ≃ G − mod.
Many facts are known about the category C ⋊ S:

Proposition 4.7 [4,42] If C is finite dimensional then

dim C ⋊ S = dim C / dim S.   (4.2)

Remark 4.8 Heuristically, the passage from C to C ⋊ S amounts to dividing out the subcategory S, an idea which is further supported by (4.2). Yet, this is not done by killing the objects of S in a quotient operation but rather by adding morphisms which trivialize them. Therefore the notation C ⋊ S, which is also in line with [18], seems more appropriate. We consider C ⋊ S as a Galois extension of C, as is amply justified by the following result.

Theorem 4.10 [41] Let C be a BTC* and S a replete full symmetric subcategory contained in Z(C). Then C ⋊ S is again a BTC*. In this case C ⋊ S has trivial center iff S = Z(C). C ⋊ Z(C) is called the modular closure C_m of C since it is modular if C is finite dimensional.
Remark 4.11 This result has obvious applications to the topology of 3-manifolds since it provides a means of constructing a modular category out of every finite dimensional braided tensor category (which need not be symmetric). In fact, ad hoc versions of the above constructions in simple special cases motivated by topology had appeared before.
If S ⊄ Z(C) then C ⋊ S fails to have a braiding in the usual sense. Yet, there is a braiding in the following generalized sense.

Definition 4.12 Let C be a semisimple k-linear category over a field k. If G is a group then C is G-graded if
1. With every simple object X is associated an element gr(X) ∈ G.
2. If X, Y are simple and isomorphic then gr(X) = gr(Y).
3. Let C_g be the full subcategory of C whose objects are finite direct sums of objects with grade g. Then X ∈ C_g, Y ∈ C_h implies X ⊗ Y ∈ C_{gh}.

If C is G-graded and carries a G-action {α_g, g ∈ G} such that

α_g(C_h) ⊂ C_{ghg⁻¹}  ∀g, h ∈ G,

then C is a crossed G-category. A crossed G-category is braided if there are isomorphisms c(X, Y) : X ⊗ Y → α_{gr(X)}(Y) ⊗ X for all Y and all homogeneous X, i.e. X ∈ C_g for some g ∈ G. For the relations which c must satisfy cf. [55].
Remark 4.13 Our definitions are slightly more general than those given by Turaev in [55] in that we allow for direct sums of objects of different grade. This complicates the definition of the braiding, but the gained generality is needed in our applications.
Theorem 4.14 [42] Let C be a BTC* and S a replete full symmetric subcategory. Then C ⋊ S is a crossed G-category. The zero-grade part of C ⋊ S is given by

(C ⋊ S)_e = (C ∩ S′) ⋊ S,

which is always a BTC* and has trivial center iff Z(C) ⊂ S. The set H of g ∈ G for which (C ⋊ S)_g is non-empty is a closed normal subgroup which corresponds to S ∩ Z(C) under the bijection between closed normal subgroups of G and replete full monoidal subcategories of S. Thus the grading is full ((C ⋊ S)_g ≠ ∅ ∀g ∈ G) iff S ∩ Z(C) = 1 and trivial (C ⋊ S = (C ⋊ S)_e) iff S ⊂ Z(C). If C is modular then C ⋊ S is modular in the sense of [55] with full grading for every S. This result will be relevant when we compute Rep(A) in Sect. 5.
Proposition 4.15 [41] Let X ∈ C be simple. Then all simple subobjects X i of F (X) ∈ C ⋊ S occur with the same multiplicity and have the same dimension. If S ⊂ Z(C), thus C ⋊ S is braided, then all X i have the same twist as X, and they are either all central or all non-central according to whether X is central or non-central.
Given irreducible objects X ≇ Y in C we should also understand whether they can have equivalent subobjects in C ⋊ S. We have

Proposition 4.16 [42] Let X, Y be simple objects in C. We write X ∼ Y iff there is Z ∈ S such that Hom(ZX, Y) ≠ {0}. This defines an equivalence relation which is weaker than isomorphism X ≅ Y. If X ∼ Y then F(X) and F(Y) contain the same (isomorphism classes of) simple objects of C ⋊ S (whose multiplicities in F(X) and F(Y) need not be the same); otherwise Hom(F(X), F(Y)) = {0}.
In the case of abelian extensions one can give a more complete analysis. Recall that G is abelian iff every simple object in S ≃ G − mod has dimension one (equivalently, is invertible up to isomorphism). In this case the set of isomorphism classes of simple objects in S is an abelian group K ≅ Ĝ (as opposed to an abelian semigroup in the general case). Since the tensor product of a simple and an invertible object is simple, K acts on the set Γ of isomorphism classes of simple objects of C (by tensoring of representers). For every simple X ∈ C we define

K_X = {k ∈ K | k · [X] = [X]},

the stabilizer of X, which is a finite subgroup of K. (Non-trivial stabilizers can exist since generically there is no cancellation in Γ!)

Proposition 4.17 [41] For every simple X ∈ C there is a subgroup L_X ⊂ K_X such that N_X = [K_X : L_X]^{1/2} ∈ N and

F(X) ≅ N_X · ⊕_{χ∈L̂_X} X_χ,

where the X_χ are mutually inequivalent simple objects in C ⋊ S. K_X and L_X depend only on the image X̄ of X in Γ/K (i.e. the K-orbit in Γ which contains [X]). The isomorphism classes of simple objects in C ⋊ S are labeled by pairs (X̄, χ), where X̄ ∈ Γ/K and χ ∈ L̂_X.
Together with the (appropriately restricted) T -matrix of C the matrices S [Z] satisfy the relations of the mapping class group of the torus with a puncture [1].
The two preceding propositions are abstract versions of heuristically derived results in [23] which provided decisive motivation.
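The orbit/stabilizer bookkeeping entering Prop. 4.17 is elementary; the following sketch (our illustration, with a hypothetical toy action not taken from [41] or [23]) computes K-orbits and stabilizers for an action of K = Z/2 on a four-element label set.

```python
# Bookkeeping behind abelian extensions: the group K of invertible simple
# objects ('simple currents') acts on the label set Gamma; we record
# orbits and stabilizers K_X.  The action below is hypothetical toy data.
Gamma = ["0", "1", "2", "3"]      # labels of simple objects of C
K = ["e", "j"]                    # K = Z/2, generated by a simple current j

def act(k: str, x: str) -> str:
    """Hypothetical action of K on Gamma: j swaps 0 <-> 1 and fixes 2, 3."""
    if k == "e":
        return x
    return {"0": "1", "1": "0", "2": "2", "3": "3"}[x]

orbits = {frozenset(act(k, x) for k in K) for x in Gamma}
stabilizers = {x: [k for k in K if act(k, x) == x] for x in Gamma}

print(sorted(map(sorted, orbits)))   # [['0', '1'], ['2'], ['3']]
print(stabilizers)                   # K_X = {e} on the orbit {0,1}; K_X = K on 2, 3
# Simple objects of C x| S are then labeled by (orbit, character of L_X).
```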
Computation of Rep(F). Given ρ ∈ Rep(A) localized in some double cone O there exists [50,39] an extension ρ̂ to an endomorphism of F = A ⋊ S which commutes with the action of G. It is determined by

ρ̂ ↾ A = ρ,   ρ̂(ψ) = c(γ, ρ) ψ  ∀ψ ∈ H_γ,   (4.3)

where γ ∈ S and the spaces H_γ = {ψ ∈ F | ψa = γ(a)ψ ∀a ∈ A} of charged fields generate F linearly. In ≥ 2 + 1 dimensions this extension is unique and again localized in O since c(γ, ρ) = 1 whenever ρ, γ have spacelike localization regions. For theories in 1 + 1 dimensional Minkowski space or on R, however, there is an a priori different extension obtained by replacing c(γ, ρ) by c(ρ, γ)*, and for spacelike localized ρ, γ a priori only one of these equals 1. Contrary to the fixpoint problem F → A = F^G, non-abelian extensions A → F = A ⋊ S seem not to have been considered in the physics literature. (This is perhaps not surprising since they require the duality theorems either of Doplicher/Roberts or Deligne.) The localized extensions give rise to the 'explicit' formula

Rep(F) ≃ (Rep(A) ∩ S′) ⋊ S.   (4.4)

Assuming that A is completely rational we know by Thm. 4.1 that F is completely rational, thus Rep(F) is modular. In view of the 'explicit' formula (4.4) for Rep(F) and of Thm. 4.10 we can conclude that

Z((Rep(A) ∩ S′) ⋊ S) = 1.   (4.5)

There should clearly be a purely categorical proof of these two observations. In fact, the result holds in considerably larger generality and is the subject of our next categorical interlude.
Categorical Interlude 3: Double Commutants in Modular Categories. For obvious reasons the following result will be called the (double) commutant theorem for modular categories.

Theorem 4.21 [42] Let C be a modular BTC* and let K ⊂ C be a replete full sub TC* closed w.r.t. direct sums and subobjects (as far as they exist in C). Then we have

(a) C ∩ (C ∩ K′)′ = K  (double commutant property),
(b) dim K · dim (C ∩ K′) = dim C.
Remark 4.22 The double commutant property (a) appears first (without published proof) in the notes [48] in connection with Ocneanu's asymptotic subfactor [47]. In the subfactor setting (a) and (b) are proved in [28]. A simple argument proving (a) and (b) in one stroke in the more general setting of C*-categories appears in [42]. Finally, the theorem was then extended [6] to categories C which are semisimple spherical with non-zero dimension. It seems likely that this is the most general setting where it holds.
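Both statements of Thm. 4.21 can be tested numerically on small examples. The sketch below (our illustration) does so for C = D(Z/2) with K the subcategory generated by 1 and e, computing commutants via the trivial-monodromy criterion used above.

```python
# Numerical illustration of Thm. 4.21 for C = D(Z/2): commutants are
# computed from trivial monodromy, S'(X, Y) = d(X) d(Y).
import numpy as np

labels = ["1", "e", "m", "f"]
d = np.ones(4)
S_prime = np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=float)

def commutant(subset):
    """Indices of simples with trivial monodromy against everything in subset."""
    return [i for i in range(4)
            if all(np.isclose(S_prime[i, j], d[i] * d[j]) for j in subset)]

K = [0, 1]                        # the replete full subcategory generated by 1, e
Kp = commutant(K)                 # C cap K' = {1, e} again
Kpp = commutant(Kp)               # double commutant
print([labels[i] for i in Kpp])                             # (a): K'' = K
print(sum(d[i]**2 for i in K) * sum(d[i]**2 for i in Kp))   # (b): 2 * 2 = 4 = dim C
```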
Thm. 4.21 has many applications, the first of which is the desired purely categorical proof of (4.5). Let thus C be modular and S a replete full monoidal subcategory. Then

dim (C ∩ S′) = dim C / dim S,   (4.6)
Z(C ∩ S′) = (C ∩ S′) ∩ (C ∩ S′)′ = (C ∩ S′) ∩ S = Z(S).   (4.7)

If S is symmetric, thus Z(S) = S, (4.5) follows at once, and (4.6) is just a special case of (b). Consider now a modular category C with a replete full modular subcategory K. Modularity being equivalent to triviality of the center by Prop. 3.4, (4.7) implies the following.

Corollary 4.23 Let C be a modular BTC* and let K ⊂ C be a replete full modular sub TC*. Then L = C ∩ K′ is modular, too.
A surprisingly easy argument now proves:

Theorem 4.24 [42] Let K ⊂ C be a replete full inclusion of modular BTC*s and let L = C ∩ K′. Then there is an equivalence of braided monoidal categories

C ≃ K ⊗_ℂ L,

where ⊗_ℂ is the product in the sense of enriched category theory.
Remark 4.25 This result implies that every modular category is a direct product of prime ones, the latter being defined by the absence of proper replete full modular subcategories. Again this holds beyond the setting of *-categories [6]. The question in which sense this factorization might be unique is quite non-trivial.
It is also interesting to note the analogy with the well-known result from the theory of von Neumann algebras where an inclusion A ⊂ B of type I factors gives rise to an isomorphism B ∼ = A ⊗ (B ∩ A ′ ).
The Galois group G (which is determined up to isomorphism by G − mod ≃ S) acts on Rep(F) and the fixpoints are given by

Rep(F)^G ≃ Rep(A) ∩ S′.

Thus the category Rep(F)^G, which consists just of those localized transportable endomorphisms of F which commute with all α_g, is only a full subcategory of Rep(A), viz. precisely Rep(A) ∩ S′. The latter cannot coincide with Rep(A) since this would mean S ⊂ Z(Rep(A)), whereas we know that Rep(A) has trivial center.
Abstractly the situation is the following. We have a non-modular category C₀ = Rep(A) ∩ S′ with center Z(C₀) = S, embedded as a full subcategory into the modular category C = Rep(A) such that

dim C = dim C₀ · dim Z(C₀).   (5.1)

This suggests the conjecture that every non-modular category C₀ embeds as a full subcategory into a modular category C such that (5.1) holds. We will look into this problem in the next Categorical Interlude, without however giving a proof.
Categorical Interlude 4: Constructing Modular Categories.
In Categorical Interlude 2 we have constructed modular categories out of braided categories by adding morphisms, which heuristically amounts to dividing out the center. In Subsection 4.3 we have seen that this categorical construction reflects what happens in the passage from Rep(A) to Rep(F ).
Given a braided tensor category C with non-trivial center one might wish to construct a modular category M into which C is embedded as a full subcategory, i.e. without tampering with C as done in Subsect. 4.2.
Lemma 5.1 Let M be a modular BTC* and C ⊂ M a replete full sub TC*. Then

dim M ≥ dim C · dim Z(C).   (5.2)

Proof. The obvious inclusion Z(C) ⊂ M ∩ C′ in conjunction with Thm. 4.21 implies

dim M = dim C · dim (M ∩ C′) ≥ dim C · dim Z(C)

for any modular extension M, and thus the bound (5.2). □

We call a modular extension M of C minimal if the bound is attained:

dim M = dim C · dim Z(C).   (5.3)

Conjecture 5.2 Every BTC* admits a minimal modular extension. An equivalent conjecture was formulated, in fact claimed to be true without proof, by Ocneanu [48]. We do not have any doubt concerning its correctness but, unfortunately, we are not aware of a proof. The considerations of the preceding subsection show that the conjecture is in fact true for all categories of the form Rep(A) ∩ S′, where A is a CQFT and S is a full symmetric subcategory of Rep(A). Since we do not know that all BTC*s actually appear as representation categories of some CQFT this provides evidence for the conjecture, but no proof.
If C is already modular, i.e. dim Z(C) = 1, then M = C clearly is a minimal modular extension. On the other hand, if Z(C) = C then C ≃ G − mod for a finite group G and a modular extension of dimension |G|² = (dim C)², thus minimal, is given by D^ω(G) − mod, where ω ∈ Z³(G, T) and D^ω(G) is the twisted quantum double [12]. Since D^{ω₁}(G) − mod and D^{ω₂}(G) − mod are in general inequivalent if [ω₁] ≠ [ω₂], this example shows already that the minimal extension need not be unique. But apart from these easy cases it is a priori not obvious that it is at all possible to fully embed braided tensor categories into modular ones, even in a non-minimal way. This is proven by the 'center construction' for tensor categories [29,36], a construction which produces a braided tensor category D(C) out of any (not necessarily braided!) tensor category C. If C happens to be braided then it imbeds into D(C) as a replete full subcategory. The category D(C) generalizes the quantum double D(H) of a finite dimensional Hopf algebra H in the sense that there is an equivalence of braided tensor categories

D(H − mod) ≃ D(H) − mod.

(See [31, Sect. XIII.4] for a nice presentation of all this.) For this reason (and also in order to avoid confusion with the center Z(C) of a braided tensor category as defined in Sect. 3.1) we refer to D(C) as the quantum double of C. In [19] it has been shown that D(C) is a modular category if C ≃ H − mod where H is a finite dimensional semisimple Hopf algebra over a field of characteristic zero. This can be generalized to the much wider setting of tensor categories:

Theorem 5.3 [45] Let C be a semisimple spherical tensor category with non-zero dimension over an algebraically closed field (of arbitrary characteristic). (Finite dimensional BTC*s belong to this class.) Then D(C) is semisimple, spherical and modular with

dim D(C) = (dim C)².   (5.4)

Remark 5.4 Eq. (5.4) clearly is the only identity which is compatible with the special case C ≃ H − mod. Concerning the proofs we must limit ourselves to the remark that they are based on the adaptation [44] of results from subfactor theory to category theory. These works owe much to [28] which provides not only the crucial motivation but also bits of the proof.
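For abelian G the modular data of the untwisted double D(G) can be written down explicitly. The following sketch (our illustration, with the sign conventions fixed for definiteness) checks unitarity and the modular relations for G = Z/3 and illustrates Eq. (5.4).

```python
# Modular data of the untwisted quantum double D(G) for G = Z/3: simple
# objects are pairs (a, b) in G x G-hat, with twist zeta^(a*b).
import numpy as np
from itertools import product

n = 3
zeta = np.exp(2j * np.pi / n)
objs = list(product(range(n), range(n)))   # (group element, character)
N = len(objs)                              # n^2 = 9 simple objects

S = np.array([[zeta**(a*d + b*c) / n for (c, d) in objs] for (a, b) in objs])
T = np.diag([zeta**(a*b) for (a, b) in objs])

assert np.allclose(S @ S.conj().T, np.eye(N))             # S unitary
C = S @ S                                                 # charge conjugation
assert np.allclose(np.linalg.matrix_power(S @ T, 3), C)   # (ST)^3 = S^2
print(N)   # dim D(C) = (dim C)^2 = 9 for C = G-mod, illustrating Eq. (5.4)
```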
By the above, D(C) provides a modular extension of C, which is minimal iff C = Z(C), i.e. if C is symmetric, as one sees comparing (5.4) with (5.3). One might hope that a minimal modular extension can be constructed by a modification of the quantum double.
As another application of the double commutant theorem we exhibit a construction which provides many examples of BTC * s which admit a minimal modular extension.
Proposition 5.5 [42] Let C be a finite dimensional BTC * and let E be the full monoidal subcategory of the quantum double D(C) which is generated by C and D(C) ∩ C ′ . Then Z(E) = Z(C) and dim E = (dim C) 2 / dim Z(C), thus dim D(C) = dim E · dim Z(E) and the quantum double D(C) is a minimal modular extension of E.
Computing Rep(A): Soliton Endomorphisms.
We have seen that it is not possible to compute Rep(A) knowing just Rep(F). Thus we must use properties of F which go beyond the localized representations. The aim of this subsection is to identify the additional information we need. We have already used the fact that every localized endomorphism ρ of A extends to an endomorphism ρ̂ of F which is localized iff ρ ∈ S′. Thus, trivially, every ρ ∈ Rep(A) is obtained as a restriction of ρ̂ to A. This makes clear that we should understand the nature of ρ̂ for ρ ∉ S′.
In the following discussion we consider theories on R or on 1 + 1 dimensional Minkowski space. (In the case of theories living on S¹ one must remove an arbitrary point 'at infinity' in order for ρ̂ to be well defined.) For any double cone O (or interval I) we denote by O_L (resp. O_R) its left (resp. right) spacelike complement. An endomorphism of F which acts on F(O_R) like α_g for some g ∈ G and as the identity on F(O_L) is called a right handed g-soliton endomorphism associated with O. (Left handed soliton endomorphisms are of course defined analogously, but it is sufficient to consider one species.) A G-soliton endomorphism is a g-soliton associated with some O ∈ K for some g ∈ G. We emphasize that for ρ ∈ S′, ρ̂ is a bona fide superselection sector (possibly reducible) of F, but the soliton endomorphisms ρ̂ of the quasilocal algebra F arising if ρ ∉ S′ provably do not admit extensions to locally normal representations on S¹. Heuristically this is clear since they 'act discontinuously at infinity'.
Lemma 5.6 [43] Consider (F, G; A, S) with G compact but not necessarily finite. Let ρ be an irreducible transportable endomorphism of A localized in a double cone O. Let ρ̂ be its (right hand localized) extension to F and let ρ̂₁ be an irreducible submorphism of ρ̂. (If E ∈ F ∩ ρ̂(F)′ ⊂ F(O) is the corresponding minimal projection we pick an isometry in F(O) in order to define ρ̂₁.) Then there is g ∈ G such that ρ̂₁ is a g-soliton endomorphism. In particular, if ρ̂ is irreducible then there is g ∈ Z(G) such that ρ̂ is a g-soliton endomorphism. The latter is, a fortiori, the case if ρ is a localized automorphism of A.
Lemma 5.6 shows that every irreducible localized endomorphism of A is the restriction to A of a direct sum of G-soliton endomorphisms of F . The following is more precise.
Proposition 5.7 [43] Let A, F, ρ, ρ̂ be as in Lemma 5.6. Then there is a conjugacy class c of G such that ρ̂ contains an irreducible g-soliton endomorphism iff g ∈ c. The adjoint action of the group G on the (equivalence classes of) irreducible submorphisms of ρ̂ is transitive. Thus all irreducible soliton endomorphisms contained in ρ̂ have the same dimension and appear with the same multiplicity.

Now we make the connection between Thm. 4.14 and the case of QFT at hand.

Definition 5.8 Let F be a CQFT with compact global symmetry group G. The category G − Sol(F) is the category whose objects are transportable G-soliton endomorphisms with finite index and all finite direct sums of them (not necessarily corresponding to the same g ∈ G). The morphisms are the intertwiners in F.

The situation can be neatly summarized in the following diagram, where the horizontal inclusions of categories are full:

Rep(A) ∩ S′  ⊂  Rep(A)
    ↓ ⋊ S           ↓ ⋊ S
  Rep(F)     ⊂  G − Sol(F)

(A very similar diagram appeared in [37] in a massive context where, however, one has to do with partially broken quantum symmetries.) In view of these results it is clearly desirable to know which soliton endomorphisms a theory F with global symmetry G admits. This is partially answered by the following result. We say that F admits g-soliton endomorphisms if for every O ∈ K there is an irreducible g-soliton endomorphism associated with O.

Theorem 5.10 [43] Let (F, G; A, S) be as above, with G finite. Then there is an equivalence of crossed G-categories

G − Sol(F) ≃ Rep(A) ⋊ S.
Corollary 5.11 Let F be completely rational and let α_g be a global symmetry of finite order, i.e. α_g^N = id for some N ∈ N. Then F admits g-soliton endomorphisms.
Proof. Let G be the finite cyclic group generated by α g and A = F G . By Thm. 4.1 and Thm. 3.12 we have Z(Rep(A)) = 1. Then Thm. 4.14 implies that the grading of G − Sol(F ) ≃ Rep(A) ⋊ S is full.
Remark 5.12 1. Now we can complete the proof of the implication 'A completely rational ⇒ F satisfies the split property' (thus complete rationality). By Thm. 3.12, Rep(A) is modular, thus the grading of Rep(A) ⋊ S is full by Thm. 4.14. Let I, J ∈ I satisfy I ∩ J = ∅ and pick x ∉ I ∪ J. By Thm. 5.10 (whose proof does not assume complete rationality of F!) F ↾ S¹ − {x} admits g-soliton endomorphisms for all g ∈ G. Using the latter one can construct a normal conditional expectation; the rest of the proof works as in [14, Sect. 5].
2. If μ₂(F) = 1 one can prove that F admits soliton automorphisms, see the next subsection. We expect that there are direct proofs of Coro. 5.11 and the above fact which avoid the detour through the fixpoint theory and its modularity and thus might work without the periodicity restriction.
Putting everything together we have the following generalization of Thm. 3.12:

Theorem 5.13 Let F be a completely rational CQFT with finite group G of global symmetries. Then G − Sol(F) is a modular crossed G-category in the sense of [55] with full grading.
We end this section with the computation of Rep(A) in a relatively simple albeit non-trivial and instructive example.
An Example: Holomorphic Orbifold Models

In order to illustrate the computation of the representation category of a fixpoint theory we consider the simplest possible example, namely the case where the net F is completely rational with μ₂(F) = 1, i.e. without non-trivial sectors. Even though the analysis of this particular case reduces essentially to an exercise in low dimensional group cohomology, it is quite instructive and allows us to clarify, prove and extend the results of the heuristic discussion in [11] and to expose the link with [12] and, to a lesser extent, with [13].
By the analysis in Subsect. 5.3 we know that the grading of G − Sol(F) is full, thus dim [G − Sol(F)]_g ≥ 1 for all g ∈ G. Together with dim G − Sol(F) = |G| · dim Rep(F) = |G|, this clearly implies that each of the categories [G − Sol(F)]_g has exactly one isomorphism class of simple objects, all of dimension one and thus invertible. Therefore G − Sol(F) is (equivalent to) the monoidal category C(G, Φ) determined (up to equivalence) by G and Φ ∈ H³(G, T), which is considered in [22, Chap. 7.5] and [55, Ex. 1.3]. (In our field theoretic language this means that F admits G-soliton automorphisms which are unique up to inner unitary equivalence. In an operator algebraic setting it has long been known [53] that 'G-kernels', i.e. homomorphisms G → Out M = Aut M/Inn M, are classified by H³(G, T) if M is a factor. This analysis is immediately applicable to the present approach to CQFTs.) Now, in [12], starting from a finite group G and a 3-cocycle φ ∈ Z³(G, T), a quasi-Hopf algebra D^φ(G) was defined, the 'twisted quantum double'. For the trivial cocycle this is just the ordinary quantum double. For cocycles in the same cohomology class the corresponding twisted quantum doubles are related by a twist of the coproduct, which induces an equivalence of the representation categories as rigid braided tensor categories.
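The counting at the beginning of this paragraph can be made fully explicit. The following is a worked restatement of that step, using only the quantities already introduced:

```latex
\[
\dim G\text{-}\mathrm{Sol}(F) = |G|\cdot\dim\mathrm{Rep}(F) = |G| ,
\qquad
\dim [G\text{-}\mathrm{Sol}(F)]_g \ge 1 \quad \forall g \in G .
\]
% Summing over the homogeneous pieces:
\[
|G| = \sum_{g\in G} \dim [G\text{-}\mathrm{Sol}(F)]_g \ \ge\ |G|
\ \Longrightarrow\
\dim [G\text{-}\mathrm{Sol}(F)]_g = 1 \quad \forall g \in G .
\]
% A graded piece of global dimension 1 contains exactly one simple object,
% necessarily of dimension 1 (the dimension is the sum of the squared
% dimensions of the simples), hence invertible.
```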
To make a long story short we state the following result:

Theorem 5.14 [43] Let F be completely rational with μ₂(F) = 1, let G be a finite group of symmetries and let A = F^G. Then there is Φ ∈ H³(G, T) such that the following equivalences of braided crossed G-categories and braided categories, respectively, hold:

G − Sol(F) ≃ C(G, Φ),    Rep(A) ≃ D^φ(G) − mod,

where [φ] = Φ. In particular, Rep(A) and D^φ(G) − mod give rise to the same representation of SL(2, Z).
The proof proceeds by explicitly constructing (typically reducible if G is nonabelian) endomorphisms of F as direct sums of soliton automorphisms and considering their restriction to A. One finds enough inequivalent irreducible sectors of A to saturate the bound dim Rep(A) ≤ |G|² and concludes that there are no others. Then modularity of Rep(A) follows by an easy argument based on [39, Coro. 4.3], even without invoking the main theorem of [33].
We conclude this discussion by emphasizing that the above analysis, satisfactory as it is, holds only if F is a local, i.e. purely bosonic, theory. If F is fermionic (graded local) then new phenomena may appear, as is illustrated by the following well known example [3]. Let F be a theory of N free real fermions on S¹ and let A = F^G, where G = Z/2 acts by ψ → −ψ. F satisfies twisted duality for all disconnected intervals E, thus μ₂(F) = 1 (in a generalized sense) and μ₂(A) = 4. One finds |H³(G, T)| = 2, corresponding to the fusion rules Z/2 × Z/2 and Z/4 for D^φ(G), which in fact govern the cases N = 4M and N = 4M − 2, respectively. For odd N, however, Rep(A) has only three simple objects (of dimensions 1, 1, √2) with Ising fusion rules [2]. The latter case clearly is not covered by [11] or our elaboration [43] of it. While the appearance of the object with non-integer dimension can be traced back to the fact that F does not admit soliton automorphisms for g ≠ e but rather soliton endomorphisms, a general model independent analysis of the fermionic case is still lacking and seems desirable.
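The numbers quoted in this example fit together through the identification of μ₂(A) with the global dimension of Rep(A), the fact underlying Thm. 3.12; spelling out the arithmetic:

```latex
% Even N: fusion rules Z/2 x Z/2 (N = 4M) or Z/4 (N = 4M - 2),
% i.e. four invertible simple sectors:
\dim\mathrm{Rep}(A) = 1^2 + 1^2 + 1^2 + 1^2 = 4 = \mu_2(A) .
% Odd N: Ising fusion rules, three simple sectors of dimensions 1, 1, \sqrt{2}:
\dim\mathrm{Rep}(A) = 1^2 + 1^2 + (\sqrt{2})^2 = 4 = \mu_2(A) .
```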
Summary and Open Problems
At least on an abstract level the relation between the representation categories of rational CQFTs F and A = F G for finite G has been elucidated in quite a satisfactory way by Thm. 4.19 and Thm. 5.10. We have seen that this leads to fairly interesting structures, results and conjectures of an essentially categorical nature. When considering concrete QFT models the computations can, of course, still be quite tedious as is amply demonstrated by [59] and [43].
We close with a list of important open problems.
1. Extend the results from Props. 4.17, 4.18 to extensions C ⋊ S where G is non-abelian. Thus, (i) given a simple object X ∈ C, understand how F(X) ∈ C ⋊ S decomposes into simple objects; (ii) clarify the structure of the set of isomorphism classes of simple objects in C ⋊ S; (iii) compute the fusion rules of C ⋊ S and the S-matrix of (C ∩ S′) ⋊ S.
2. Prove a form of unique factorization for modular categories into prime ones.
3. Prove Conjecture 5.2 on the existence of minimal modular extensions.
4. Give a more direct proof of Coro. 5.11 on the existence of soliton endomorphisms.
We cannot help remarking that the results of our Categorical Interludes strongly resemble well-known facts in Galois theory and algebraic number theory. (Note, e.g., the striking similarity between our Prop. 4.15 and Coro. 2-3 of [34, §I.7] on the decomposition of prime ideals in Galois extensions of quotient fields of Dedekind rings, thus in particular algebraic number fields.) The same remark applies to questions 1-3 above. | 2014-10-01T00:00:00.000Z | 2000-08-21T00:00:00.000 | {
"year": 2000,
"sha1": "6cf82e8ba3df6d11ee60cbb406e237a5b41d8b6f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/math-ph/0008027v2.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "781fee8ac2ba102b920a083af91ed9f216c58bf4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
255818766 | pes2o/s2orc | v3-fos-license | Cold acclimation can specifically inhibit chlorophyll biosynthesis in young leaves of Pakchoi
Leaf color is an important trait in the breeding of leafy vegetables. Y-05, a pakchoi (Brassica rapa ssp. chinensis) cultivar, displays yellow inner (YIN) and green outer (GOU) leaves after cold acclimation. However, the mechanism of this special phenotype remains elusive. We assumed that the yellow leaf phenotype of Y-05 may be caused by low chlorophyll content. Pigment measurements and transmission electron microscopy (TEM) analysis showed that the yellow phenotype is closely related to decreased chlorophyll content and undeveloped thylakoids in the chloroplast. Transcriptome and metabolome sequencing were next performed on YIN and GOU. The transcriptome data showed that the 4887 differentially expressed genes (DEGs) between the YIN and GOU leaves were mostly enriched in chloroplast- and chlorophyll-related categories, indicating that chlorophyll biosynthesis is mainly affected during cold acclimation. Together with the metabolome data, this indicated that the inhibition of chlorophyll biosynthesis is attributable to blocked 5-aminolevulinic acid (ALA) synthesis in yellow inner leaves, which was further verified by complementation and inhibition experiments with ALA. Furthermore, we found that the blocked ALA synthesis is closely associated with increased BrFLU expression, which is indirectly altered by cold acclimation. In BrFLU-silenced pakchoi Y-05, cold-acclimated leaves still showed a green phenotype and higher chlorophyll content compared with controls, meaning that silencing of BrFLU can rescue the leaf yellowing induced by cold acclimation. Our findings suggest that cold acclimation can indirectly promote the expression of BrFLU in the inner leaves of Y-05 to block ALA synthesis, resulting in decreased chlorophyll content and leaf yellowing. This study revealed the underlying mechanism of the leaf color change in cold-acclimated Y-05.
Numerous Chl-deficient lines, also called leaf color mutants, which exhibit significant changes in chlorophyll synthesis or degradation and diverse phenotypes across species, are regarded as suitable materials for exploring the mechanism of chlorophyll biosynthesis [12][13][14][15]. For instance, in rice, missense mutations in conserved amino acids of ChlD and ChlI occur in the chl1 and chl9 mutants, respectively; both mutants show poorly stacked grana and underdeveloped chloroplasts [16]. Another color mutant, F03-06 in Arabidopsis, is controlled by the single recessive nuclear gene At5g54810; gene-silenced plants exhibit a phenotype similar to that of F03-06, with yellow leaves, mottled veins, stunted stature, and slow growth [17]. A stably inherited Brassica etiolated mutant (pem), carrying a DNA sequence variation in the promoter of Bra024218, showed retarded chloroplast development, decreased chlorophyll content, and reduced photosynthetic capacity [18]. The chlorophyll (Chl)-deficient mutant pylm of pakchoi had yellow leaves with reduced total Chl content, loose grana lamellae, few thylakoid stacks, and lower photosynthetic activity and photochemical conversion efficiency, caused by blocked Chl a production and down-regulation of genes related to Chl biosynthesis [19]. Mutants lacking functional REDUCED CHLOROPLAST COVERAGE (REC) protein, which is homologous to tetratricopeptide repeat (TPR) proteins, showed lower chlorophyll contents and smaller chloroplast compartment size than the wild type in Arabidopsis [20]. Pentatricopeptide repeat (PPR) proteins, whose motif is assumed to have evolved from TPR proteins, are associated with various functions including temperature-sensitive chlorosis [21,22]. Among these leaf color mutants, those that exhibit normal or near-normal leaf color at room temperature but a significant change in leaf color at low temperature are classified as low temperature-sensitive types [23]. Diverse analytical methods based on multi-omics databases have been used to comprehensively explore chlorophyll biosynthesis-related color conversion in low temperature-sensitive types at the molecular or protein level, including tea (Camellia sinensis L.) [24], rice (Oryza sativa L.) [21], and wucai (Brassica campestris L.) [9]. However, the mechanism of leaf color change in response to low temperature is not fully understood.
Chlorophyll is synthesized in the chloroplast and distributed on the thylakoid membranes [25]. The general process of chlorophyll biosynthesis comprises three main steps: (1) formation of 5-aminolevulinic acid (ALA) from glutamate; (2) formation of protoporphyrin IX from ALA; and (3) formation of chlorophyll from protoporphyrin IX [26]. ALA has attracted attention as a key precursor in chlorophyll biosynthesis and plant greening [27,28]. Exogenous application of ALA has been shown to increase the chlorophyll content of plants [29][30][31]. Previous reports indicated that ALA is formed from Glu via the two-step C5 pathway [32]. First, glutamyl-tRNA reductase (GluTR) converts Glu-tRNA to glutamate-1-semialdehyde (GSA). Second, GSA is converted to ALA by GSA-2,1-aminomutase (GSA-AM) [33]. GluTR and GSA-AM are encoded by the nuclear HEMA and GSA genes, respectively. Among the enzymes of ALA synthesis, GluTR catalyzes the main rate-limiting step [34]. FLUORESCENT IN BLUE LIGHT (FLU), a tetratricopeptide repeat (TPR) protein, is the best-known negative regulator of ALA synthesis. Previous studies suggested that the FLU protein interacts with the C-terminal segment of GluTR to inactivate ALA synthesis, resulting in reduced chlorophyll content and etiolated seedlings [35,36]. In FLU-overexpressing (FLUOE) Arabidopsis, the FLUOE lines show pale leaves under medium light and yellow-green leaves under low light owing to decreased chlorophyll contents [37]. In addition, another factor regulating GluTR is the GluTR-binding protein (GBP). GBP has been shown to interact with the N-terminal region of GluTR to protect GluTR from degradation, maintaining adequate ALA synthesis [34].
Pakchoi (Brassica rapa ssp. chinensis), a subspecies of Chinese cabbage, is a widely consumed vegetable in Asia, especially in China. Since the main edible organ of pakchoi is the leaf, leaf color is an important trait in breeding. Y-05 is a special pakchoi cultivar which displays green leaves when grown under normal conditions but yellow inner (YIN) and green outer (GOU) leaves after cold acclimation. Due to this special phenotype, Y-05 is considered an interesting and valuable material for research on cold-acclimated leaf color change. To date, there has been little research on the leaf color change of Y-05 in response to cold acclimation, and its color conversion mechanism remains elusive. Here, we aimed to characterize the changed leaf color of pakchoi Y-05 at the physiological, cellular, and molecular levels. Our findings contribute to the understanding of why cold-acclimated Y-05 displays yellow inner and green outer leaves. They also enrich our understanding of the mechanism of leaf color change.
Results
The inner leaves of cold-acclimated Y-05 exhibit decreased chlorophyll content and undeveloped thylakoids

Compared with the green leaves of pakchoi cultivar G-04, Y-05 displays yellow inner and green outer leaves after cold acclimation (Fig. 1a). Hence, we measured the pigments of G-04 and Y-05 under different growth conditions and at different periods. Interestingly, all pigments showed decreased contents in the yellow inner leaves compared with the leaves of G-04 and the green outer leaves of Y-05 (Fig. 1b, Fig. S1). Generally, a yellow leaf phenotype is caused by increased, rather than decreased, contents of carotenoids, xanthophylls, and anthocyanins [1,2]. We therefore proposed that the yellow leaf phenotype of Y-05 may be caused by low chlorophyll content.
Chlorophyll is the main pigment for photosynthesis in plants and is located in the thylakoid membrane of the chloroplast [38]. To verify our hypothesis, we checked the photosynthetic capacity (P_n) of the outer and inner leaves of cold-acclimated Y-05 and G-04. As expected, both the outer and inner leaves of cold-acclimated Y-05 showed significantly decreased net photosynthetic rates compared with the corresponding leaves of cold-acclimated G-04 (Fig. 1c).
Next, we further studied the chloroplast ultrastructure of inner and outer leaves from cold-acclimated G-04 and Y-05 by transmission electron microscopy (TEM). In cold-acclimated G-04, the outer leaves contained many mature chloroplasts with thick granum thylakoids (Fig. 1d, e), and the inner leaves contained developing chloroplasts with thin granum thylakoids (Fig. 1h, i). In cold-acclimated Y-05, the green outer leaves showed mature chloroplasts but thinner grana stacks compared with the outer leaves of G-04 (Fig. 1f, g). However, the yellow inner leaves of Y-05 displayed undeveloped chloroplasts and almost no granum thylakoids (Fig. 1j, k). Together, we suggest that the yellow inner leaves of Y-05 result from decreased chlorophyll content and undeveloped thylakoids, which may be induced by cold acclimation.

Fig. 1 The inner leaves of cold-acclimated Y-05 exhibit decreased chlorophyll content and undeveloped thylakoids. a The phenotype of G-04 and Y-05 before and after cold acclimation. For cold acclimation, two-month-old plants were grown for 3 weeks at 4°C and then returned to 23°C for continued growth. Before, before cold acclimation; After, after cold acclimation. Bar = 5 cm. b The total chlorophyll content of outer and inner leaves from Y-05 and G-04 before and after cold acclimation. c The net photosynthetic rate (P_n) of outer and inner leaves from Y-05 and G-04 after cold acclimation. PAR, photosynthetically active radiation. Three individual plants of each cultivar were quantified, and the total chlorophyll content and P_n were measured three times. Error bars represent SE (±SE, n = 3). Different letters indicate statistically significant differences at p < 0.05. d-k Chloroplast ultrastructure of outer and inner leaves from G-04 and Y-05 after cold acclimation. v = vacuole, s = starch grains, gt = granum thylakoids, st = stroma thylakoids. In Fig. 1d, f, h, j, Bar = 4 μm; in Fig. 1e, g, i, k, Bar = 1 μm.
Transcriptomes and metabolomes of cold-acclimated Y-05
To explore the molecular mechanisms of the leaf color change in Y-05 induced by cold acclimation, transcriptome sequencing of green outer leaves (TOU) and yellow inner leaves (TIN) was conducted (Tables S1-S4). For both the TOU and TIN samples, three independent biological replicates were set up. The high correlation coefficients indicate strong linear relationships between the biological replicates (Fig. S2a). In total, 4887 differentially expressed genes (DEGs) were identified between YIN and GOU; 2239 genes were down-regulated and 2648 genes were up-regulated in YIN compared with GOU (Fig. S2b, Table S2). Gene Ontology (GO) analysis revealed that the 'integral component of membrane' and chloroplast-related categories were overrepresented in the cellular component domain (Fig. 2, Table S3). Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis revealed significant changes in the 'Photosynthesis - antenna proteins' (ko00196), 'Circadian rhythm - plant' (ko04712), and 'Porphyrin and chlorophyll metabolism' (ko00860) pathways, with rich factors > 2 (Fig. 3, Table S4).
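The enrichment statistics behind Figs. 3 and 4 (rich factor and q-value) can be reproduced in a few lines. The sketch below is illustrative only: the paper used the provider's pipeline, the input counts here are hypothetical, and multiple-testing correction of the P-values to q-values (e.g., Benjamini-Hochberg) would be applied separately across all pathways.

```python
from scipy.stats import hypergeom

def kegg_enrichment(k_deg_in_path, n_path, n_deg, n_background):
    """One-pathway KEGG enrichment test.

    k_deg_in_path -- DEGs annotated to this pathway
    n_path        -- background genes annotated to this pathway
    n_deg         -- all DEGs with a KEGG annotation
    n_background  -- all background genes with a KEGG annotation
    Returns (rich_factor, p_value); 'rich factor > 2' was the cut-off in the text
    (definitions vary between tools; the fold-enrichment form is used here).
    """
    rich_factor = (k_deg_in_path / n_path) / (n_deg / n_background)
    # P(X >= k) for a hypergeometric draw of n_deg genes from the background
    p_value = hypergeom.sf(k_deg_in_path - 1, n_background, n_path, n_deg)
    return rich_factor, p_value

# hypothetical counts for 'Porphyrin and chlorophyll metabolism' (ko00860)
print(kegg_enrichment(k_deg_in_path=25, n_path=60, n_deg=4887, n_background=40000))
```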
To explore the differences in metabolite composition between the inner-yellow (MIN) and outer-green (MOU) leaves of Y-05, non-targeted metabolome analysis was performed. Repeatability and correlation analysis between MOU and MIN confirmed the reliability of the data (Fig. S3a). In total, 372 differentially expressed metabolites (DEMs) were identified: 162 metabolites were up-accumulated and 210 were down-accumulated in MIN (Fig. S3b, Table S5). Next, we analyzed the DEMs between MIN and MOU. KEGG pathway analysis revealed that most metabolites were mainly enriched in the 'metabolic pathways' and 'biosynthesis of secondary metabolites' pathways (Fig. 4). Interestingly, the 'Porphyrin and chlorophyll metabolism' pathway was also found (Fig. 4), consistent with our previous results showing chlorophyll content changes in Y-05. From the metabolome data, we found that two metabolites involved in chlorophyll biosynthesis, 5-aminolevulinate (ALA, meta_22) and L-glutamic acid (meta_277), were down-regulated and up-regulated, respectively, in MIN compared with MOU (Table S6), meaning that the conversion from L-glutamic acid to ALA is blocked in the inner leaves. Together, we proposed that the decreased chlorophyll content in the inner leaves of Y-05 is associated with the inhibition of ALA synthesis.
ALA content is critical for the leaf color conversion of pakchoi Y-05
Based on the metabolome data (Tables S5 and S6), the inhibition of ALA synthesis in the inner leaves of cold-acclimated Y-05 may result in decreased chlorophyll content and leaf yellowing. To verify this hypothesis, 1 mM ALA was sprayed on the leaves of Y-05 immediately after cold acclimation, when both the outer and inner leaves were still green. Cold-acclimated Y-05 plants sprayed with water were used as controls. As expected, leaf yellowing was inhibited in the ALA-treated inner leaves, whereas the control plants displayed yellow inner leaves (Fig. 5a). Consistent with the phenotype, the chlorophyll and ALA contents were significantly increased in ALA-treated Y-05 (Fig. 5b, c). Further, we observed the ultrastructure of chloroplasts in ALA-treated and control leaves. The chloroplasts in the outer leaves of ALA-treated plants possessed thicker granum thylakoids than those of control plants (Fig. 5d-g). Meanwhile, compared with the undeveloped chloroplasts in the inner leaves of controls, the inner leaves of ALA-treated plants possessed mature chloroplasts and granum thylakoids (Fig. 5h-k). Together, these results suggest that exogenous application of ALA can rescue the yellow phenotype of Y-05 induced by cold acclimation.

Fig. 3 The top 20 enriched KEGG pathways of the differentially expressed genes (DEGs) between TOU and TIN of pakchoi Y-05. Each point represents a KEGG pathway; the ordinate represents the pathway name and the abscissa the enrichment factor. A larger enrichment factor indicates more significant enrichment of DEGs. The color of the circle represents the q-value; a lower q-value means more reliable results. The size of the circle represents the number of genes enriched in the pathway.

Next, to further investigate the role of ALA in the leaf color conversion of Y-05, gabaculine (3-amino-2,3-
Next, to further investigate the role of ALA in leaf color conversion of Y-05, gabaculine (3-amino 2,3- Fig. 3 The top 20 enriched KEGG pathway of the differentially expressed genes (DEGs) between TOU and TIN of pak choi Y-05. Each point represents a KEGG pathway, ordinate represents pathway name, and abscissa represents the enrichment factor. The larger the enrichment factors, the more significant the enrichment level of DEGs are showed. The color of the circle represents q-value, lower q-value means the more reliable results. And the size of the circle represents the number of genes enriched in the pathway dihydrobenzoic acid), an inhibitor of ALA biosynthesis by inactivating GSA-AT activity [39,40], was used to irrigate one-month-old Y-05 seedlings. The Y-05 seedlings watered by water were used as control. After treatment, the inner leaves of gabaculine-treated Y-05 seedlings display yellow phenotype without cold acclimation, while the control plants still show green inner leaves (Fig. 6a). Consistent with the phenotypes, the chlorophyll and ALA content in inner leaves decreased in gabaculinetreated plants compared with control plants (Fig. 6b, c). These findings further verified that the inhibition of ALA biosynthesis is the key in leaf yellowing of Y-05. Together with the above results, we suggested that the content of ALA is critical for the leaf color conversion of Y-05.
The upregulation of BrFLU is critical for ALA inhibition in cold-acclimated Y-05

The above data suggested that ALA biosynthesis is blocked in the yellow inner leaves of cold-acclimated Y-05. During chlorophyll synthesis, the conversion of L-glutamic acid to ALA is the rate-limiting step, which is controlled by three positive regulators, GBP, HEMA, and GSA, and one negative regulator, FLU [34,41]. To further study the molecular mechanism of ALA inhibition in the YIN of cold-acclimated Y-05, we first examined the expression of the homologous genes of GBP, HEMA, and GSA between YIN and GOU leaves. The transcriptome data showed that the expression of BrHEMA1 and BrGSA1 was significantly increased in YIN compared with GOU leaves (Fig. S4, Table S7). Since HEMA and GSA are positive regulators of ALA biosynthesis, the ALA inhibition in YIN leaves should not be associated with the increased BrHEMA1 and BrGSA1 expression. BrGBP, whose expression did not change significantly between YIN and GOU, should likewise not account for the blocked ALA synthesis (Fig. S4, Table S7). Previous studies indicated that FLU, a negative regulator of ALA biosynthesis, interacts with the C-terminus of GluTR to inactivate ALA synthesis [37,42]. Interestingly, BraA05003715 (BrFLU), the homolog of Arabidopsis FLU in pakchoi, showed 7.4-fold higher expression in YIN than in GOU (Fig. S4, Table S7), consistent with the decreased ALA content. Hence, BrFLU was selected as the candidate gene for ALA inhibition in yellow inner leaves for further research.
To study the role of BrFLU in the leaf color conversion of Y-05, a BrFLU-silenced line (pTY-FLU) of Y-05 was generated (Fig. 7a, b). Y-05 plants injected with the empty pTY vector were used as controls (pTY). Compared with control plants, the silenced plants showed higher chlorophyll and ALA contents (Fig. 7c, d), suggesting that a low BrFLU level makes a positive contribution to ALA and chlorophyll biosynthesis in Y-05. Further, if the BrFLU upregulation induced by cold acclimation is closely related to ALA inhibition and decreased chlorophyll content in the inner leaves of Y-05, silencing of BrFLU in cold-acclimated Y-05 should increase the chlorophyll content and rescue the yellow leaf phenotype, at least in part. To verify this hypothesis, pTY and pTY-FLU lines were treated at 0°C for 3 weeks. Excitingly, the pTY-FLU plants showed a green phenotype and higher chlorophyll content compared with control plants (Fig. 7e, f), suggesting that the BrFLU upregulation is closely related to the leaf color conversion of Y-05. Together, these data suggest that the upregulation of BrFLU induced by cold acclimation is critical for ALA inhibition, ultimately resulting in decreased chlorophyll content and yellow inner leaves in Y-05.

Fig. 4 The enriched KEGG pathways of the differentially expressed metabolites (DEMs) between inner-yellow (MIN) and outer-green (MOU) leaves of pakchoi Y-05. Short time-series expression miner (STEM) was used to analyze the metabolite expression patterns. The number of DEMs in each profile is labeled above the frame. The bar represents the proportion of metabolites in each profile relative to the total annotated metabolites.
Furthermore, to explore why BrFLU is specifically upregulated in cold-acclimated Y-05, we first compared the BrFLU open reading frame (ORF) sequences between Y-05 and other stay-green pakchoi varieties (G-04, WTC, 2Q, LY, MET, SZQ). Although many single nucleotide polymorphisms were detected in BrFLU between Y-05 and the other varieties (Fig. S5), no significant differences were found in the BrFLU amino acid sequences (Fig. S6). Since the gain or loss of motifs in a promoter can affect gene expression levels, we then analyzed the BrFLU promoters of Y-05 and the other varieties. However, we did not find any motif specifically present in or absent from the BrFLU promoter of Y-05 (Fig. S7). Taken together, we propose that the specific upregulation of BrFLU in Y-05 is induced by an upstream regulator that responds to cold acclimation.
Discussion
Leaf color is an important trait in vegetable breeding and marketing. However, previous studies on the mechanism of leaf color conversion have mainly focused on trees and ornamental plants [43,44]. Although some research on leaf color conversion has been carried out in crops such as rice, maize, and wheat [21,24], there are few studies on vegetables. Here, we studied the mechanism of leaf color conversion in pakchoi. To explore the color conversion mechanism underlying the special phenotype of Y-05 (Fig. 1a), we first measured the pigment contents of yellow and green leaves, including chlorophyll, carotenoids, xanthophylls, and anthocyanins. The results showed that all pigments decreased in the yellow inner leaves of Y-05 (Fig. S1). Previously, many studies have found that leaf color conversion is usually accompanied by senescence [45].
For instance, in Ginkgo biloba, chlorophyll is continuously degraded whereas carotenoids are partially retained during leaf senescence, which is thought to be the reason for the leaf color change [46]. However, the leaf color conversion in pakchoi Y-05 occurs only in young inner leaves (Fig. 1a), meaning that the conversion is not related to leaf senescence but is a response to cold acclimation. Similarly, leaf color mutation is usually thought to be accompanied by impaired growth, causing economic losses [47]. In our study, the chlorophyll content and P_n value in Y-05 leaves were lower than those in G-04 leaves (Fig. 1b, c), indicating that chloroplast development in the yellow inner leaves of Y-05 is suppressed by cold acclimation. However, Y-05 showed a well-developed phenotype (Fig. 1a). Therefore, we suggest that the green outer leaves of Y-05 may supply enough photosynthate for the plants to grow. These findings reveal that the change in inner leaf color is caused by cold acclimation and does not affect plant development. However, why the yellow phenotype occurs only in young inner leaves needs further investigation. One possibility is that cold acclimation acts only in the early stage of thylakoid development in Y-05; the outer leaves, which have mature chloroplasts and developed thylakoids, do not respond to cold acclimation. Generally, low chlorophyll content is caused by the inhibition of chlorophyll synthesis or by chlorophyll degradation [4]. Chlorophyll synthesis includes 19 steps from Glu-tRNA to Chl b; in total, 16 enzymes encoded by more than 26 genes work in this process [48,49]. For example, 'White Dove' is a leaf color mutant of kale (Brassica oleracea); low temperature induced low expression of the chlorophyll biosynthesis gene POR, leading to a dramatically reduced chlorophyll content in 'White Dove' [50]. PORA, encoding a protochlorophyllide oxidoreductase, plays an important role in chlorophyll biosynthesis; Arabidopsis porA-1 seedlings suffer from a drastically reduced chlorophyll content and a dwarf phenotype [51]. OsCAO1 mainly controls the synthesis of Chl b, and its mutation decreases the Chl b content, resulting in a yellow-green leaf color [52]. Meanwhile, there is a dynamic balance between the biosynthesis and catabolism of chlorophyll in plants. High expression of the CHL2 and RCCR genes accelerates chlorophyll degradation, resulting in leaf yellowing in Cymbidium sinense [53]. Mutations in the chlorophyll degradation pathway often lead to a stay-green phenotype in plant leaves [54,55]. Further, chlorophyll degradation in many plants is usually accompanied by senescence or injury [47]. For example, many plants show colorful leaves in autumn, which is related to chlorophyll degradation [56]. Moreover, some stresses, both biotic and abiotic, trigger cell death and chlorophyll degradation [57]. In our study, the yellow inner leaves of Y-05 are caused by decreased chlorophyll content and undeveloped thylakoids (Fig. 1b, j, k). More importantly, the yellow leaf phenotype occurred only in the inner leaves of Y-05 and not in the outer leaves. If the yellow leaf phenotype were caused by chlorophyll degradation, both outer and inner leaves should turn yellow after cold acclimation. Hence, we suggest that the low chlorophyll content in the yellow inner leaves of Y-05 is caused by the inhibition of chlorophyll synthesis rather than by chlorophyll degradation.
This idea is also supported by the impaired thylakoid membranes (Fig. 1j, k), since the thylakoid membrane is the site of chlorophyll synthesis [58].
To further investigate the mechanism of leaf color conversion in pakchoi Y-05, transcriptome analyses of the green outer leaves and yellow inner leaves were performed (Figs. 2, 3 and 4, Tables S1-S5). Both the GO and KEGG analyses further confirmed the low chlorophyll content (Fig. 1b), weak photosynthetic capacity (Fig. 1c), and impaired chloroplast structure (Fig. 1j, k) in the inner leaves of Y-05. Metabolites serve as a bridge between genotype and phenotype: because metabolites are closest to phenotypes, their changes more directly reveal gene functions [59]. Based on the metabolome data, we found that the low chlorophyll content in the yellow inner leaves is closely associated with the block of ALA synthesis (Table S6). Complementation and inhibition experiments with ALA (Figs. 5 and 6) further support this finding. Moreover, the transcriptome data revealed that the high expression of BrFLU plays a key role in the block of ALA synthesis (Fig. S4 and Table S7). The FLU protein has conserved TPR motifs at its C-terminus, which interact with GluTR [35,60] and thus influence the rate of ALA synthesis [41]. In previous studies in Arabidopsis, FLU-overexpressing lines showed decreased ALA synthesis and reduced chlorophyll content in the light [60]. In our study, silencing of BrFLU in Y-05 pakchoi further confirmed the contribution of BrFLU to the impaired ALA synthesis and decreased chlorophyll content (Fig. 7). Taken together, our results suggest that the impaired ALA synthesis is closely related to enhanced BrFLU expression, resulting in decreased chlorophyll content and leaf yellowing in Y-05. However, we did not find significant differences in the ORF or promoter sequences of BrFLU between Y-05 and the other varieties (Figs. S5-S6). Therefore, the reason for the specific upregulation of BrFLU in cold-acclimated Y-05 remains unclear. In other words, the upregulation of BrFLU in Y-05 is regulated by an unknown regulator that responds to cold acclimation (Fig. 8). For this unknown regulator, we suggest three possibilities. The first is a positive transcription factor of BrFLU that is induced by cold acclimation, activating BrFLU expression. The second is a negative transcription factor of BrFLU that is repressed by cold acclimation, relieving the inhibition of BrFLU expression. The last possibility is epigenetic regulation: cold acclimation may decrease the BrFLU methylation level, resulting in enhanced BrFLU expression.
Conclusions
Y-05 is a special pakchoi cultivar which shows green leaves when grown at room temperature but displays yellow inner and green outer leaves after cold acclimation. In cold-acclimated Y-05 pakchoi, compared with the green outer leaves, the yellow inner leaves exhibited low chlorophyll content, weak photosynthetic capacity, and undeveloped chloroplasts and thylakoids. Through comprehensive transcriptome and metabolome sequencing analyses and functional verification, we found that cold acclimation can trigger an unknown regulator, inducing BrFLU upregulation to block ALA synthesis and resulting in decreased chlorophyll content and leaf yellowing (Fig. 8). However, the unknown regulator needs to be further explored. Finally, our findings provide insight into the mechanisms underlying the leaf color change in response to cold acclimation.
Plant materials and growth conditions
All pakchoi inbred lines (Y-05, G-04, WTC, 2Q, LY, MET, SZQ) were grown in pots containing a soil:sand mixture (3:1) in a controlled artificial climate chamber under long-day conditions (16 h light / 8 h dark) at 23°C, 70% humidity, and 250 μmol·m⁻²·s⁻¹ light. For cold acclimation, two-month-old plants were grown for 3 weeks at 4°C and then returned to 23°C for continued growth. After 2 weeks of growth at 23°C, cold-acclimated Y-05 exhibited green outer leaves (GOU) and yellow inner leaves (YIN). The GOU and YIN leaves were used for subsequent studies. Under the same growing conditions, G-04, which showed a stay-green phenotype before and after cold acclimation, was used as the control group for the pigment determination and morphological observation experiments. All plant materials used in this research were from the State Key Laboratory of Crop Genetics and Germplasm Enhancement of Nanjing Agricultural University.
Measurement of pigments and ALA content
The plant pigments, including chlorophyll and carotenoids, were measured as described in previous studies [9,61,62] with minor modifications. In brief, 0.1 g of fresh leaf was soaked in 15 mL of extracting solution and shaken at 50 rpm for 24 h in the dark. Anthocyanin was measured by the method of Huo [63]. In brief, 0.1 g of fresh leaf was soaked in 1 mL of acidified ethanol (80% ethanol with 0.1% hydrochloric acid) and kept for 24 h at 4°C in the dark. The absorbance was measured at a wavelength of 536 nm. All pigment concentrations were calculated as mg·g⁻¹. The ALA concentrations (μg·g⁻¹) were measured using an enzyme-linked immunosorbent assay (ELISA) kit (Cat No: KT7958-B, Jiangsu Kete Biotechnology Co., Ltd., China).

Fig. 8 The proposed model of chlorophyll biosynthesis in Y-05 under normal or low temperature. At normal temperature, the expression of BrFLU remains stable, and Glu is converted to ALA to maintain normal chlorophyll biosynthesis. Under low temperature, cold acclimation triggers an unknown regulator, inducing BrFLU upregulation and the interaction between BrFLU and GluTR to block ALA synthesis, resulting in decreased chlorophyll content and leaf yellowing in Y-05. L-glutamic acid, glutamyl-tRNA reductase, 5-aminolevulinate, and total chlorophyll are abbreviated as Glu, GluTR, ALA, and Chl, respectively. Red and green arrows represent up-regulated and down-regulated compounds, respectively.
Transmission electron microscopy (TEM) analysis
Ultra-thin sections were prepared following the method of Maekawa [64]. In brief, 2 × 2 mm leaf pieces were cut from pakchoi plants with a double-sided blade and fixed in 1% (w/v) glutaraldehyde; after several washes with phosphate buffer, the samples were further fixed in 0.5% (w/v) osmium tetroxide. After infiltration with resin, ultra-thin sections were cut using an ultramicrotome (EM UC6, Leica Microsystems). For transmission electron microscopy, the ultra-thin sections were examined and photographed using a Hitachi (Tokyo, Japan) H-7650 TEM, as previously described [65].
Transcriptome analysis
The third fully expanded leaves from the yellow part and from the green part at the center were sampled 2 weeks after cold acclimation, with the same part of three independent plants serving as biological replicates. After cleaning and cutting, the leaf tissues were immediately frozen in liquid nitrogen and stored at −80°C for further research. Total RNA was extracted from the YIN and GOU of Y-05 using TRIzol reagent (Thermo Fisher Scientific Inc.). Library construction and RNA-seq were performed by Biomarker Technology Co. (Beijing, China). The Illumina HiSeq 2500 platform (NEB, USA) was used for sequencing of the library preparations. After filtering (low Q-value ≤ 20%), clean reads were assembled and mapped to the Brassica rapa genome (V2.5) (http://brassicadb.org/brad/index.php) using HISAT2 (version 2.1.0) [66]. The function of each differentially expressed gene (DEG) was annotated against the following public databases: Nr (NCBI non-redundant protein sequences), KOG/COG (Clusters of Orthologous Groups of proteins), Pfam (Protein family), SwissProt, KEGG (Kyoto Encyclopedia of Genes and Genomes) [67], and GO (Gene Ontology). Differential expression analysis was performed using DESeq2 [68] based on the expression levels of the genes in the different samples. An absolute value of log2(fold change) ≥ 2 and a P-value < 0.05 [69] were the criteria for identifying significant differential expression.
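As a concrete illustration of the DEG call, the filter below applies the stated thresholds to a DESeq2 result table. It is a sketch: the column names (`log2FoldChange`, `pvalue`) follow DESeq2's default output, but the actual exported table may differ, and the orientation of the contrast (YIN versus GOU) is assumed.

```python
import numpy as np
import pandas as pd

def call_degs(results: pd.DataFrame,
              lfc_col: str = "log2FoldChange",
              p_col: str = "pvalue") -> pd.DataFrame:
    """Return DEGs with |log2(fold change)| >= 2 and P-value < 0.05."""
    mask = (results[lfc_col].abs() >= 2) & (results[p_col] < 0.05)
    degs = results.loc[mask].copy()
    # positive log2FC means higher expression in YIN under the assumed contrast
    degs["direction"] = np.where(degs[lfc_col] > 0, "up_in_YIN", "down_in_YIN")
    return degs
```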
Untargeted metabolome analysis
The samples for untargeted metabolome analysis were the same as those for the transcriptome analysis, but to distinguish them from the transcriptomes in the data analysis, the metabolites in yellow-inner and green-outer leaves were named MIN and MOU, respectively. Metabolite identification and quantification were performed by Biomarker Technology Co. (Beijing, China). In brief, metabolites were annotated against the following public databases: HMDB (http://www.hmdb.ca/) and KEGG (Kyoto Encyclopedia of Genes and Genomes), following standard metabolic operating procedures. Multiple-reaction monitoring (MRM) was used for metabolite quantification. Targeted UPLC-ESI-QTOF/MS profiling and multivariate data analysis (PCA and OPLS-DA) [70] were used to obtain more reliable information about the metabolites. For all differential metabolites, each with three biological replicates, an absolute value of log2(fold change) ≥ 1, a variable importance in projection (VIP) ≥ 1, and a P-value < 0.05 [69] were the criteria for identifying significantly differential metabolites.
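The metabolite call follows the same pattern as the DEG filter, with the OPLS-DA VIP score added; the column names here are again hypothetical placeholders for whatever the quantification pipeline exports.

```python
import pandas as pd

def call_dems(results: pd.DataFrame) -> pd.DataFrame:
    """Return DEMs with |log2FC| >= 1, VIP >= 1 and P-value < 0.05 (MIN vs MOU)."""
    mask = ((results["log2FC"].abs() >= 1)
            & (results["VIP"] >= 1)
            & (results["pvalue"] < 0.05))
    return results.loc[mask]
```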
Treatment of ALA and gabaculine
For ALA treatment, once the cold-acclimated Y-05 plants were transferred to the 23°C chamber, the leaves were immediately sprayed with 1 mM ALA (5-aminolevulinic acid; CAS: 5451-09-2; J&K Scientific Ltd.) solution once every 3 days for 2 weeks. Y-05 plants sprayed with water were used as controls.
Virus-induced gene silencing (VIGS) analysis
VIGS was used to generate silenced pakchoi as we previously described [72]. A specific 40-bp fragment of the BrFLU coding region together with its antisense sequence (5′-CTGAGGAATCAAGAGCCAGAGAAGGCTTTTGAAGAGTTCATGAACTCTTCAAAAGCCTTCTCTGGCTCTTGATTCCTCAG-3′) was synthesized and inserted into the pTY vector [72]. The construct was then introduced into cells of one-month-old Y-05 seedlings using a gene gun (Biolistic PDS-1000/He, Bio-Rad, USA). Seedlings bombarded with the empty pTY vector were used as controls. After 1 week of growth under normal conditions, both BrFLU-silenced and control plants were moved to a 4°C chamber for 3 weeks of cold acclimation and then returned to 23°C for continued growth.
RNA isolation and gene expression analysis
Total RNA was extracted from leaves using TRIzol reagent (Thermo Fisher Scientific Inc.) [73], and cDNA was synthesized with HiScript II Q RT SuperMix for qPCR (Cat No. R223-01, Vazyme, Nanjing, China). qPCR was then performed using the SYBR Green Premix Pro Taq HS qPCR Kit II (Rox Plus) (Cat No. AG11719, Accurate Biotechnology (Hunan) Co., Ltd., China) on a StepOnePlus system (Applied Biosystems, USA). The relative expression of genes was analyzed by the 2^−ΔΔCT method [74] and normalized to the internal control gene BrPP2A for pakchoi. Each reaction was performed with three technical replicates and three independent biological replicates (biological replicates: leaves of the same part from three independent plants; technical replicates: repeated detection and analysis of the same sample). Primers for qPCR analysis were designed with Primer Software Version 5.0 (Premier Biosoft International, CA, USA) and are shown in Table S8.
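For reference, the 2^−ΔΔCT calculation [74] reduces to a few lines. In the sketch below the Ct values are invented placeholders; BrPP2A serves as the reference gene, and the calibrator sample would be, for example, GOU.

```python
from statistics import mean

def rel_expression_ddct(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^-ddCT relative expression.

    ct_target / ct_ref         -- Ct replicates for the gene of interest and the
                                  reference gene (BrPP2A) in the test sample
    ct_target_cal / ct_ref_cal -- the same pair in the calibrator sample
    """
    d_ct = mean(ct_target) - mean(ct_ref)              # normalize to BrPP2A
    d_ct_cal = mean(ct_target_cal) - mean(ct_ref_cal)
    return 2 ** -(d_ct - d_ct_cal)

# hypothetical Ct values: BrFLU in YIN vs GOU, three technical replicates each
print(rel_expression_ddct([24.1, 24.0, 24.2], [21.0, 21.1, 20.9],
                          [27.0, 26.9, 27.1], [21.0, 21.2, 21.1]))  # ~7-fold
```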
Data analysis
PCA of all samples and volcano maps were generated on the BMK Cloud platform (www.biocloud.net). Heatmaps and cis-acting element analysis of the promoter were conducted with TBtools v1.05 [75]. The full-length open reading frame (ORF) of BrFLU was obtained from the NCBI database using BLASTN, and sequences were amplified from the cultivars Y-05, G-04, WTC, 2Q, LY, MET, and SZQ. Multiple sequence alignment was performed with the online software MultAlin [76]; amino acid sequence alignment was performed with DNAMAN. The 1.2-kb promoter sequence of BrFLU was identified by querying the B. rapa genome with BLASTN. The primers are shown in Table S8. Promoter sequences were analyzed using the PlantCARE database (http://bioinformatics.psb.ugent.be/webtools/plantcare/html/). All methods and materials in this study complied with relevant institutional, national, and international guidelines and legislation.
Additional file 1: Table S1. Description of read data, quality control, and GC content of TIN and TOU. Table S2. Differential expression analysis and functional annotation of DEGs between TIN and TOU. Table S3. Functional categorization of DEGs between TIN and TOU by Gene Ontology (GO) analysis.
Additional file 2: Fig. S1. The pigment contents in G-04 and Y-05 leaves. Fig. S2. The correlation between replicates and volcano map of DEGs between TIN and TOU. Fig. S3. The correlation between replicates and volcano map of DEMs between MIN and MOU. Fig. S4. The expression profiles of BrHEMA1, BrGSA1, BrGBP, and BrFLU between TIN and TOU. Fig. S5. Alignment of BrFLU nucleotide sequences in seven pakchoi varieties. Fig. S6. Alignment of BrFLU amino acid sequences in seven pakchoi varieties. Fig. S7. Promoter motif analysis of BrFLU in different pakchoi varieties.
"year": 2021,
"sha1": "0719b68f565c0cdf4036e59230afe03692833e08",
"oa_license": "CCBY",
"oa_url": "https://bmcplantbiol.biomedcentral.com/track/pdf/10.1186/s12870-021-02954-2",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "0719b68f565c0cdf4036e59230afe03692833e08",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
119169865 | pes2o/s2orc | v3-fos-license | Effective conductivity of a singularly perturbed periodic two-phase composite with imperfect thermal contact at the two-phase interface
We consider the asymptotic behaviour of the effective thermal conductivity of a two-phase composite obtained by introducing into an infinite homogeneous matrix a periodic set of inclusions of a different material and of size proportional to a positive parameter ǫ. We are interested in the case of imperfect thermal contact at the two-phase interface. Under suitable assumptions, we show that the effective thermal conductivity can be continued real analytically in the parameter ǫ around the degenerate value ǫ = 0, in correspondence of which the inclusions collapse to points. The results presented here are obtained by means of an approach based on functional analysis and potential theory and are also part of a forthcoming paper by the authors.
Introduction
This note is devoted to the analysis of the effective thermal conductivity of a two-phase periodic composite, consisting of a matrix and of a periodic set of inclusions, with thermal resistance at the two-phase interface. Two possibly different materials fill the matrix and the inclusions. We assume that these materials are homogeneous and isotropic heat conductors. As a consequence, the conductivity of each of these two materials is represented by a positive scalar. Moreover, we assume that the size of each inclusion is proportional to a certain parameter ǫ > 0, and that as ǫ tends to zero each inclusion collapses to a point. The normal component of the heat flux is assumed to be continuous at the composite interface, while we impose that the temperature field displays a jump proportional to the normal heat flux by means of a parameter ρ(ǫ) > 0. Such a discontinuity in the temperature field has been largely investigated since 1941, when Kapitza carried out the first systematic study of thermal interface behaviour in liquid helium (see, e.g., Swartz and Pohl [1], Lipton [2] and references therein). In this note, we investigate the asymptotic behaviour of the effective thermal conductivity when the positive parameter ǫ is close to the degenerate value 0. Benveniste and Miloh in [3] introduced the expression which defines the effective conductivity of a composite with imperfect contact conditions by generalizing the dual theory of the effective behaviour of composites with perfect contact (see also Benveniste [4] and, for a review of the subject, e.g., Drygas and Mityushev [5]). By the argument of Benveniste and Miloh, in order to evaluate the effective conductivity, one has to study the thermal distribution of the composite when so-called "homogeneous conditions" are prescribed. As a consequence, we introduce a particular transmission problem with non-ideal contact conditions where we impose that the temperature field displays a fixed jump along a prescribed direction and is periodic in all the other directions (cf. problem (1) below).
We fix once and for all n ∈ N \ {0, 1}. Then we introduce the periodicity cell Q by setting Q ≡ ]0, 1[ⁿ. We fix a bounded open connected subset Ω of Rⁿ of the Schauder class C^{1,α} such that the complementary set of its closure cl Ω is connected and such that the origin 0 of Rⁿ belongs to Ω. We note that, by requiring that Rⁿ \ cl Ω be connected, we assume that the set Ω has no holes. The set Ω represents the "shape" of the inclusions. Next we fix a point p in the fundamental cell Q, and for each ǫ ∈ R we set

Ω_{p,ǫ} ≡ p + ǫΩ .
Clearly, there exists ǫ₀ > 0 small enough such that cl Ω_{p,ǫ} ⊆ Q for all ǫ ∈ ]−ǫ₀, ǫ₀[. For ǫ ∈ ]0, ǫ₀[, the set Ω_{p,ǫ} represents the inclusion in the fundamental cell Q. We note that for ǫ = 0 the set Ω_{p,ǫ} degenerates into the set {p}.
We are now in a position to define the periodic domains S[ǫ] and S[ǫ]⁻ by setting

S[ǫ] ≡ ⋃_{z ∈ Zⁿ} (z + Ω_{p,ǫ}) ,    S[ǫ]⁻ ≡ Rⁿ \ cl S[ǫ] ,

for all ǫ ∈ ]−ǫ₀, ǫ₀[. We observe that for ǫ = 0 the sets S[ǫ] and S[ǫ]⁻ degenerate into p + Zⁿ and into Rⁿ \ (p + Zⁿ), respectively.
Next, we take two positive constants λ⁺, λ⁻ and a function ρ from ]0, ǫ₀[ to ]0, +∞[. For each j ∈ {1, . . . , n} and ǫ ∈ ]0, ǫ₀[ we consider the following transmission problem (1) for a pair of functions (u⁺_j, u⁻_j), where ν_{Ω_{p,ǫ}} denotes the outward unit normal to ∂Ω_{p,ǫ}. The functions u⁺_j and u⁻_j represent the temperature field in the inclusions occupying S[ǫ] and in the matrix occupying S[ǫ]⁻, respectively. The parameters λ⁺ and λ⁻ represent the thermal conductivities of the materials which fill the inclusions and the matrix, respectively, whereas the parameter ρ(ǫ) plays the role of the interfacial thermal resistivity. The fifth condition in (1) means that the normal heat flux is continuous across the two-phase interface. The sixth condition says that the temperature field has a jump proportional to the normal heat flux by means of the parameter ρ(ǫ). The third and fourth conditions in (1) imply that the temperature distributions u⁺_j and u⁻_j have a jump equal to 1 in the direction e_j and are periodic in all the other directions. Finally, the seventh condition in (1) is an auxiliary condition which we introduce in order to have uniqueness for the solution of problem (1). Since the effective conductivity is invariant under constant modifications of the temperature field, such a condition does not interfere with its definition.
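The display of problem (1) itself did not survive extraction. Assembling the seven conditions just described gives a system of the following shape; this is a reconstruction, and in particular the orientation of the temperature jump in the sixth equation (u_j^- − u_j^+ rather than the opposite sign) is our assumption:

```latex
\[
\begin{cases}
\Delta u_j^{+} = 0 & \text{in } S[\epsilon],\\
\Delta u_j^{-} = 0 & \text{in } S[\epsilon]^{-},\\
u_j^{+}(x+e_i) = u_j^{+}(x)+\delta_{ij}
  & \forall x \in \mathrm{cl}\,S[\epsilon],\ \forall i\in\{1,\dots,n\},\\
u_j^{-}(x+e_i) = u_j^{-}(x)+\delta_{ij}
  & \forall x \in \mathrm{cl}\,S[\epsilon]^{-},\ \forall i\in\{1,\dots,n\},\\
\lambda^{+}\,\partial_{\nu_{\Omega_{p,\epsilon}}} u_j^{+}
  = \lambda^{-}\,\partial_{\nu_{\Omega_{p,\epsilon}}} u_j^{-}
  & \text{on } \partial\Omega_{p,\epsilon},\\
\lambda^{+}\,\partial_{\nu_{\Omega_{p,\epsilon}}} u_j^{+}
  = \tfrac{1}{\rho(\epsilon)}\bigl(u_j^{-}-u_j^{+}\bigr)
  & \text{on } \partial\Omega_{p,\epsilon},\\
\displaystyle\int_{\partial\Omega_{p,\epsilon}} u_j^{+}\, d\sigma = 0 .
\end{cases}
\]
```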
Boundary value problem (1) is clearly singular for ǫ = 0. Indeed, both the domains S[ǫ] and S[ǫ]⁻ are degenerate when ǫ = 0. Moreover, the presence of the factor 1/ρ(ǫ) may produce a further singularity if ρ(ǫ) → 0 as ǫ tends to 0⁺. In this note, we consider the case in which the limit r* ≡ lim_{ǫ→0⁺} ǫ/ρ(ǫ) exists finite in R. We emphasize that we make no regularity assumption on the function ρ.
What can be said of the map ǫ → λ^eff_{kj}[ǫ] when ǫ is close to 0 and positive? (2)
Questions of this type are not new and have long been investigated with the methods of Asymptotic Analysis.
Thus, for example, one could resort to the techniques of Asymptotic Analysis and may succeed in writing out an asymptotic expansion for λ^eff_{kj}[ǫ] of the type λ^eff_{kj}[ǫ] = P(ǫ) + R(ǫ), where P is a regular function and R a remainder which is smaller than a known positive function of ǫ.
In this note, instead, we wish to answer the question in (2) by exploiting the different approach proposed by Lanza de Cristoforis. Namely, our aim is to represent λ^eff_{kj}[ǫ] for ǫ small and positive in terms of real analytic functions of the variable ǫ defined on a whole neighbourhood of 0, and of explicitly known functions of ǫ. This approach does have its advantages. Indeed, if we know, for example, that there exist ǫ′ ∈ ]0, ǫ₀[ and a real analytic function Λ from ]−ǫ′, ǫ′[ to R such that

λ^eff_{kj}[ǫ] = Λ[ǫ] for all ǫ ∈ ]0, ǫ′[ ,

then we can deduce the existence of ǫ″ ∈ ]0, ǫ′[ and of a sequence {a_j}_{j=0}^{+∞} of real numbers such that

λ^eff_{kj}[ǫ] = Σ_{j=0}^{+∞} a_j ǫ^j for all ǫ ∈ ]0, ǫ″[ ,

where the series in the right-hand side converges absolutely on ]−ǫ″, ǫ″[. As we shall see, this is the case if ǫ/ρ(ǫ) has a real analytic continuation around 0 (for example, if ρ(ǫ) = ǫ or if ρ is constant). Such a project has been carried out in the case of a simple hole, e.g., in Lanza [15] (see also [16]), and has later been extended to problems related to the system of equations of linearized elasticity in [17,18,19] and to the Stokes system in [20], and to the case of problems in an infinite periodically perforated domain in [14,21].
Strategy
We briefly outline our strategy. First of all, we recall that boundary value problem (1), which we consider only for positive ǫ, is singular for ǫ = 0. For ǫ in ]0, ǫ₀[ we can convert problem (1) into an equivalent system of integral equations defined on the ǫ-dependent domain ∂Ω_{p,ǫ} by exploiting periodic potential theory (cf., e.g., [23]). Then, by an appropriate change of functional variables, we can desingularize the problem and obtain an equivalent system of integral equations defined on the fixed domain ∂Ω. By means of the Implicit Function Theorem for real analytic maps in Banach spaces, we can analyse the dependence upon ǫ of the solutions of the system of integral equations and prove our main results. Further details will be presented in a forthcoming paper by the authors (see [24]).
For a proof, we refer to [24]. Here, we note that if ǫ/ρ(ǫ) has a real analytic continuation around 0, then the term on the right-hand side of equality (4) defines a real analytic function of the variable ǫ in the whole of a neighbourhood of 0. Accordingly, the term on the left-hand side of equality (4), which is defined only for positive values of ǫ, can be continued real analytically for ǫ ≤ 0. As a consequence, λ^eff_{kj}[ǫ] can be expressed for ǫ small and positive in terms of a power series which converges absolutely on a whole neighbourhood of 0.
Moreover, in the following Theorem 5 we give more information on λ^eff_{kj}[ǫ] for ǫ close to 0 by expressing Λ_{kj}[0, r*] by means of a certain quantity related to the solutions of a limiting transmission problem (for a proof we refer to [24]).
where |Ω|_n denotes the n-dimensional measure of Ω, and where (ũ⁺_j, ũ⁻_j) is the unique solution in C¹(cl Ω) × C¹(Rⁿ \ Ω) of the following transmission problem. If we also assume that r* = 0, then Λ_{kj}[0, 0] can be expressed by means of equality (7), where ṽ⁻_j is the unique solution in C¹(Rⁿ \ Ω) of the following exterior Neumann problem (8).
Proof. Since r* = 0, by Theorem 5 one deduces an expression for Λ_{kj}[0, 0] in which ṽ⁻_j is the unique solution in C¹(Rⁿ \ Ω) of (8) and ṽ⁺_j is the unique solution in C¹(cl Ω) of an analogous interior problem. Then the divergence theorem allows one to compute the boundary integrals which appear, and the validity of the proposition follows by a straightforward calculation.
If we further assume that Ω is the unit ball Bⁿ in Rⁿ, then we have the following result, in which s_n denotes the (n − 1)-dimensional measure of ∂Bⁿ.
Proof. By the assumption Ω = Bⁿ, one verifies that the unique solution ṽ⁻_j of problem (8) is given by an explicit formula. Then, by the divergence theorem one has

∫_{∂Bⁿ} ṽ⁻_j(t) (ν_{Bⁿ}(t))_k dσ_t = (1/(n − 1)) |Bⁿ|_n δ_{k,j} ,

where |Bⁿ|_n denotes the n-dimensional measure of Bⁿ. Now the validity of the proposition follows by equality (7) and by a straightforward calculation (note also that s_n = n |Bⁿ|_n).
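For completeness, the divergence-theorem step can be spelled out. On the unit sphere the outward normal is ν_{Bⁿ}(t) = t; if, consistently with the displayed identity, the boundary value of ṽ⁻_j is t_j/(n − 1) (the explicit solution of (8) is not reproduced above, so this boundary value is inferred rather than quoted), then:

```latex
\[
\int_{\partial B^n} t_j\,\bigl(\nu_{B^n}(t)\bigr)_k\, d\sigma_t
 = \int_{\partial B^n} t_j\, t_k\, d\sigma_t
 = \int_{B^n} \frac{\partial x_j}{\partial x_k}\, dx
 = \delta_{k,j}\, |B^n|_n ,
\]
% so that
\[
\int_{\partial B^n} \tilde v_j^{-}(t)\,\bigl(\nu_{B^n}(t)\bigr)_k\, d\sigma_t
 = \frac{1}{n-1}\,\delta_{k,j}\, |B^n|_n .
\]
```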
Further remarks
We observe that we can also investigate the asymptotic behaviour of suitable restrictions of the functions u⁺_j[ǫ] and u⁻_j[ǫ] as ǫ tends to 0. Moreover, we can analyse the case in which we add to the fifth and sixth conditions in (1) suitable functions defined on ∂Ω_{p,ǫ}, thus considering nonhomogeneous boundary conditions.
"year": 2013,
"sha1": "43605b696da36d4ae9b4770596ab3cb6de399a4e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1306.6178",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "43605b696da36d4ae9b4770596ab3cb6de399a4e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Materials Science"
]
} |
195696034 | pes2o/s2orc | v3-fos-license | Acute Liver Failure Due to Severe Hepatic Metastasis of Small-cell Lung Cancer Producing Adrenocorticotropic Hormone Complicating Ectopic Cushing Syndrome
A 72-year-old man was admitted to a general hospital with progressive liver dysfunction, hypokalemia, hyperglycemia, and nodules in the lung and liver, and was then transferred to our institution on the seventh hospital day. The plasma concentrations of adrenocorticotropic hormone (ACTH), cortisol, and neuron-specific enolase were extremely high. He developed acute liver failure, his consciousness and general condition deteriorated rapidly, and he died on Day 11. At the postmortem examination, he was found to have extensive metastases from small-cell lung cancer, including advanced hepatic metastases. This is the first reported case of acute liver failure caused by metastases from an ACTH-producing pulmonary small-cell carcinoma.
Introduction
Patients with prothrombin times ≤40% of the standardized value or international normalized ratios (INRs) ≥1.5 caused by severe liver damage within 8 weeks of the onset of symptoms, and in whom blood laboratory data and imaging indicate that liver function was normal prior to the current acute episode, are classified as having acute liver failure (ALF) (1).
Although liver metastases commonly occur in individuals with cancer, diffuse liver infiltration rarely results in ALF. Only 7.2% of patients with metastatic liver disease develop ALF-related coma, and their metastases mainly originate from breast, gastric, or colon cancer, or lymphoma (2). ALF caused by hepatic parenchymal metastases from small-cell lung cancer (SCLC) is extremely rare, and the prognosis is poor, with death usually occurring within several days (3). SCLC cells occasionally produce adrenocorticotropic hormone (ACTH) and can cause severe paraneoplastic Cushing syndrome (4).
We herein report what is, to our knowledge, the first case of ALF caused by extensive hepatic metastases from an ACTH-producing SCLC.
Case Report
A 72-year-old man was admitted to a general hospital (Day 1) because of general fatigue and excessive thirst for 6 days. He had no history of blood transfusion, hepatitis, or alcohol abuse, but had smoked an average of 1 pack of cigarettes a day for 52 years. Blood tests showed hypokalemia (K 2.5 mEq/L), high blood glucose (521 mg/dL), and mild hepato-renal dysfunction (total bilirubin 1.6 mg/dL, aspartate transaminase 81 IU/L, alanine aminotransferase 88 IU/L, and creatinine 1.95 mg/dL) (Table). The prothrombin time is shown in the Table. Plain computed tomography (CT) (Fig. 1) revealed nodular lesions in the peripheral region and hilum of the left lung. The liver was enlarged, and there were some low-density nodules in both lobes. Because of the patient's renal dysfunction, no contrast medium was administered.
After admission, insulin therapy was instituted, and supplementary potassium was administered. Although the plasma glucose and serum potassium concentrations gradually normalized, the aspartate and alanine aminotransferase concentrations increased rapidly, and he was transferred to our institution on Day 7.
On admission to our institution, the patient was alert. Jaundice was noted. The significant laboratory abnormalities on Day 1 (at the previous hospital) and Day 7 are listed in the Table. By Day 7, he had developed acute liver failure (ALF) (total bilirubin 7.6 mg/dL, INR 1.73, NH3 113 μg/dL). Serological tests for hepatitis A and C viruses were negative. Although hepatitis B (HB) surface antigen was weakly positive, HBV-DNA and HB core antibody were negative; thus, the weakly positive surface antigen was considered a false positive.
Given the findings of pulmonary nodules on chest CT together with his severe hypokalemia and hyperglycemia, lung cancer with an associated endocrine disorder, most likely an ACTH-producing SCLC, was suspected.
The serum concentrations of tumor markers for SCLC on Day 8 were as follows: neuron-specific enolase (NSE) 1,210 ng/mL and progastrin-releasing peptide (Pro-GRP) 20,500 pg/mL. Plasma cortisol and ACTH concentrations were both extremely high (Table). Taken together, these findings resulted in a diagnosis of paraneoplastic Cushing syndrome.
Because of the patient's renal dysfunction, enhanced abdominal CT and magnetic resonance imaging (MRI) could not be performed. Abdominal ultrasonography (US) revealed multiple, small, poorly demarcated nodules in both lobes of the liver. Contrast enhancement using perflubutane microbubbles (Sonazoid; Daiichi Sankyo, Tokyo, Japan) showed circularly enhanced peripheral nodules on re-perfusion images in the post-vascular phase (5) and markedly heterogeneous liver parenchymal enhancement in the post-vascular phase (Fig. 2). These findings indicated liver metastases. A liver biopsy was performed on Day 8, and a rapid pathological examination resulted in a provisional diagnosis of metastatic small-cell carcinoma. Thus, a diagnosis of ACTH-producing SCLC and ALF caused by hepatic metastases from that SCLC was made on Day 9.
Although the patient's general condition was good on Day 7, he gradually lost consciousness and developed a flapping tremor. His hepatic and renal dysfunction progressed rapidly, and he died on Day 11.
A postmortem examination (Fig. 3) showed a markedly enlarged liver (2,005 g), the cut surfaces of which revealed nodules of varying sizes distributed diffusely throughout the liver tissue. A microscopic examination revealed massive diffuse replacement of the liver parenchyma by SCLC cells. No hepatic fibrosis was seen. The primary lesion was identified in the left lower lobe of the lung, and metastases were detected in the left chest wall, lumbar vertebrae, sternum, dura, and pituitary gland; these metastases had not been diagnosed while the patient was alive. Immunohistochemical staining of the SCLC cells was positive for synaptophysin, chromogranin A, CD56, and, partially, ACTH.
Discussion
Patients with prothrombin times that are ≤40% of the standardized value or INRs of ≥1.5 caused by severe liver damage within 8 weeks of the onset of symptoms, and in whom blood laboratory data and imaging indicate that the liver function was normal prior to the current acute episode, are classified as having ALF (1). These Japanese criteria for ALF were introduced in 2011. Patients without hepatic encephalopathy but with an INR of ≥1.5 are also classified as having ALF.
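For readers who want the quoted thresholds in executable form, they can be sketched as follows. This is a minimal illustration of the cut-offs stated above; the function name and simplified inputs are ours, and it omits the encephalopathy grading and clinical judgment that the full criteria require:

```python
from typing import Optional

def meets_alf_thresholds(inr: float,
                         weeks_since_onset: float,
                         prior_liver_function_normal: bool,
                         pt_percent: Optional[float] = None) -> bool:
    """Sketch of the 2011 Japanese ALF thresholds quoted in the text:
    prothrombin time <= 40% of the standardized value or INR >= 1.5,
    with severe liver damage within 8 weeks of symptom onset in a
    previously normal liver."""
    coagulopathy = inr >= 1.5 or (pt_percent is not None and pt_percent <= 40.0)
    return (coagulopathy
            and weeks_since_onset <= 8.0
            and prior_liver_function_normal)

# The present patient on Day 7 (INR 1.73, roughly two weeks after symptom
# onset, no prior liver disease) meets the coagulopathy and timing thresholds:
print(meets_alf_thresholds(inr=1.73, weeks_since_onset=2.0,
                           prior_liver_function_normal=True))  # True
```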
The present patient had no history of liver disease and no evidence of chronic liver disease or fibrosis at the postmortem examination. He had thrombocytopenia of undetermined cause. His SCLC had not invaded his bone marrow, and drug toxicity was unlikely. The INR of the prothrombin time was prolonged. The concentrations of fibrin degradation products and D-dimers (Table) were normal, and there was no evidence of disseminated intravascular coagulation at the autopsy.
Although alkaline phosphatase and γ-glutamyltranspeptidase concentrations were high, suggesting biliary congestion, the duration of such congestion was too brief to have resulted in vitamin K deficiency. In addition, the concentration of protein induced by vitamin K absence/antagonist-II was just above the upper limit of normal, excluding vitamin K deficiency. We considered his coagulopathy to be attributable to liver failure. Based on these findings, we diagnosed him with ALF.
Our patient developed a disturbance in consciousness of indeterminate cause 3 days prior to his death. Serum concentrations of electrolytes were close to normal. Although pituitary and dural metastatic lesions were detected at the autopsy, they did not seem large or numerous enough to have caused the patient's coma. Based on his NH3 concentration and flapping tremor, we speculated that our patient's coma was due to hepatic encephalopathy. However, we did not check his arterial blood gases, so we could not exclude acidosis.
The liver is the most common site for metastases; however, ALF secondary to metastases is rare. In one reported series, only 21 (7.2%) of 292 patients with metastatic liver disease developed ALF-related coma, and this occurred mainly in patients with breast, gastric, or colon cancer or lymphoma (2). According to a nationwide survey in Japan, infiltration by malignant cells was responsible for ALF and late-onset hepatic failure (LOHF) in only 29 of 1,603 patients (1.8%) with these conditions (6); however, that report did not detail the origins of the malignant cells.
ALF caused by hepatic invasion by SCLC is extremely rare, with a search of published reports yielding only 24 such patients (3, 7-19). These reports highlighted the difficulty in making a diagnosis and the extremely poor prognosis of this condition. Most such patients have rapidly progressive liver failure and a deteriorated general condition and die within a day to a month. In older patients in particular, the diagnosis of metastatic cancer is rarely made during life.
SCLC is an aggressive tumor, with liver involvement being found in 22% to 29% of patients at the time of the diagnosis (17). It usually presents as macroscopic nodules associated with patchy infiltration of the parenchyma. How this causes ALF is unclear; however, in all reported cases, tumor cells had massively invaded and destroyed the liver parenchyma. Some strong genetic mutation promoting severe invasion or progression may occur in SCLC tumor cells, including in the present patient. Imaging findings (CT, US, or MRI) vary among patients, with some showing only hepatomegaly with no distinct nodules or masses in the liver. The use of contrast material is often contraindicated because of these patients' impaired renal function, as was the case with our present patient. Unenhanced CT revealed only some nodules in the liver. Because our patient's renal function was deteriorating rapidly, contrast-enhanced US using perflubutane microbubbles was performed as a substitute for enhanced CT. It clearly revealed multiple tumors in our patient's liver with typical enhancement of the rims of the nodules (20). It also showed markedly heterogeneous hepatic parenchymal enhancement in the post-vascular phase, indicating diffuse malignant invasion. The case reports mentioned above did not document the use of contrast-enhanced US, so the present report is, to our knowledge, the first such report. Contrast-enhanced US using perflubutane microbubbles should be considered in patients with renal dysfunction or allergy to contrast media because this procedure does not affect the renal function and causes allergic reactions only in patients with egg allergy.
As is true of previously reported patients, our patient's condition deteriorated so rapidly that chemotherapy or liver support could not be provided, and he died within a few days of presentation. Only two previously reported patients received chemotherapy despite their liver failure; both responded dramatically and survived for over 150 days (8). Chemotherapy should therefore be considered provided the diagnosis is made quickly enough.
Our patient presented with hypokalemia and marked hyperglycemia without a history of diabetes and was therefore suspected of having Cushing syndrome. His cortisol and ACTH concentrations were checked promptly after his transfer to our institution and found to be extremely high. Based on these findings, along with the lung nodules on chest CT, the abdominal CT and US findings, and the positive tumor markers (NSE and Pro-GRP), a provisional diagnosis of ACTH-producing SCLC and Cushing syndrome was made.
At the postmortem examination, the tumor cells were immunopositive for the markers of SCLC and ACTH. No pituitary adenoma was detected. Although urinary cortisol measurement and dexamethasone suppression testing had not been performed because of the rapid clinical course, a final diagnosis of ACTH-producing SCLC and Cushing syndrome was made on the basis of the postmortem findings, including the histopathologic findings, together with the strong clinical evidence and firm laboratory findings of hypercortisolism.
Ectopic Cushing syndrome (ECS) is a paraneoplastic syndrome that occurs in 1-5% of patients with SCLC (21). Patients with SCLC and ECS have a poor prognosis because of their advanced stage, poor response to chemotherapy, high susceptibility to serious infections, and high incidence of thromboembolic phenomena. Most patients present with electrolyte disturbances and muscle weakness rather than the typical clinical features of Cushing syndrome (4). Our patient presented with hypokalemia and hyperglycemia but had not developed the typical buffalo hump or central obesity.
To our knowledge, this is the first reported case of ALF caused by an ACTH-producing SCLC. Cushing syndrome caused by an ACTH-producing tumor may be misdiagnosed because hyperglycemia and hypokalemia without the typical Cushingoid appearance mentioned above often occur in association with other conditions, such as dehydration, primary diabetes, hepatic cirrhosis, and malnutrition. If these abnormalities are detected in patients with ALF and/or with some type of cancer, plasma hormone concentrations should be checked in order to investigate the possibility of Cushing syndrome.
It is unclear how his severe hypercortisolism affected our patient's clinical course. He had no serious infections or evidence of thromboembolism during life or at the autopsy examination, with the only relevant finding being mild esophageal erosion caused by herpes simplex virus infection of his tongue, pharynx, and esophagus. We speculate that his hypercortisolism may have contributed to his rapid tumor progression by suppressing anticancer immunity; however, we found no evidence that that was the case. More experience and more data on such patients are needed to clarify this point.
We did not immediately administer metyrapone, which inhibits glucocorticoid synthesis, because the hypertension and electrolyte abnormalities were well controlled and we predicted an extremely poor prognosis due to his SCLC. In retrospect, we speculate that the primary cause of his death was the systemic and, in particular, hepatic invasion by SCLC, and that the administration of metyrapone may well have failed to alter his prognosis. However, in general, severe hypercortisolism (serum cortisol >40 or >51 μg/dL, or 24-hour urinary free cortisol >4-fold the upper limit of normal) is a life-threatening condition that mandates immediate treatment because it may cause severe infection, disturbance of consciousness, and other severe manifestations, and the administration of metyrapone should thus be considered (22).
On a postmortem examination, our patient's liver was found to be enlarged and to have been extensively replaced by tumor cells, causing his ALF, as has been reported by others (3,9,19). ALF in patients with diffuse intrasinusoidal liver metastases is attributable to the destruction of liver cells by diffuse carcinomatous infiltration, ischemia caused by occlusion of the portal vein, or non-occlusive infarction of the liver caused by shock from other causes, such as sepsis or cardiac dysfunction (3).
In conclusion, we herein report, to our knowledge, the first case of ALF caused by extensive metastases from an ACTH-producing SCLC and the first account of the helpfulness of contrast-enhanced US with perflubutane in a patient whose impaired renal function precluded the use of conventional contrast medium. | 2019-06-28T13:22:10.358Z | 2019-06-27T00:00:00.000 | {
"year": 2019,
"sha1": "ebe09f7e8abd5585753a40dd35be68eefb92ede1",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/58/20/58_1976-18/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8ba409d0e8decd65e26e1a4e54fff539caba416",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269421609 | pes2o/s2orc | v3-fos-license | The ARGOS Instrument for Stratospheric Aerosol Measurements
: Atmospheric aerosols represent an important component of the Earth’s climate system because they can contribute both positive and negative forcing to the energy budget. We are developing the Aerosol Radiometer for Global Observations of the Stratosphere (ARGOS) instrument to provide improved measurements of stratospheric aerosols in a compact package. ARGOS makes limb scattering measurements from space in eight directions simultaneously, using two near-IR wavelengths for each viewing direction. The combination of forward and backward scattering views along the orbit track gives additional information to constrain the aerosol phase function and size distribution. Cross-track views provide expanded spatial coverage. ARGOS will have a demonstration flight through a hosted payload provider in the fall of 2024. The instrument has completed pre-launch environmental testing and radiometric characterization tests. The hosted payload approach offers advantages in size, weight, and power margins for instrument design compared to other approaches, with significant benefits in terms of reducing infrastructure requirements for the instrument team.
Introduction
Science Requirements
Previous IPCC reports have focused on the impact of anthropogenic aerosols in the troposphere [1]. However, there is a growing body of evidence suggesting the additional importance of aerosols in the upper troposphere and lower stratosphere (UT/LS). The lifetime of particles injected into this region can be weeks to months, which enables regional and even global spread of material injected at a single location.
Characterizing the distribution and evolution of stratospheric aerosols provides an important input for climate system models that need to accurately calculate atmospheric heating [2]. The long-term persistence of stratospheric aerosols over months to years, and their resulting radiative effects, impact the ability to understand climate variations on multiple time scales. The naturally produced background stratospheric aerosol is thought to provide a modest instantaneous radiative forcing of approximately −0.04 W m−2 (i.e., cooling), but perturbations by episodic volcanic and wildfire smoke injections can result in forcings an order of magnitude greater [3]. Observing and characterizing the variability and properties of the stratospheric aerosol layer is thus essential to reducing uncertainties in forcing estimates in climate models. The necessary observations that underpin model development and better constrain the impacts of these aerosols on radiative forcing are defined as "Very Important" in the most recent Earth Science Decadal Survey (science question C-5), and aerosol measurements, including vertical profiles, are a priority observable that is "essential to the overall program" for quantification of their impacts on climate forcing [4].
Proper characterization of these aerosol inputs requires altitude-resolved measurements of aerosol extinction as a key geophysical variable to initialize transport calculations.
Extensive spatial sampling and good vertical resolution are also needed to capture the inhomogeneous vertical and horizontal distribution of stratospheric aerosols. Integrated measurements such as stratospheric aerosol optical depth (sAOD) represent a valuable constraint on profile observations, but do not provide the additional information about vertical distribution that is needed for proper modeling of transport effects and radiative forcing estimation. The limited number of surface observing locations for aerosols is also insufficient to fully characterize their geographic distribution, so that satellite measurements are a necessity.
Multiple observing methods (e.g., occultation, lidar, limb scattering) can be used to monitor stratospheric aerosols from space.
•
Solar occultation measurements determine total extinction directly, but only sample a single latitude each day with either sunrise or sunset events. An inclined (but not sun-synchronous) orbit is necessary to measure at different latitudes, and it may take several months to cover the full latitude range available from a given orbit.
•
Space-based backscatter lidar measurements provide better spatial and temporal sampling and high vertical resolution, but provide a different measured parameter (backscatter coefficient) that is not directly transferrable to the desired extinction. The low concentration of stratospheric aerosols compared to tropospheric aerosols also poses a challenge for signal-to-noise performance, particularly during daytime measurements.
•
Limb scatter measurements provide comprehensive spatial and temporal sampling with daily global coverage and good vertical resolution. However, the observed signal is strongly dependent on viewing geometry (single scattering angle). In addition, the aerosol size distribution, including particle shape and composition, must be specified in order to retrieve extinction values from the measurements.
For the goal of providing an extensive dataset to give improved guidance to climate models, we feel that the limb scattering technique provides perhaps the most useful combination of vertical resolution, spatial and temporal sampling, and data quality for monitoring stratospheric aerosols.
Current Limb Scattering Measurements
Multiple instruments are currently making limb scattering measurements of stratospheric aerosols. The Optical Spectrograph and InfraRed Imaging System (OSIRIS) instrument has been operating on the Odin spacecraft since 2001 [5]. It makes measurements between 270 nm and 810 nm with 0.8 nm spectral resolution and retrieves aerosol extinction profiles at four wavelengths (470, 675, 750, 805 nm) with ~3 km vertical resolution [6]. OSIRIS flies in a near-terminator sun-synchronous orbit (0600/1800 Equator-crossing time) that limits geographic coverage during some months of the year. OSIRIS operations are currently limited in temporal coverage due to spacecraft power supply issues.
The Ozone Mapping and Profiling Suite (OMPS) Limb Profiler (LP) instrument on the Suomi NPP (S-NPP) satellite currently collects limb scattering data to create an aerosol extinction data record, which begins in 2012. A second LP instrument was launched on the NOAA-21 satellite in November 2022. The LP instrument makes hyperspectral measurements with simultaneous spectral coverage from 290 nm to 1000 nm (spectral resolution varies from 1 nm to 20 nm) and altitude coverage from 0 km to 80 km (vertical resolution ≈ 1.8 km) [7]. Three vertical slits view backward from the satellite, oriented along the orbit track and 4.25° in azimuth to each side (~250 km separation at the tangent point). Aerosol extinction profiles are retrieved at six wavelengths between 510 nm and 997 nm [8].
The viewing geometry of the LP instrument produces high scattering angles (θ ≈ 120°-160°) for Southern Hemisphere (SH) measurements, which corresponds to low phase function values for typical assumed particle size distributions [9]. In contrast, LP measurements in the Northern Hemisphere (NH) occur with forward scattering geometry (θ ≈ 20°-60°), with larger phase function values. Since the signal observed by LP is the product of the scattered radiance and phase function, SH observations can have signals 10-30 times smaller than NH observations for the same aerosol conditions, resulting in a correspondingly lower sensitivity to aerosol loading. The sun-synchronous orbit of both LP instruments (1330 equator-crossing time) also limits the LP sampling to a single local time, with a multi-day gap between revisit times at any location.
Our objective is to supplement the OMPS LP measurements by developing a simplified, compact instrument configuration focused on stratospheric aerosol observations. The Aerosol Radiometer for Global Observations of the Stratosphere (ARGOS) design also utilizes the limb scattering technique and adds multiple simultaneous viewing directions to increase spatial sampling (Figure 1) and reduce hemispheric phase function sampling sensitivity. The viewing geometry of the ARGOS instrument enables locations along the satellite orbit track to be sampled with both forward and backward scattering view angles within approximately 15 min throughout the orbit. This approach gives a more balanced distribution of sensitivity to aerosols at all latitudes in both hemispheres. The complementary measurements collected by ARGOS at multiple scattering angles will also provide substantial statistical leverage to constrain the phase function (and thus particle size distribution) that characterizes aerosols at a given location.
Instrument Design
NASA's Earth Science Technology Office (ESTO) has supported the development of ARGOS through the In-space Validation of Earth Science Technology (InVEST) program to provide improved aerosol measurements in a compact package. We adapt a design proposed by [10] to observe radiance profiles on the atmospheric limb in multiple directions simultaneously. Our initial laboratory system used a hyperbolic central mirror to redirect all incoming signals to a single focal plane that would capture the profiles, with closely spaced pixels providing altitude sampling. However, initial testing of this system revealed that the slit images it produced were not focused sufficiently for our requirements. We, therefore, revised the design to use a central mirror with multiple flat facets, which direct light from each aperture onto a specific section of the focal plane [11]. The Multi-Angle Stratospheric Aerosol Radiometer (MASTAR), with the same basic optical design as ARGOS but different wavelength selection, was built and successfully operated on a building roof at NASA Goddard Space Flight Center (GSFC) in March 2019 (Figure 2). The central mirror was revised to use a flat-sided prism shape. This change enabled better control of the optical design (because the facet angle could be precisely specified), and simplified fabrication requirements for such a small component (16 mm overall diameter).
Analysis of MASTAR field measurements yielded valuable information about instrument performance, in particular stray light characteristics, that has guided the development of the ARGOS instrument design. The original MASTAR optical design included focusing lenses in the optical path following the mirror redirection of the incoming beam, in order to produce a focused slit image on the detector. However, laboratory testing revealed the presence of stray light ghosts on the focal plane resulting from internal reflections in this design. We, therefore, revised the design to remove these lenses and place the slit plate very close to the focal plane (0.5 mm separation). This change removed the stray light path and had the additional benefits of making the overall system both mechanically simpler (fewer individual elements for each optical path) and more compact (lower overall height).
The choice of wavelengths for the ARGOS instrument has also changed through successive iterations of the design. Near-infrared wavelengths are preferred for limb scattering measurements of stratospheric aerosols because the Rayleigh scattering component of the signal decreases rapidly at longer wavelengths, enabling better sensitivity in the UT/LS region of the atmosphere. Limb scattering measurements also require very accurate instrument pointing knowledge because of the long viewing path between the satellite and the tangent point. A pointing error of 1 arc-minute (0.016°) in the pitch direction corresponds to an error of approximately 1 km in altitude registration at the tangent point. The MASTAR instrument was designed with a Cubesat-sized host vehicle in mind. When this design was first developed, small satellite buses typically could not guarantee this level of pointing knowledge. However, additional information about limb viewing altitude registration can be obtained by making measurements at a near-UV wavelength (e.g., 350 nm) and using the Rayleigh scattering attitude sensing (RSAS) technique to characterize the instrument pointing [12]. This method has been used to assess the accuracy of the operational Suomi NPP OMPS LP instrument [13].
Therefore, the MASTAR design included two orthogonal viewing directions dedicated to 350 nm measurements, as shown in Figure 2. Commercially available CCDs with adequate radiometric sensitivity at 350 nm had limited response beyond the visible region, so wavelengths of 675 nm and 850 nm were chosen for MASTAR as the longest options that also provide heritage with the OMPS LP instrument. The use of a hosted payload provider (see Section 3) for the demonstration flight of the ARGOS instrument enabled us to revise this selection. Flying on a larger satellite bus with a high-quality attitude control system allows us to remove the requirement for RSAS measurement capabilities, which in turn allows the selection of a detector with better sensitivity in the near-IR region. Thus, while we still want to maintain consistency in wavelength selection with previous instruments for validation purposes, we have shifted our choices to 870 nm and 1550 nm to improve the scientific value of the ARGOS measurements. Table 1 gives a summary of key ARGOS specifications. Figure 3 shows a cross-section view of the optical design. Each aperture measures radiance profiles at two near-IR wavelengths (870 nm, 1550 nm) simultaneously. These wavelengths have been selected for heritage with previous aerosol measurements (OMPS LP, SAGE II and III) to get altitude coverage in the UT/LS region. The spectral separation between wavelengths and the use of the 1550 nm channel provides key information for characterization of particle size distribution [14]. The two slits in each aperture are separated by 1° in azimuth, corresponding to ~45 km at the tangent point for the planned orbit altitude of 550 km. Each aperture is pointed downward at ~22.57° from the spacecraft lower deck to align the slits on the Earth's limb. Each vertical slit (1.3° in height) is sized to cover the altitude range −10 km to +50 km on the limb at this orbit altitude. This extended range (relative to the extent of the stratosphere) allows for variations in spacecraft altitude due to the oblateness of the Earth, which directly impacts the location of the ARGOS slits on the limb. The limited budget and tight schedule of the InVEST program supporting ARGOS development have limited our opportunities to carry out an integrated structural-thermal-optical performance (STOP) analysis. However, the performance of the telescopes as a function of bulk temperature and pressure was evaluated with Ansys Zemax software, and the telescope materials were optimized to minimize thermal defocus effects. Simplified finite element analysis of instrument heat flow was also performed. In addition, the ARGOS instrument environmental performance was evaluated during thermal vacuum testing.
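The angular-to-spatial conversions quoted above follow from simple limb viewing geometry. The snippet below is our own back-of-the-envelope check, assuming a spherical Earth of 6371 km radius and a representative 20 km tangent height; it is not flight software or code from the ARGOS team:

```python
import math

R_EARTH_KM = 6371.0       # assumed mean Earth radius
ORBIT_ALT_KM = 550.0      # planned ARGOS orbit altitude
TANGENT_ALT_KM = 20.0     # representative stratospheric tangent height

# Slant range from satellite to tangent point: the line of sight is
# perpendicular to the Earth radius at the tangent point, so the orbit
# radius, tangent radius, and slant range form a right triangle.
d_km = math.sqrt((R_EARTH_KM + ORBIT_ALT_KM) ** 2 -
                 (R_EARTH_KM + TANGENT_ALT_KM) ** 2)   # ~2656 km

arcmin = math.radians(1.0 / 60.0)
print(f"altitude error per arc-minute of pitch: {d_km * arcmin:.2f} km")
# ~0.8 km, the order of the "approximately 1 km" quoted in the text
print(f"separation for 1 deg of azimuth: {d_km * math.radians(1.0):.0f} km")
# ~46 km, matching the quoted ~45 km slit separation
print(f"coverage of a 1.3 deg vertical slit: {d_km * math.radians(1.3):.0f} km")
# ~60 km, matching the quoted -10 km to +50 km altitude range
```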
The compact design of ARGOS (20 cm across, 11 cm height) reduces the number of optical elements by placing the slit plate (below the central mirror) very close to the focal plane (0.5 mm separation), avoiding the need for secondary optical elements to focus each slit image. We obtained diffraction-limited images at each slit. Stray light control is important for ARGOS because of the potential for contributions from a variety of sources, both internal and external to the instrument. We used TracePro software to thoroughly analyze the optical design and identify mitigation approaches for these contributions. We also collected experimental data to verify the success of these approaches.
•
Out of field stray light is addressed by adding an external baffle to the front of each aperture.
•
Interior surfaces of the baffles, telescope tubes, optical hub, and slits are coated with Acktar "Magic Black" paint to minimize stray light.
•
Dual band filters are placed prior to the achromatic objective lens in each aperture to exclude out of band solar light. These filters are also rotated about their vertical (slit) axis by 1 degree to prevent ghosting signals at the same wavelength from other parts of the slit field of view (FOV).
•
The facets of the central mirror are sized to minimize possible overlap of light from one facet onto a neighboring facet. A future design improvement would be to add small partitions at the facet intersections to further isolate the optical path of each facet.
•
Finally, additional spectral isolation is achieved by placing a short pass edge filter (λ < 1000 nm) on the left slit and a long pass edge filter (λ > 1400 nm) on the right slit for the path of each aperture, to ensure that each slit only records light from one wavelength band. Laboratory tests show that each filter rejects approximately 99.995% of the incoming light at wavelengths outside the desired spectral range.
The ARGOS design utilizes a commercial indium gallium arsenide (InGaAs) camera (Princeton Infrared Technologies 1280 MVCAM, Monmouth Junction, NJ, USA) for efficient data collection. The only modification for flight use is to shorten the housing so that the ARGOS slit plate can be placed in close proximity to the focal plane, as noted previously. The glass window covering the focal plane is also removed at this time. The focal plane size of 1280 × 1024 pixels provides altitude sampling at ~0.4 km/pixel for a 550 km orbit. Thermal control to keep the camera at 10-20 °C during operations is provided by a single-stage thermoelectric cooler with no moving parts. The operational thermal range of the camera is −10 °C to +30 °C. The thermal response of each telescope is expected to be small because the titanium tube matches the coefficient of thermal expansion (CTE) of the filter glass, and the change in index for the objective lens glass is small. The change in focus with pressure is also small. This camera offers a robust well depth (up to 70,000 e− in 14-bit mode) and rapid frame rate (up to 100 Hz), which reduces the potential for image saturation if a bright cloud is present in the scene. ARGOS will capture individual images with a 33 ms integration time, then co-add these images over a 6 s interval to improve signal-to-noise performance for individual aerosol extinction profiles. Our instrument model predicts signal-to-noise ratios greater than 400 at 15 km and greater than 200 at 30 km with these parameters. The 6-s averaging interval gives profile spacing comparable to the NOAA-21 OMPS LP instrument. We plan to test different observing schemes in orbit, since more closely spaced profiles are useful for tomographic retrieval algorithms that give improved resolution of aerosol vertical structure at lower altitudes [15]. Since large portions of the focal plane are not illuminated for science measurements, only selected regions surrounding each pair of slits will be captured for processing, reducing data transmission requirements by approximately 70% compared to processing the full focal plane.
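The co-adding arithmetic implied by these parameters can be made explicit. The sketch below is our own; it assumes uncorrelated noise between frames so that SNR grows as the square root of the number of co-added frames, which the text does not state outright, and the 70% figure is the quoted data-volume reduction:

```python
import math

frame_time_s = 0.033            # 33 ms integration per frame
coadd_interval_s = 6.0          # co-add window per extinction profile

n_frames = int(coadd_interval_s / frame_time_s)   # ~181 frames
snr_gain = math.sqrt(n_frames)                    # ~13.5x over one frame
print(f"{n_frames} frames co-added, SNR gain ~ {snr_gain:.1f}x")

# Region-of-interest readout relative to the full focal plane:
full_pixels = 1280 * 1024
roi_pixels = round(full_pixels * (1.0 - 0.70))    # ~70% reduction quoted
print(f"{full_pixels} -> {roi_pixels} pixels per frame downlinked")
```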
Science Benefits
Apertures directed both forward and backward along the satellite orbit track provide the increased sensitivity of forward scattering views in both hemispheres, as well as nearly simultaneous views of single air parcels (within 15 min) that enable better characterization of the aerosol phase function. Typical aerosol phase function curves are relatively flat at backward scattering angles (θ > 90°) but increase rapidly at forward scattering angles (θ < 90°). Comparison of measurements made at different scattering angles, as well as using multiple wavelengths, will provide information that can help distinguish between phase function curves representing different particle size distribution models (Figure 4). Measurements using apertures pointed away from the orbit track will occur at scattering angles between the values shown in Figure 4 and can give additional sampling of the phase function curve if the aerosol field is regionally homogeneous.
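To illustrate why paired forward and backward views separate candidate size distributions, one can evaluate a simple analytic phase function at the two scattering angles that ARGOS samples. The Henyey-Greenstein form below is only a stand-in for the Mie phase functions plotted in Figure 4, and the asymmetry parameters g are illustrative assumptions rather than values from the paper:

```python
import math

def henyey_greenstein(theta_deg: float, g: float) -> float:
    """Henyey-Greenstein phase function, normalized over 4*pi steradians."""
    mu = math.cos(math.radians(theta_deg))
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * mu) ** 1.5)

# Larger particles scatter more strongly forward (larger asymmetry g),
# so the forward/backward ratio discriminates between size distributions.
for g in (0.3, 0.7):              # illustrative small vs. large particles
    ratio = henyey_greenstein(40.0, g) / henyey_greenstein(140.0, g)
    print(f"g = {g}: P(40 deg) / P(140 deg) = {ratio:.1f}")
# g = 0.3 gives a ratio near 4; g = 0.7 gives a ratio near 15.
```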
Apertures directed at 45° and 90° azimuth to the orbit track provide expanded spatial coverage. Figure 5 illustrates the spatial location of ARGOS samples for three consecutive orbits. The sample locations for a nominal orbit N (red dots) are approximately centered in the plot. Note that the west-directed sample locations from the preceding orbit N-1 (green dots) fall very close to the along-track sample locations from orbit N. Similarly, the east-directed cross-track sample locations from the following orbit N + 1 (blue dots) also fall close to the track of orbit N. ARGOS will thus be able to look for short-term variations in aerosol behavior within each day, as well as filling spatial sampling gaps between satellite orbit tracks.
Implementation and Status
All optical and mechanical components have been delivered and integrated. Figure 6 shows the fully assembled ARGOS instrument prior to the start of environmental testing. ARGOS has successfully completed qualification-level vibration testing and thermal vacuum testing. Radiometric testing to establish performance characterization for dark current, linearity, stray light, optical alignment, and absolute calibration concluded in March 2024.
Space flight demonstration of ARGOS is planned for the fall of 2024 through NASA's In-Space Validation of Earth Science Technologies (InVEST) program, utilizing a hosted payload provider (Loft Orbital) for the flight. This approach offers multiple advantages for our program.
•
While the ARGOS design is relatively small and light, significant compromises in its configuration would be necessary to fit into a 6U Cubesat envelope. Loft's hosted payload approach offers substantial margins in size, weight, and power for our existing design.
•
Loft provides an existing spacecraft bus that supplies instrument power, thermal control, flight software, and space-to-ground communication functions. This enables our instrument team to plan for specified interfaces.
•
The Loft spacecraft bus will provide the pointing knowledge (<1 arcmin over a 6-s averaging period, equivalent to <1 km accuracy in altitude registration from this orbit) required to make high-quality limb scattering measurements.
•
Frequent flight opportunities are available, depending on mission requirements.
•
Mission costs are defined in advance and fixed through the beginning of in-orbit operations, which greatly simplifies financial planning for a program with a limited budget.
The demonstration mission is planned for a sun-synchronous orbit at 550 km altitude, with the local time of ascending node (LTAN) to be determined. Our minimum duration for demonstration of instrument capability is 3 months, with the potential for extended operations. We plan to adapt the current OMPS LP aerosol retrieval algorithm [8] to create aerosol extinction coefficient products from the ARGOS measurements. This algorithm uses a radiative transfer model [16] to calculate the Rayleigh scattering background signal assuming no aerosols and normalizes the radiance profile at high altitude (38.5 km) to reduce stray light effects.
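A minimal sketch of this high-altitude normalization step is given below. It assumes that measured and modeled Rayleigh-only radiance profiles are available on a common altitude grid; the function is our own simplified illustration of the general approach in [8], not the operational retrieval code:

```python
import numpy as np

def normalized_aerosol_signal(alt_km: np.ndarray,
                              radiance_meas: np.ndarray,
                              radiance_rayleigh: np.ndarray,
                              norm_alt_km: float = 38.5) -> np.ndarray:
    """Normalize both profiles at a high, nearly aerosol-free altitude to
    suppress altitude-independent stray light and calibration factors,
    then divide out the modeled Rayleigh background."""
    i_norm = int(np.argmin(np.abs(alt_km - norm_alt_km)))
    meas = radiance_meas / radiance_meas[i_norm]
    rayl = radiance_rayleigh / radiance_rayleigh[i_norm]
    # Values above 1 indicate excess limb scattering attributable to
    # aerosols; this is the quantity inverted for extinction profiles.
    return meas / rayl

# Example with synthetic profiles (purely illustrative shapes):
alt = np.linspace(10.0, 45.0, 71)
rayl = np.exp(-alt / 7.0)                    # scale-height-like background
meas = rayl * (1.0 + 0.5 * np.exp(-((alt - 20.0) / 3.0) ** 2))
print(normalized_aerosol_signal(alt, meas, rayl).max())  # > 1 near 20 km
```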
Discussion
The ARGOS instrument is designed to provide increased sensitivity over a wide range of latitudes and expanded spatial coverage compared to current instruments that make limb scattering measurements of stratospheric aerosols. Multiple apertures viewing the Earth's limb at two wavelengths simultaneously, feeding incoming light into a single central optical system and focal plane, are used to accomplish these objectives. Predecessor versions of ARGOS have successfully demonstrated the design in laboratory and field measurements. The instrument size, mass, and power requirements are kept small to enable deployment on different satellite platforms. A demonstration flight of ARGOS in space is planned for the fall of 2024. The basic design of ARGOS could be adapted to measure other atmospheric constituents (e.g., water vapor) with an appropriate choice of spectral bands and focal plane.
Figure 1. The ARGOS instrument will view the Earth's limb with eight apertures simultaneously. Each aperture has two slits that measure radiance profiles at separate wavelengths over the altitude range 0-60 km. The dashed line shows the sub-satellite orbit track, and the dotted line shows the nadir direction from the satellite.
Figure 2. (Left panel): Horizon view from the top of the MASTAR instrument during field tests at NASA GSFC in March 2019. (Right panel): CCD image taken during the field test. Wavelength assignments for each slit are shown. Aperture #3 (AP3) is located on the left and viewing towards the sun (east).
Figure 3. Cutaway view of the optical path for ARGOS. Light enters each aperture from the Earth's limb, passes through an achromatic lens to be reflected from the central multi-faceted mirror, and then is transmitted through a slit plate onto the detector focal plane.
Figure 4. Nominal aerosol phase function curves for the wavelengths that ARGOS will use. The solid lines correspond to the particle size distribution assumed for OMPS LP retrievals (effective radius, reff = 0.19 µm) [8]. The dashed lines correspond to a candidate size distribution for aerosols injected by the Hunga Tonga Hunga Ha'apai volcanic eruption in January 2022 (reff = 0.39 µm). Green dots show scattering angles for typical ARGOS forward and backward view measurements at 40°S latitude. The separation in scattering angle between these measurements will be greater at higher latitudes, and smaller at lower latitudes. Inset figures show, respectively, the 870 nm/1550 nm extinction ratio as a function of the effective radius of the two assumed aerosol size distributions, and the 870 nm/1550 nm scattering phase function intensities as a function of the assumed size distributions at scattering angles 40 degrees (black) and 140 degrees (red).
Figure 5. Nominal ARGOS spatial sampling. Red dots show the locations of all samples for nominal orbit N. Orange lines show view directions of all eight apertures for a single observation when the satellite is crossing the equator. Green dots show sample locations for the preceding orbit (N − 1), blue dots show sample locations for the following orbit (N + 1).
Figure 6. Fully assembled ARGOS instrument prepared for radiometric testing. The small reference cube in the center of the hub is used for optical alignment. This image was taken before the addition of thermal blankets for flight operations. | 2024-04-28T15:10:22.619Z | 2024-04-26T00:00:00.000 | {
"year": 2024,
"sha1": "93088832026d016124925308cbfa784af21609fe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/16/9/1531/pdf?version=1714123989",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "06dff62f0caa3901cdf895cd4b4b9972393aa1bd",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": []
} |
16147526 | pes2o/s2orc | v3-fos-license | Shock and Awe: Trauma as the New Colonial Frontier
The health of Indigenous girls in Canada is often framed and addressed through health programs and interventions that are based on Western value systems that serve to further colonize girls' health and their bodies. One of the risks of the recent attention paid to Indigenous girls' health needs broadly and to trauma more specifically is the danger of contributing to the "shock and awe" campaign against Indigenous girls who have experienced violence, and of creating further stigma and marginalization for girls. A focus on trauma as an individual health problem prevents and obscures a more critical, historically-situated focus on social problems under a (neo)colonial state that contribute to violence. There is a need for programs that provide safer spaces for girls that address their intersecting and emergent health needs and do not further the discourse and construction of Indigenous girls as at-risk. The author will present her work with Indigenous girls in an Indigenous girls group that resists medical and individual definitions of trauma, and instead utilizes an Indigenous intersectional framework that assists girls in understanding and locating their coping as responses to larger structural and systemic forces including racism, poverty, sexism, colonialism and a culture of violence enacted through state policy and practices.
Introduction
Indigenous Hawaiian scholar Manulani Aluli Meyer says "See your work as a taonga (sacred object) for your family, your community, your people-because it is" ([1], p. 219). Opaskwayak Cree researcher Shawn Wilson calls for starting from our intentions, our beliefs in the work we do. Similarly, protocol within many Indigenous communities requires a person to situate themselves and their relationships to the people and the land [2]. I write this paper from unceded Musqueam territory, but the coming to know, slexlexs, of my readings and learning on the land was completed from my time spent in Secwepemlux. This work is grounded in my own intersecting relationships to Indigenous communities and the systems in which our lives are shaped. I was born in Saskatchewan, Canada in Cree territory but have been on Secwepemc territory since I was young. In many ways, my worldview has been shaped by Secwepemc land and through kinship relationships. My identity is formed not only through my own Métis roots but also through my own connection to the Secwepemc community, through what Mohawk scholar Audra Simpson calls a "feeling citizenship" ([3], p. 173). I know whom I am accountable to, and whom I belong to. These are the important questions that define my responsibility and my role within the Secwepemc nation. My work is informed and mobilized through my interconnected identities as a solo-parent of three children who are Secwepemc and from the lands of the Secwepemc peoples, and my twenty years as a community-based researcher, activist and trauma counsellor with Indigenous girls in urban and rural spaces. Furthermore, I draw upon the insight I have gained through conducting interviews and sharing stories with many Indigenous therapists who address violence, healing and trauma in the Secwepemc nation, and who have also witnessed the ongoing resilience, survivance, and positive resistance of Indigenous children and youth.
Context
Several years ago a 14-year-old Indigenous girl walked into a girls group I was facilitating and asked if she could make an announcement. She proceeded to tell the other girls that she had been sexually abused since age seven by her stepfather, that she was not going to remain silent anymore, and, moreover, that she wanted them to know that they did not need to tolerate abuse. In the weeks and months that followed this act of truth-telling and collective witnessing, she was labeled, stigmatized, pathologized and ignored by the police, social workers and mental health professionals she encountered. Instead of focusing on the disclosure, it was suggested that she was "using drugs", and her mental health was repeatedly questioned. These were offered as grounds to question her credibility, her believability, and her motivation.
Weeks passed, and I then saw this young woman walking on the street. I stopped the car, said hello and asked how she was doing. We exchanged cell phone numbers and the advocacy began. The other girls group facilitator and I began making phone calls. I became more strident with each one as I encountered the labeling of this young woman. It was clear that a very different narrative had been formed by the agencies and health care providers, of a young woman who had made up a story in order to leave her small community. I was told that she used drugs, that she was a lesbian, and that she had a clear plan to leave her community. Together, the other facilitator and I supported this young woman in calling a meeting, where she, together with us as supports, presented a different "picture" of herself. She was articulate, strong and clear about the abuse and about her right to live in a safe home and attend school where she chose. She got her day in court and the judge marveled at her strengths and her ability to represent herself and her needs. She became a leader in the new girls group she was attending, speaking up and naming her feelings and her challenges. She wrote a support letter about the need for Indigenous girls groups and presented the model at a School District board meeting.
Caught in a web of government policies and community norms around violence towards Indigenous girls and women, her act of resistance to longstanding abuse was shaped by intersecting colonial discourses and practices.On paper, these relevant policies and practices may have appeared to acknowledge the unique intersecting factors that impacted her safety, health and mental health, but they (and the people who administered them and had written them) lacked an analysis of colonialism and were, in fact, part of a legacy of colonialism in perpetuating violence against her and other Indigenous girls.
I suggest that the current construction of trauma continues to create a colonial subject who requires intervention, support and saving. A focus on trauma as an individual health problem, as in this girl's story, prevents and obscures a more critical, historically-situated focus on social problems under a (neo)colonial state that contribute to violence and harm. This paper will consider the following: What are the historic and current impacts of the creation of a "trauma industry" within Indigenous communities, and how does the individualized and medicalized approach to trauma undermine community and individual girls' resilience and resistance?
The young woman's story that begins this paper joins with the voices of Indigenous girls and women who have been truth-telling and speaking about violence at the intersections of Indigeneity, gender, age, and geography since colonization began. These "word warriors" are and were always writing, re(membering), and re-telling complex stories of Indigenous girls and women. Zitkala-Sa, Lee Maracle, Maria Campbell, Jeanette Armstrong, Joy Harjo, Gloria Anzaldúa and Chrystos are a few women among many others. As Indigenous feminist Dian Million states, "Our voices rock the boat and perhaps the world. They are dangerous. All of this becomes important to our emerging conversation on Indigenous feminisms, on our ability to speak to ourselves, to inform ourselves and our generations, to counter and intervene in a constantly morphing colonial system. To 'decolonize' means to understand as fully as possible the forms colonialism takes in our own times" ([4], p. 55, emphasis added). The young woman in the girls group I was facilitating was not only speaking to other Indigenous young women, as Million describes it, speaking to ourselves in order to inform ourselves, a form of Indigenous storytelling, but she was also engaging in this truth-telling in an intimate relational space of Indigenous witnessing. This young woman and the circle of girls and women who received her story were all engaged in an intimate act of decolonizing, both through theorizing about violence and the forms that it takes, and through the telling in certain spaces and relationships, such as the Indigenous girls groups that facilitate and allow for relational witnessing and accountability.
Shock and Awe
In a discussion on trauma, Freud states, "the causal relation between the determining psychic trauma and the hysterical phenomena is not of a kind implying that the trauma merely acts like an agent provocateur in releasing the symptom, which thereafter leads an independent existence" but "the psychical trauma-or more precisely the memory of the trauma-acts like a foreign body which long after its entry must continue to be regarded as an agent that is still at work" ([5], p. 6). This begs the questions: How are trauma theory and practice not the same invader that is reverberating in Indigenous communities and mental health practice? In what way is trauma, as it is currently constructed and enacted within Indigenous health, an invader, and a colonial form of warfare that continues to act long after?
Health programs and interventions that are based on Western value systems and/or regulated through State interventions serve to further colonize and pathologize Indigenous children and youths' health and their bodies. This is evidenced through increasing rates of Indigenous child and youth incarceration, mental health diagnosis, and child welfare intervention. Moreover, the increased attention to Indigenous mental health needs, both broadly and through the framework of trauma more specifically, is contributing to what I call the "shock and awe" campaign against Indigenous children and youth who have experienced violence. This leads to ineffective interventions resulting in the ongoing removal of children from their land. I utilize the term "shock and awe" from Naomi Klein's seminal work The Shock Doctrine (2007) and apply it to the ongoing colonization of Indigenous children and youth through trauma discourse, policies and practices that perpetuate statistics of horror and shock in order to justify child protection intervention and ongoing colonial control [6]. It is well recognized within critical scholarship that in order to get to the land, the colonizers had to remove the power and central role of women in Indigenous communities [7][8][9]. Similarly, I would argue that neo-colonialism has extended this to Indigenous children and youth through child welfare removals, incarceration, and mental health interventions.
Policy and policy processes have been, and continue to be, central to the colonization of Indigenous peoples, locally and globally [7,8,10,11]. In order to understand the violence experienced by Indigenous children and youth today, it is necessary to situate this violence within the violence of colonization and consider how it continues to be enacted through policy. Colonization required the silencing of Indigenous women, as the matriarchal and co-operative societies did not fit within the individualistic and patriarchal ways of the colonizer. To get to the land, they had to remove the women and children [9][10][11][12][13]. In Canada, this violence did not end with the closing of residential schools. It continues within the Indian Act and with the removal of children through child welfare policies and practices that further disconnect and displace Indigenous children and youth through adoption and foster placement [10]. In my own practice, I continue to witness the harm and violence that intersecting policies have on Indigenous children, youth and families. Indigenous children and youth are more likely to be in the child welfare system, and in the juvenile justice system, not only in BC and Canada, but internationally [12]. Indigenous lawyer and scholar Patricia Monture-Angus asserts that criminalization is a strategy of colonization that not only locks up Indigenous children and youth but also does not address the violence, including through state policies of child welfare, that criminalized them in the first place [13].
Trauma discourse has become part of the mainstream narrative in Indigenous and non-Indigenous communities, globally and locally. Alternatively described as the "age of trauma" [14], an "empire of trauma" [15] and a "trauma economy" [4], trauma has become an umbrella term that includes experiences ranging from single-incident experiences, such as car accidents, to genocide. Maurice Stevens describes how trauma is the centre of thousands of articles within social work, psychiatry and literature; however, a universal notion of trauma is yet to be defined [16]. The dominant discourses of "trauma" continue to define violence within normative neo-colonial constructions, thereby functioning to obstruct and erase the naming of certain kinds of violence, such as experiences of racism, structural violence enacted through state policy [11], and violence to Indigenous lands through mining and other development [17]. Craps suggests that definitions of trauma are rooted in European hegemony, resulting in psychiatric and medicalized definitions of trauma, thereby perpetuating a subsequent form of cultural imperialism [18]. Foucault describes discourse within the colonial project as the "way of seeing that is produced and reproduced by various rules, systems and procedures-forming an entire conceptual territory on which knowledge is produced and shaped" ([19], p. 3). Trauma theory has emerged out of a time, place and history of ideas, and since its original formation, has been raced, classed and gendered [16,20]. Young argues that trauma theory "is glued together by the practices, technologies, and narratives with which it is diagnosed, studied, treated, and represented and by the various interests, institutions, and moral arguments that mobilized these efforts and resources" ([21], p. 5).
Examples of the "conceptual territory" of trauma can be evidenced in state funded and controlled research and media coverage of Indigenous children and youth that ultimately perpetuates statistics of horror and shock in order to justify intervention and ongoing colonial control and intervention.In fact, there is a global phenomena and expansion of trauma into Indigenous and racialized communities and Nations throughout the world, with a focus on children and youth as inherently vulnerable and in need of Western intervention, and with practices rooted in Western models of trauma and ideas of childhood and adolescence [22,23].As Summerfield asks "whose knowledge is privileged and who has the power to define the problem?" ( [22], p. 1449).
Some scholars argue that even the Indigenization of government services in many ways continues the colonial project [24,25], as it has increased the reach into the community, with trauma often being used as a justification for child welfare removals [24]. As Landertinger writes, "they do not establish an alternative but rather carve out the same space within a system that continues to work in favour of the settler society" ([26], pp. 81-82). This echoes the work of both Fanon and Coulthard, who call for a turning away from the state for the solutions, as in the words of Secwepemc leader George Manuel, "they must convince the conquered" [27][28][29]. Evidence of the use of trauma as a justification for child welfare intervention and removals is also found within recent child protection responses in Australia. In 2007, the Howard government launched a national emergency response to address the sexual abuse of Indigenous children in the Northern Territory. This program utilized the "shock and awe" terminology that is most often associated with going to war and deployed troops to over 70 Aboriginal communities. The Howard government seized control of these Aboriginal communities in the Northern Territory and forced Aboriginal parents to follow strict conditions in order to receive their welfare and family support payments. One newspaper described how "the troops posed for the cameras as they were dispatched into action, and the government issued an urgent national call for volunteer recruits, as policy was unfurled on the run. A year later, and the military analogy still seems appropriate for a campaign that has been, in the words of one doctor, like a bomb going off". Dr. Tamara Mackean, the president of the Australian Indigenous Doctors Association, spoke of the link to colonization and further trauma: "If you take away people's sense of autonomy and control, we know that's bad for their health," she says, "like any act of fear and disempowerment, it's another layer of trauma for Indigenous people. People are exhausted. They're overwhelmed and overloaded by this whole thing that's been called the intervention." ([30], p. 7).
Left uninterrogated and unchallenged, this dominant discourse of trauma not only erases the harm done to Indigenous children and youth through policy but can also function to silence the local and Indigenous ways of knowing and of addressing the wellness of our children and youth. The definitions of trauma and the meanings we make of it are historically constructed and defined, and are shaped by the intersection of structural factors, including our access to power and our experiences of oppression. Further, these constructions of trauma shape what we consider as violence, what kinds of violence are erased, and the kinds of supports and access to services that flow from this.
The Master's Tools May Not Dismantle the House but Will Get You in the Door
It is important to assert that knowledge of how to address violence and wellness in our communities has always existed. This knowledge of what Indigenous scholar Eduardo Duran called the "soul wound" has been with us since time immemorial. Engagement with the discourse and language of trauma emerged within Indigenous communities in the 1990s [31], and there has been an increase in Indigenous writings on Indigenous mental health and trauma in the last 20 years [32][33][34][35][36][37][38][39]. Eduardo Duran and Bonnie Duran assert that situating the discourse of the "soul wound" within current Western constructs of trauma was important to bring "some validation to the feelings of a community that has not had the world acknowledge the systematic genocide perpetrated on it" ([31], p. 341).
Other Indigenous scholars, such as Maria Yellow Horse Brave Heart, an Oglala Lakota social worker, also worked within the mainstream model of trauma while widening the frame through the development of what she called Historical Trauma Theory [34]. Brave Heart developed this out of her over 20 years of clinical experience with Indigenous communities and in response to what she saw as the inadequacy of post-traumatic stress disorder as a diagnosis within Indigenous communities. More recently, Indigenous scholar and social worker Tessa Evans-Campbell (Snohomish) offers what she calls the Colonial Trauma Response (CTR) as a theory that links historical and contemporary acts of trauma within Indigenous communities [38]. It is important to honour the work of these scholars in expanding the framework of trauma to include naming colonialism and genocide within the discourse of trauma.
In spite of the work to expand the framework of trauma to include the experiences of Indigenous peoples, there has continued to be a domination of Western constructs of trauma, and of the related evidence-based practices, with Indigenous peoples. Further, the failure of these approaches with Indigenous people who have experienced violence has been well documented [39][40][41]. Consequently, there is widespread recognition, both within Indigenous [39,42,43] and non-Indigenous critical scholarship [44], of the need for a radical re-visioning of theoretical and practical approaches to "trauma" theory, intervention and training in Indigenous mental health. My past scholarship and that of other Indigenous and critical trauma scholars have attempted to address this need through offering new ways of understanding trauma within decolonized, feminist, intersectional, social justice, liberatory and politicized approaches [45][46][47][48].
Recently, Indigenous critical scholars have been at the forefront of rejecting state interventions and Western-defined framings of Indigenous communities' health and healing. Duran, Firehammer, and Gonzales describe counsellors as the "new priests" of the society; the authors argue that therapists perpetuate racism and injustice through imposing incongruent helping paradigms [46]. Similarly, Indigenous psychologist Joseph Gone writes, "mental health professionals are the missionaries for a new millennium" ([39], p. 391). Further, Kirmayer, Simpson, and Cargo argue that there is great danger in framing this ongoing violence of the state in mental health language, as it may in fact "deflect attention from the large scale, and, to some extent, continuing assault on the identity and continuity of whole peoples" ([49], p. 597).
Indigenous critical theorists and activists such as Leanne Simpson, Dian Million and Glen Coulthard argue that sovereignty and the future health of Indigenous nations will not be found through state recognition, and that the "processes of engagement", including state recognition and the resulting discourses of healing, can and will replicate the very harms of colonialism [4,28,50].
As Leanne Simpson says, "We need to rebuild our culturally inherent philosophical contexts for governance, education, healthcare, and economy. We need to be able to articulate in a clear manner our visions for the future, for living as Indigenous Peoples in contemporary times. To do so, we need to engage in Indigenous processes, since according to our traditions, the processes of engagement highly influence the outcome of the engagement itself. We need to do this on our own terms, without the sanction, permission or engagement of the state, western theory or opinions of Canadians" ([50], p. 17).
In his seminal essay Subjects of Empire: Indigenous Peoples and the "Politics of Recognition" in Canada, Coulthard engages with the work of Fanon in the context of Indigenous peoples in Canada. Coulthard argues that Indigenous communities need to be less concerned with the politics of recognition by a settler society, and instead focus on recognizing Indigenous ways and practices, in what he describes as "our own on-the-ground practices of freedom" ([28], p. 444).
I echo the work of Indigenous scholar Dian Million in applying this same reasoning to the concept of trauma, and suggest that the theory, practice and ways of doing trauma in Indigenous communities, and with children and youth in particular, are part of the process of reproducing the colonial system, and are an example of what Foucault called "power-knowledge" [19]. This power-knowledge, through the discursive framework of trauma, functions to efface the naming and addressing of the real harm and violence done through colonial systems, at both the structural and what Fanon called the "psychoaffective" level [27]. I would argue that trauma theory and practices function at both levels of colonialism; that is, they simultaneously erase the naming of the structural acts of violence, while creating and exacerbating the psychological symptoms, through a form of colonial recognition or misrecognition [51]. According to Taylor, "Nonrecognition or misrecognition can inflict harm, can be a form of oppression, imprisoning someone in a false, distorted, and reduced mode of being" ([51], p. 25). I suggest that this is what has happened within trauma theory. We have moved from a space and place of nonrecognition of the harms of colonialism to what I would argue is misrecognition of these harms through the frame of trauma, as put forward by Indigenous trauma scholars and others. Both are, as Coulthard and Fanon argue, a form of oppression, and over time these images, and the power relations that co-construct them, will come to be regarded as natural [27,28].
I do not want to take away from the work by Indigenous scholars and other critical scholars who have worked to make space for the recognition of the violence and genocide that have impacted, and continue to impact, Indigenous peoples worldwide. However, I do believe that it is time to evaluate the impact and effectiveness of including these acts of violence within the frame of trauma.
Red Intersectionality
As the early writings of Sioux activist Zitkala-Sa and of Sarah Winnemucca remind us, the binary of gender and race as a result of colonization was identified long before the writings of the early African American women activists who were part of the Combahee River Collective, or Kimberle Crenshaw, the critical race scholar who coined the term intersectionality [52][53][54][55]. These early activists were central in fighting the issues of violence on the land and on the body as they witnessed it at the turn of the century. They did not separate out their activism around tribal rights and water rights from their activism against violence under colonialism. Sarah Winnemucca describes not only her own experience of being buried alive as a child by her mother to protect her from the settlers, but also her own sister's rape at the hands of settlers: "My people have been so unhappy for a long time they wish now to disincrease, instead of multiply. The mothers are afraid to have more children, for fear they will have daughters, who are not safe even in their mother's presence" ([53], p. 48). Similarly, Zitkala-Sa was instrumental in collecting the testimonies of three Indigenous girls violated by the imposition of capitalism through oil and mining in the tribal lands. Zitkala-Sa put together the legal argument of gender, race, and age in her essay "Regardless of Sex or Age", describing how "greed for the girl's lands and rich oil property actuated the grafters and made them like beasts surrounding their prey" ([56], p. 52). (I had already found the writings of Zitkala-Sa and Winnemucca, but I am indebted to Dory Nason (2010) for the three cases describing Zitkala-Sa's activism.) Zitkala-Sa reminds us again and again in her writing that violence has always been gendered, aged and linked to access to land.
This paper argues for an Indigenous wholistic and intersectional-based framework of violence, which I call Red Intersectionality. Red intersectionality is inspired and informed by Sandy Grande's "Red pedagogy" [57], Dory Nason's "Red feminism" [56] and the rich tradition of Indigenous critical scholars including Rigney [58], Grande [57], and more recently Tuck and Yang [59], who advocate for methodologies that are rooted in Indigenous sovereignty and are grounded in specific Indigenous Nations' ontologies and epistemologies. Red intersectionality is grounded in five principles: respecting sovereignty and self-determination; local and global land-based knowledge; holistic health within a framework that recognizes the diversity of Indigenous health; agency and resistance; and approaches that are rooted within specific Indigenous nations' relationships, language, land, and ceremony [57,60,61].
This critical analysis allows us to consider the construction of Indigenous girls within policy, and the structural intersections of this in their lives, as a form of violence. An anti-colonial and Indigenous intersectional perspective on violence does not center the colonizer but instead attends to the many intersecting factors including gender, sexuality and a commitment to activism and Indigenous sovereignty. It helps us understand and address violence against Indigenous girls as it foregrounds context, which in Canada's case has to include gendered forms of colonialism and dispossession of Indigenous lands.
Decolonizing Trauma: Implications for Wise Practice
Indigenous social workers Yellow Bird, Coates, Gray and Hetherington challenge social work to not only address its complicity in past colonial projects, but also the ongoing colonial interventions: "Decolonizing social work requires that the profession acknowledge its complicity and ceases its participation in colonizing projects, openly condemns the past and continuing effects of colonialism . . . and seeks to remove the often subtle vestiges of colonization from theory and practice" ([62], pp. 6-7). Decolonization and transformation within trauma require us to note sites of struggle between Western and Indigenous knowledges, and the need to reclaim the intellectual knowledge of Indigenous communities and healers and to reassert Indigenous epistemologies and ontologies ([44], p. 41). Indigenous scholar Renee Linklater, in her 2011 doctoral thesis, describes her research as decolonizing in two ways: not only the critique of mainstream approaches but also the importance of advancing "principles of self-determination and community control in regards to Indigenous health in the context of healing" ([43], p. 243).
"Mom I know what you do.You don't think I know history, I do.Why would you be a social worker?How does that help children?" (Cohen Clark, age 9).Present in the question from my Secwepemc twin son is the truth-telling, or naming of the harms past, and ongoing to Indigenous children and youth through State interventions, in this case through social work.However, in my son's question is also the resistance of Indigenous children and youth through acts of naming, and relational accountability through questioning and processes of relational witnessing.In this next section, I will outline how a framework of Red intersectionality that centers resistance and resistance spaces, can point the way forward.
Wesley-Esquimaux and Snowball reveal how Indigenous healing approaches and epistemologies have been ignored and erased within the Western health care system [63]. The authors argue that an Indigenous "wise practices" model of healing is required in order to move forward and address the inequities within our current system. This paper will build on the call for "on the ground practices of freedom" ([28], p. 456), through the framework of Red intersectionality, to identify examples of "wise practices", or practices rooted in Indigenous communities' "unique body of knowledge, manifested through oral histories and lived experiences" ([64], p. 3). Thoms proposes the term "wise practices" as better suited to reflect "the fact that the Aboriginal world is culturally heterogeneous, socially diverse, and communally 'traditional' while at the same time ever-changing" ([65], p. 8). Furthermore, "wise practices" are called for given the diversity of Indigenous communities, in particular within British Columbia, where there are "more than 200 contemporary bands, that collectively speak 14 mutually uninterpretable languages, occupy a territory bigger than Western Europe, live in sharply different ecological niches and spiritual worlds, and have radically different histories" ([64], p. 3).
Trauma treatment and social service agencies exist within a web of evidence-based treatment approaches that are evaluated, and "proven", through empirical testing and evidence-based research. These "best practices", however, are often deeply rooted in Eurocentric perspectives and biased testing that fails to recognize the realities of Indigenous peoples [39,40,65], and of Indigenous young people in particular [66]. In a review of the evidence-based literature on Indigenous youth mental health promotion in Canada, researchers Williams and Mumtaz note "of equal concern are the glaring absence of Aboriginal epistemologies in recognized approaches to evidence and largely unquestioned acceptance of this situation by policy makers. Indeed, it would appear that much work needs to be done with communities in re-discovering traditional knowledges and ensuring their legitimization within institutions" ([66], p. 29).
Further, best practice approaches to mental health and trauma, or "West knows Best" [39] approaches, are foregrounded, or Indigenous needs are addressed through an add-on approach of culture through cultural competency, while specific Nations and community approaches are ignored, decimated, and systematically eroded within these dominant paradigms [67]. An example of this can be found in the Aboriginal Healing Foundation (AHF) review of 103 projects to examine what they called "promising healing practices". Their research revealed that more than 80% of these projects included Indigenous cultural activities and traditional healing interventions [68]. These included a range of activities such as "[E]lders' teaching; storytelling and traditional knowledge; language programs; land-based activities; feasts and pow wows; learning traditional art forms; harvesting medicine; and drumming, singing, and dancing" ([68], p. 130). Further, in the AHF review of five healing programs, they attempted to identify best practices but realized these could not be identified; in fact, the language of best practice can often contribute to a pan-Indigenous approach to healing [69]. The authors conclude, however, that given the diversity of Indigenous nations and their respective healing approaches, there is no one Indigenous best practice approach [69].
I would argue that best practices are colonial practices, and often these forms of covert colonization are difficult to see and name. These medical model approaches towards mental health issues further label and pathologize Indigenous children and youth, and result in increased criminalization or medicalization. These approaches often do not address the long-term wellness needs of children and youth who have experienced structural and individual acts of violence, nor the intersecting factors of age, gender, and rurality that put Indigenous children and youth at risk for violence. The resulting coping mechanisms and acts of resistance that place Indigenous children and youth in contact with mental health services or the criminal justice system are also left unaddressed.
We need programs that provide safer spaces for Indigenous children and youth to address their intersecting and emergent health needs, without furthering the discourse and construction of Indigenous girls and women as "at-risk", or further criminalizing and medicalizing our children, our families and our communities. Programs such as the Indigenous girls group model offered in the next section resist medical and individual definitions of trauma, and instead use an Indigenous wholistic, or intersectional, framework that assists girls in understanding and locating their coping as responses to larger structural and systemic forces including racism, poverty, sexism, colonialism and a culture of trauma.
Centering Resistance and Activism
The issue of violence against Indigenous children and youth, as represented in State discourse, media, mental health and counseling systems, and child welfare interventions, is important to understand. At the same time, other images of strength, resilience and resistance, beyond narratives of risk and harm of Indigenous children and youth, are missing from the discourse. Many studies have focused on the harms of colonization, and this deficit-based research has identified disproportionately high health challenges as a result of the interlocking oppressions for Indigenous youth, such as higher rates of sexual and physical abuse, suicide as a leading cause of death especially for Indigenous males, higher rates of violence for Indigenous females, experiences of racism, and increased tobacco and marijuana usage [70][71][72][73].
Research has only recently begun to consider Indigenous understandings of resilience and healthy child development [73], in contrast to the deficit and binary construction of children within Western child development. Recent research has linked strong cultural beliefs and values with resiliency among youth and with positive health outcomes, including improved educational achievement, self-esteem, and less risky drug use and sexual activity [70][71][72][73][74][75][76]. Research linking positive health outcomes for Indigenous youth living in reserve communities where there is strong cultural continuity has been established [75]. Further, there has been an increased focus on Indigenous youth in large cities. Similarly, Mohawk scholar Rod McCormick describes how, in his research, Indigenous youth with a strong cultural identity identified this as key in recovering from suicidality [74]. There is a need for research that documents and centers the ongoing resilience, survivance, and positive resistance of Indigenous children and youth, by Indigenous youth themselves. The work of the Native Youth Sexual Health Network is one example of research and practice that exemplifies this.
There is a gap in the literature in considering what healing practices exist with Indigenous children and youth who have experienced violence and, in particular, their acts of resistance. In my recent research with Indigenous youth in the Secwepemc nation, my colleagues and I attempted to address this gap in strengths-based research. We found that 96% of the youth were proud of their Indigenous identity, and those youth who spoke their language and practiced their culture and traditions rated their health the highest [76]. Furthermore, consistent with other research with urban Indigenous youth [75], we found that the binaries of rural and urban and on-reserve and off-reserve need to be challenged, as cultural identity is formed within a wide circle of activities including access to Elders, language, First Nations education workers in schools, community health spaces such as in Friendship Centres, and the internet [76].
I turn again to the work of Fanon and the role of resistance. Fanon has been critiqued for advocating violence, but I take up resistance in all its forms as necessary to free oneself and to create a "change of fundamental importance in the colonized's psycho-affective equilibrium" ([27], p. 148). Indigenous communities have always resisted colonialism, not only individually but through the creation and maintenance of "resistance communities" [48]. This, I would argue, is an essential element of healing for Indigenous children and youth and Nations: not acts of violence themselves, but acts of resistance for liberation [77]. Cree Elder and scholar Madeline Dion Stout describes in her powerful memoir of residential school how her parents' resilience is working through her now, and how even her triggers give her life: "Their resilience became mine. It had come from their mothers and fathers and now must spill over to my grandchildren and their grandchildren" ([78], p. 179). Similarly, Indigenous scholar Vizenor describes survivance as "a narrative resistance that creates a sense of presence over absence, nihility and victimry" ([79], p. 41). I know that many of the young women I work with write poetry, songs, short stories and plays, and these truth-telling, theorizing narratives need to be centred in our work. Part of my practice with Indigenous girls is supporting their writing and art making, reframing and restorying their behaviors as resistance to larger colonial systems, instead of the mental health labels they are invited to carry and identify with. Resilience and survivance are thus not viewed as individualistic but are instead linked to past, present and future generations.
Indigenous Girls Groups as Relational Spaces of Resistance and Witnessing
Returning to the young woman's story that begins this paper, I invite the reader, as a witness to this, to consider the meaning of her sharing in the context of the Indigenous girls group she was part of. Bakhtin writes that "a word uttered in that place and at that time will have a meaning different than it would have under any other conditions; all utterances are heteroglot in that they are functions of a matrix of forces practically impossible to recoup, and therefore impossible to resolve" ([80], p. 2631). Thus, if context is primary, then the spaces in which Indigenous girls name acts of violence, and the witnessing of this naming through spaces such as girls groups, are important.
In 1992, in my Master's thesis, I wrote, "I believe that all young women engage in daily acts of resistance", and I situated the key role of women as partners in the resistance: to witness and name girls' resistance and to receive their stories ([81], p. 133). Trinh T. Minh-ha writes, "the world's earliest archives or libraries were the memories of women" ([82], p. 121). The storyteller in Indigenous communities is often a mother, sister, auntie, poet, teacher, warrior, musician, historian and healer of her community. Minh-ha states that storytelling involves a speech which is "seen, heard, smelled, tasted and touched" ([82], p. 121), and the process of telling the story "destroys, brings into life, nurtures" ([82], p. 121). bell hooks echoes this when she writes, "It should be understood that the liberatory voice will necessarily confront, disturb, demand that listeners even alter ways of hearing and being" ([83], p. 16). Thus, as listeners or receivers of these stories, we are witnesses and essential partners in the resistance of young women. Indigenous women and girls have always resisted the construction of themselves within policy and media. Storytelling and other forms of creative writing have been a political act and have provided an important space for Indigenous women to resist and replace the colonial images. Choctaw scholar Devon A. Mihesuah writes that poetry and literature are a source rarely utilized, and yet are essential as they reveal the complexity and diversity of Indigenous women: "Indeed, it is through their writings that we can learn that Native women were and are powerful, they were and are as complex as their cultures are diverse" ([84], p. 5).
Indigenous women and girls' stories can provide understandings of strategies and unique solutions to challenges facing Indigenous communities. For example, Leslie Marmon Silko writes about the Laguna Pueblo's concept of story: "the old folks said the stories themselves had the power to protect us and even to heal us because the stories are alive; the stories are our ancestors" ([85], p. 152). Similarly, intersectionality scholar Patricia Hill Collins also describes the importance of story-telling, in particular the process of call and response, in order to link emotion with reason, and as such situates knowing within the context of the relationship with the larger community [86]. This is similar to practices such as "counter-memory" as described by Foucault: it is a form of storytelling that "combats our current modes of truth and justice, helping us to understand and change the present by placing it in a new relation to the past" ([87], pp. 160, 163-64), while problematizing the dominant discourse and understanding of a particular issue.
Indigenous witnessing invokes not only a responsibility to the stories, but truth-telling and activism linked to what we have heard [88]. Indigenous scholar Sarah Hunt says, "As witness we have a role that is not to take up the voice or story of that which we have witnessed, nor to change the story, but to ensure the truths of the acts can be comprehended, honored and validated" ([88], p. 38). Similarly, Rwandan social worker Rwigena, in writing of the ethics of witnessing with Rwandan survivor communities, describes the power of relational and intimate spaces of witnessing within family and community, where testimony is woven into the everyday alongside laughter and food and is part of building an intergenerational collective knowledge [89]. She calls for attending to the context of relationships and spaces involved in listening, spaces such as the Indigenous girls groups that I have been part of.
Through a violence-informed and Indigenous intersectional approach, the groups that my colleagues and I have developed provide the girls with the space to name, comprehend, honour and validate their experiences of abuse, sexual exploitation, body image and violence, as well as their strengths and daily lived realities, in a safe and non-threatening environment [45,90,91]. My work in partnership with the Secwepemc community through the Interior Indian Friendship Centre and School District 73 has involved developing an Indigenous girls' group within a framework that reintroduces Secwepemc Nation-specific cultural teachings of girlhood, or "rites of passage". The model for the group was developed in a unique format, with youth, Elders, community leaders and practitioners in a traditional circle, facilitated by an Elder in the community. This talking circle identified the key issues for Indigenous youth in our community, and how to address them. Through partnerships with community, the school district and Elders, the goal of these groups is to provide Aboriginal/First Nations girls, aged eight to 18, with a space to explore a range of issues affecting their daily lives.
A violence-informed and intersectional girls' group locates the source of girls' challenges within structural and systemic problems such as racism, poverty, sexism and the intersections of these in their lives. We support the young women in healthy resistance to these problems, and in their efforts to move back into connection with themselves and others. We do this through a range of violence-informed strategies of naming, educating and supporting healthy resistance [78]. Violence-informed practice allows us to provide girls with safety, support and the tools to deal with violence and its effects in their daily lives within an intersectional framework. Key violence-informed practices that inform my work include truth-telling and conscious use of self; safety and containment; naming and noting; and fostering healthy resistance strategies. These practices are elaborated on in our girls' group manual [90] and in a chapter on trauma-informed practice [45].
In addition, the essential elements of the groups include an Indigenous worldview through the traditional Secwepemc values and seven sacred teachings; a focus on strengths and healthy resistance; and trauma-informed wholistic and relative safety that recognizes the diversity within and between Indigenous girls and their identities and communities. In an interview with my friend and colleague Sarah Hunt, I described Indigenous girls groups as forms of ceremonial models of supporting girls through adolescence into adulthood: "If the circle is that piece of ceremony we can reclaim until the other ways of witnessing violence are returned or remembered or rehonored then that's maybe why in itself it's been of value" ([88], p. 40).
The following key questions are important to consider in our work to decolonize trauma. Honouring coping: How do we name and frame girls' coping as healthy resistance strategies and support their movement toward healthy resistance while honouring their current strategies? Locating violence, strength and resistance: What are the daily experiences girls are resisting? What strengths and resistance can you identify in their stories?
Conclusions
Indigenous scholar Eve Tuck has called for a need to stop research focused on problems in Indigenous communities in order to "suspend the damage" of "deficit-based" research [92]. Extending this to the concept of trauma, I propose that we need to develop models for addressing violence that are aligned with Indigenous values, Indigenous paradigms and epistemologies, and that are based in strengths, resistance and survivance. I suggest that we should move beyond decolonizing Western models of trauma, and instead attend to the centering of "wise practices" and specific Indigenous Nations' approaches within a network of relational accountability: a form of "hands forward, hands back" that holds us accountable within non-linear ideas of time and space [93]. This paper offers an alternative model, one that centers, remembers and revitalizes the historic and ongoing resistance of Indigenous girls and women, and articulates an Indigenous relational process of decolonizing and centering "wise practices", such as the example offered through Indigenous girls groups.
As Indigenous activist Winona La Duke challenges us: "And the question, I think, that should be asked and needs to be asked of each of us is how much and how brave we are in our ability to deconstruct some of the paradigms which we have perhaps embraced. If we are able to liberate our minds to be the people that are going to be here on this land. The people who are going to protect our mother, and care for ourselves" [17].
"year": 2016,
"sha1": "7953c2fdc00d6772b742e6bdae7bdbdc0f30d10e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0787/5/1/14/pdf?version=1454659653",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "7953c2fdc00d6772b742e6bdae7bdbdc0f30d10e",
"s2fieldsofstudy": [
"History",
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Fluctuation Effects on the Transport Properties of Unitary Fermi Gases
In this letter, we investigate the fluctuation effects on the transport properties of unitary Fermi gases in the vicinity of the superfluid transition temperature $T_c$. Based on the time-dependent Ginzburg-Landau formalism of the BEC-BCS crossover, we investigate both the residual resistivity below $T_c$ induced by phase slips and the paraconductivity above $T_c$ due to pair fluctuations. These two effects have been well studied in the weak coupling BCS superconductor, and here we generalize them to the unitary regime of ultracold Fermi gases. We find that while the residual resistivity below $T_c$ increases as one approaches the unitary limit, consistent with recent experiments, the paraconductivity exhibits non-monotonic behavior. Our results can be verified with the recently developed transport apparatus using mesoscopic channels.
In the past decade, one of the most exciting topics in cold atom physics is the unitary Fermi gas, characterized by the absence of a small perturbation parameter and by strong pairing fluctuations [1][2][3]. Thermodynamic properties of the unitary Fermi gas have been well studied [4][5][6] and are shown to be universal [7]. Several experiments have also started to investigate the transport properties of unitary Fermi gases, including first and second sound [8], shear viscosity [9] and spin diffusion [10][11][12]. In the latter two cases, apparent lower quantum limits have been observed in experiments. Recently, a mesoscopic channel between two bulk unitary Fermi gases has been constructed and a drop of resistance below the superfluid transition temperature T_c has been seen [14]. With the same setup, contact resistance [13], quantized conductance [16] and thermoelectric effects [15] have also been observed. These experimental developments offer new opportunities to study mesoscopic transport phenomena with the flexibility of cold atoms.
Historically, fluctuation effects on transport properties have been well studied in weak-coupling superconductors [17]. Two well-known examples in the vicinity of the superconducting transition temperature T_c are: (a) below T_c, a finite resistance appears due to phase slips induced by thermal fluctuations, known as the Langer-Ambegaokar-McCumber-Halperin (LAMH) effect [18,19]; (b) above T_c, the conductivity is enhanced due to Cooper pair fluctuations, often called "paraconductivity", as first studied by Aslamazov and Larkin [20,21] (the standard weak-coupling forms of both results are recalled in the sketch after this list). In this Letter, we extend the above calculations to the unitary regime and show how the enhanced pair fluctuations modify these two effects. Our main conclusions are: (i) for the appearance of resistance below T_c, we find that in the unitary regime the resistivity drops much more slowly than in the BCS limit as temperature decreases.
(ii) For the enhancement of conductivity above T_c, we find that this paraconductivity changes non-monotonically from the BCS limit to the unitary regime, with a minimum in between.
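For orientation, the two classic weak-coupling results can be recorded explicitly. The following LaTeX sketch quotes the textbook forms from the standard fluctuation-superconductivity literature rather than from the present Letter; the numerical prefactors in particular depend on conventions and should be treated as indicative.

% Classic fluctuation results near T_c for a superconducting wire of
% cross-section A (reference forms; prefactor conventions vary between texts).
\begin{align}
  R_{\mathrm{LAMH}} &\propto \Omega\, e^{-\Delta F_0 / k_B T}
  && \text{phase-slip resistance below } T_c, \\
  \sigma^{\mathrm{AL}}_{\mathrm{1D}} &= \frac{\pi e^2 \xi(0)}{16\, \hbar A}
  \left( \frac{T - T_c}{T_c} \right)^{-3/2}
  && \text{Aslamazov-Larkin paraconductivity above } T_c.
\end{align}

Both structures reappear below, with the weak-coupling relaxation time replaced by its BEC-BCS crossover generalization.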
Time-dependent Ginzburg-Landau Theory. Our derivation is based on the time-dependent Ginzburg-Landau (TDGL) theory of the BEC-BCS crossover [22]. The partition function of the unitary Fermi gas can be written as Z = ∫ D[ψ̄, ψ] e^(−S), with

S = ∫₀^β dτ ∫ d³x [ Σ_σ ψ̄_σ(∂_τ − ∇²/(2m) − µ)ψ_σ − g ψ̄_↑ψ̄_↓ψ_↓ψ_↑ ].

Here τ is the imaginary time and µ is the chemical potential. As usual, g is related to the s-wave scattering length a_s by 1/g = −m/(4πa_s) + Σ_k 1/(2ǫ_k), with ǫ_k = k²/(2m). Introducing a Hubbard-Stratonovich field ∆(τ, x) to decouple the interaction term in the Cooper channel and then integrating out the fermions, we obtain an effective theory for the bosonic field ∆(τ, x), representing the bosonic Cooper pair field. In the vicinity of T_c, where ∆ is small, we can expand the action in powers of ∆ as well as its spatial and time derivatives; after Wick rotation, this yields

S_GL = ∫ dt ∫ d³x [ γ ∆* ∂_t∆ + |∇∆|²/(2m*) + r|∆|² + (b/2)|∆|⁴ ],

where γ = γ_1 + iγ_2 is complex in general. All the parameters γ, m*, r and b can be expressed in terms of µ, T and ζ ≡ 1/(k_F a_s) [23]. In the following, we will focus on the vicinity of the superfluid transition temperature, T ≈ T_c, and as a result µ(T) ≈ µ(T_c). We determine both T_c and µ(T_c) within the Nozières-Schmitt-Rink (NSR) scheme [24].
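Because the static fluctuation spectrum of this action is used repeatedly below, it is convenient to record it once. The short LaTeX sketch that follows simply applies classical equipartition to the quadratic part of the GL action written above and introduces no new parameters.

% Equal-time pair-field fluctuations above T_c from the quadratic GL action.
\begin{align}
  F_2[\Delta] &= \sum_{\mathbf{k}} \left( r + \frac{k^2}{2m^*} \right) |\Delta_{\mathbf{k}}|^2 , \\
  \langle |\Delta_{\mathbf{k}}|^2 \rangle &= \frac{k_B T}{r + k^2/2m^*} , \qquad T > T_c ,
\end{align}

which is the equal-time limit of the order-parameter correlation function, Eq. (5), used below.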
The real part γ_1 describes the damping of Cooper pairs due to their coupling to fermionic quasi-particles. It can be shown that γ_1 is proportional to √µ Θ(µ) [22], where Θ(µ) is the Heaviside step function. As a result, around unitarity and on the BCS side, where µ > 0, the Cooper pairs have a finite lifetime, while in the BEC limit, where µ < 0, γ_1 = 0 and the Cooper pairs (molecules) are infinitely long-lived within NSR. The imaginary part γ_2 represents a propagating behavior and is given by

γ_2 = P Σ_k [1 − 2N(ξ_k)]/(4ξ_k²),

where "P" denotes the principal value, N(ξ_k) = [exp(βξ_k) + 1]⁻¹ is the Fermi distribution function and ξ_k = ǫ_k − µ.
In the BCS limit ζ → −∞, µ ≫ ∆, the integrand is roughly antisymmetric with respect to the Fermi surface ǫ_k = µ, a manifestation of the particle-hole symmetry of the BCS state. Consequently, γ_2 ≃ 0. As ζ increases towards unitarity and the BEC side, γ_2 gradually increases from zero, due to the increasing violation of particle-hole symmetry. The behaviors of γ_1 and γ_2 as a function of ζ are shown in the inset of Fig. 1. Relaxation time. As will be shown later, the relaxation time of the pairing field ∆(t, x) plays an important role in both the LAMH effect and the paraconductivity. In the following, we derive an expression for the relaxation time that is valid close to unitarity. As is known [17], to maintain a non-zero thermal average of the pairing fluctuation, it is necessary to introduce the so-called Langevin force η(t, x) into the TDGL equation,

γ ∂_t∆ = [ ∇²/(2m*) − r − b|∆|² ]∆ + η(t, x),

where the Langevin force represents the driving force of the environment and is characterized by the "white noise" correlations [17,25]

⟨η*(t, x) η(t′, x′)⟩ = 2γ_1 k_B T δ(t − t′) δ(x − x′).

With a straightforward calculation, one finds the correlation function for the order parameter [23],

⟨∆_k(t) ∆*_k(0)⟩ = [k_B T/(r + k²/(2m*))] e^(−t/τ_k) e^(it/τ′_k), with τ_k = (γ_1² + γ_2²)/[γ_1(r + k²/(2m*))] and τ′_k = (γ_1² + γ_2²)/[γ_2(r + k²/(2m*))], (5)

where τ_k represents the temporal decay of the k-th Fourier component of the order parameter, while τ′_k characterizes its propagating behavior. In the limit k → 0, we obtain

τ_0 = (γ_1² + γ_2²)/(γ_1 r). (7)

In the BCS limit ζ → −∞, γ_2 ≈ 0 and the relaxation time τ_0 depends only on γ_1; it reduces to τ_BCS = γ_1/r. Furthermore, in the same limit γ_1/r = π/[8k_B(T_c − T)] [23]; as a result, τ_BCS = π/[8k_B(T_c − T)], consistent with the weak-coupling results [19]. Away from the BCS limit, τ_0 depends on both γ_1 and γ_2. As shown in Fig. 1, as ζ increases from the BCS limit toward the unitary regime, τ_0 first decreases as γ_1/|r| and then increases as γ_2²/(γ_1|r|). A minimum of τ_0 occurs between the BCS limit and the unitary regime, when γ_1 ≈ γ_2. On the BEC side, where µ < 0, τ_k → ∞, indicating an undamped bosonic mode.
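To make the limiting behaviors of τ_0 concrete, here is a minimal Python sketch. The ζ-dependence of γ_1 and γ_2 below is a toy parametrization of our own (γ_1 falling and γ_2 rising toward unitarity, qualitatively as in the inset of Fig. 1); none of the numbers come from the actual NSR calculation, so only the non-monotonic trend of τ_0 is meaningful.

import math

# Toy illustration of tau_0 = (g1^2 + g2^2) / (g1 * |r|), the k -> 0 relaxation time.
# g1(zeta) and g2(zeta) are made-up smooth functions, not NSR results.

def tau0(g1: float, g2: float, r: float = 1.0) -> float:
    """Relaxation time of the k -> 0 pairing mode, in units where |r| = 1."""
    return (g1**2 + g2**2) / (g1 * abs(r))

for zeta in [-3.0, -2.0, -1.0, -0.5, 0.0]:
    w = math.exp(zeta) / (1.0 + math.exp(zeta))  # toy crossover weight in (0, 1)
    g1, g2 = 1.0 - w, w                          # toy: g1 falls, g2 rises toward unitarity
    print(f"zeta = {zeta:+.1f}  g1 = {g1:.3f}  g2 = {g2:.3f}  tau0 = {tau0(g1, g2):.3f}")

With this toy input the printed τ_0 first decreases and then grows again across the crossover, reproducing the non-monotonic behavior described above.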
To capture the effect of damping, it is necessary to go beyond the NSR scheme, which we will not attempt here. Rather, we focus around unitarity, where our calculation applies.
Residual resistance below T_c. To simplify our investigation, let us consider the residual resistance of a quasi-one-dimensional unitary Fermi gas of cross-section area A and linear dimension L. The residual resistance below T_c is due to thermally activated phase slips. The net effect of these events is to lower the current of the state, and as a result a voltage drop must be sustained in order to maintain a steady current [17]. In other words, a finite resistance appears below T_c. Such a theory was developed by LAMH and later confirmed by experiments on BCS superconductors [26].
Within LAMH theory, the residual resistivity due to phase slips takes the activated form [17-19,23]

ρ ∝ Ω e^(−∆F_0/(k_B T)),

where ∆F_0 is the lowest free-energy barrier for creating one phase slip. Its analytic expression was derived by Langer and Ambegaokar as ∆F_0 = (8√2/3)(r²/2b)Aξ, where r²/2b is the condensation energy density and ξ = 1/√(2m*|r|) is the Ginzburg-Landau coherence length; ∆F_0 is roughly the condensation energy in a volume Aξ. Ω is the so-called "attempt frequency", originally derived by McCumber and Halperin [19] as Ω = (L/ξ)(∆F_0/(k_B T))^(1/2)(1/τ_BCS). In our case the relaxation time τ_BCS has to be replaced by the τ_0 derived above.
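The structure of the LAMH result can be illustrated with a small numerical sketch. The Python snippet below keeps only the GL scalings near T_c, namely ∆F_0 ∝ (1 − T/T_c)^(3/2), ξ ∝ (1 − T/T_c)^(−1/2) and τ ∝ 1/(T_c − T); the barrier scale dF0_scale and all prefactors are arbitrary choices of ours, so only the steep, activated drop of the resistance as T decreases below T_c is meaningful.

import math

# LAMH sketch: R ∝ Ω exp(-dF0/kBT) with GL scalings near Tc.
# dF0 is measured in units of k_B*Tc; all prefactors are arbitrary.

def lamh_resistance(t: float, dF0_scale: float = 5000.0) -> float:
    eps = 1.0 - t                 # reduced temperature, t = T/Tc < 1
    dF0 = dF0_scale * eps**1.5    # barrier ~ condensation energy in a volume A*xi
    inv_xi = eps**0.5             # 1/xi, with xi ~ eps^(-1/2)
    inv_tau = eps                 # 1/tau, relaxation rate ~ (Tc - T)
    omega = inv_xi * math.sqrt(dF0 / t) * inv_tau  # McCumber-Halperin attempt frequency
    return omega * math.exp(-dF0 / t)

for t in [0.999, 0.995, 0.99, 0.98]:
    print(f"T/Tc = {t:.3f}   R (arb. units) = {lamh_resistance(t):.3e}")

Note that the point closest to T_c comes out below the maximum: there the barrier is comparable to k_B T and the activated expression is outside its range of validity, mirroring the caveat about LAMH theory very close to T_c discussed below.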
Let us first investigate the dependence of ρ on the interaction parameter ζ. To do that, we first fix the temperature below T_c by 1 − T/T_c = 5 × 10⁻³. Then it is clear from Eq. (19) that the resistivity depends on several ratios: ∆F_0/(k_B T), v_F⁻¹Tξ and ǫ_F τ_0. Within our calculation, ǫ_F τ_0 changes only by a factor of 3-4 from the BCS limit to the unitary regime. On the other hand, if one uses the weak-coupling expression ξ = v_F/(π∆) and the fact that T ≈ T_c ∼ ∆, the parameter v_F⁻¹Tξ remains almost a constant. Numerical calculation shows that in the regime of ζ considered, v_F⁻¹Tξ changes by only a few percent; see the inset of Fig. 3(a). Now, the most important dependence is on ∆F_0/(k_B T), since it appears in the exponential factor. Detailed calculation shows that ∆F_0/(k_B T) changes by a factor of about 3-4 in the relevant regime; see the inset of Fig. 2(a). Taking into account all these dependences, we find that from the BCS side to unitarity, the fluctuation-induced residual resistivity increases rapidly, by several orders of magnitude, as shown in Fig. 2. Now let us look at the temperature dependence of the residual resistivity. In Fig. 2(b), we plot the resistivity ρ in the BCS limit (ζ = −3) and at unitarity (ζ = 0), normalized to their respective values ρ* at 1 − T/T_c = 1.5 × 10⁻³. We observe that as the temperature decreases, the resistivity drops much more slowly at unitarity than in the BCS limit. We also note that in Fig. 2(b) there is an unusual drop of resistivity (marked by the dashed lines) when the temperature is very close to T_c. This is because the LAMH theory fails very close to T_c [21].
The above two observations at unitarity, the increased residual resistivity and its slower decrease as a function of temperature, suggest the more pronounced role of superconducting fluctuations below T_c at unitarity in comparison with the BCS limit. The increase of resistivity is monotonic as one approaches unitarity from the BCS side, in accordance with our general expectations. In fact, as was discovered recently, close to unitarity, when A/ξ² ≫ 1, the energetically more favorable defect is a solitonic vortex [27][28][29], instead of the phase soliton of the BCS regime, where A/ξ² ≲ 1. Thus our estimate of ∆F_0 overestimates the defect energy, and the residual resistivity should in fact increase even more rapidly close to unitarity and decrease even more slowly as the temperature is lowered. However, when we turn to the fluctuation-induced conductivity above T_c, as we shall show shortly, the effect is not monotonic and in fact exhibits a minimum in between.
Enhanced Conductivity above T c . Above T c , in addition to the usual conductivity given by normal fermions, there will be an extra contribution to conductivity due to thermal fluctuation of Cooper pairs field ∆(x, t), known as paraconductivity. We introduce the fluctuating supercurrent J(t) along one of spatial direction, say,x, where J x (t) is given by J x (t) = 1 m * k k x |∆ k (t)| 2 . The fluctuation induced paraconductivity can be directly calculated using the Kubo formalism as A straightforward calculation yields the current-current correlation function as [17,25] While ∆ = 0 for T > T c , the thermal fluctuation of Cooper pair field ∆(t, x) renders a non-zero value of the time-correlation function ∆ k (t)∆ k (0) , as found previously in Eq. (5). This yields a non-zero contribution to the conductivity above T c , (11) We note that only τ k , which characterize the temporary decay of the order parameter correlation function, contributes to the conductivity. Specializing to the quasione-dimensional and considering the DC component σ 0 ≡ Curves marked by "a", "b" and "c" correspond to three different scattering lengths marked in (a). Here we take k 2 F A = 10 6 .
$\sigma(\omega = 0)$, the paraconductivity can be written in the form of Eq. (12), with $\tau_0$ given by Eq. (7). To see how $\sigma_0$ changes as a function of the interaction strength ζ, let us fix the temperature slightly above $T_c$, at $T/T_c - 1 = 10^{-3}$. As we show in Fig. 3(a), as one goes from the BCS limit to the unitary regime, the fluctuation-induced paraconductivity first decreases and then increases, in contrast with the monotonic behavior of the phase-slip-induced resistivity below $T_c$. The similar dependence on ζ of $\sigma_0$ and of the relaxation time $\tau_0$ can be understood in the following way. According to Eq. (12), $\sigma_0$ is proportional to $\tau_0$ with a coefficient $T\xi$. As we have shown before, since again $T \sim T_c \sim \Delta$, $T\xi$ remains approximately constant, and as a result $\sigma_0$ exhibits qualitatively the same dependence on ζ as $\tau_0$. Now let us look at the temperature dependence of $\sigma_0$ for various values of ζ. In Fig. 3(b), we show $\sigma_0$ at three interaction strengths: ζ = -3 (marked by a), ζ = -0.5 (marked by b), which is at the minimum of $\sigma_0$, and ζ = 0.3 (marked by c). They all show a rapid increase as one approaches $T_c$ from above.
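The temperature dependence implied by this relation can be made explicit under the standard near-critical scalings of Gaussian fluctuation theory; the scalings below are textbook assumptions, not expressions extracted from Eq. (12) itself.

```latex
% Assuming the standard Gaussian-fluctuation scalings above T_c,
%   \tau_0 \propto (T/T_c - 1)^{-1}, \qquad \xi(T) \propto (T/T_c - 1)^{-1/2},
% the relation \sigma_0 \propto T\,\xi\,\tau_0 gives
\sigma_0 \;\propto\; \left(\frac{T}{T_c} - 1\right)^{-3/2},
```

which is the familiar quasi-one-dimensional Aslamazov-Larkin exponent and is consistent with the rapid rise of $\sigma_0$ as $T \to T_c^+$ shown in Fig. 3(b).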
Discussions. In this work we have discussed fluctuation effects on the transport properties of the unitary Fermi gas based on TDGL theory. At present, our results cannot be directly applied to the BEC limit, since we have not properly taken into account the interactions between molecules. This leads to an infinite lifetime on the BEC side of the crossover, where µ < 0. Furthermore, in the BEC limit it is also important to take into account the correction to the chemical potential arising from the molecular interaction. We leave this to a future investigation.
A recent ETH experiment observed a drop of the resistance of the unitary Fermi gas below $T_c$, but the drop is much slower than in a typical BCS superconductor [14], consistent with our findings, although their experimental situation is much more complicated than what is discussed here. Namely, the finite resistance observed below $T_c$ is due to thermally activated phase slips, which become much easier close to unitarity, reflecting the enhanced superconducting fluctuations there. Furthermore, we find that the fluctuation-induced conductivity (paraconductivity) above $T_c$ exhibits non-monotonic behavior as one approaches unitarity from the BCS side. This can be verified in the same ETH experimental setup.

Acknowledgements

Derivation of the time-dependent Ginzburg-Landau theory

We start from the microscopic action $S = \int_0^\beta d\tau \int d^3x \left[ \sum_\sigma \bar\psi_\sigma \left( \partial_\tau - \frac{\nabla^2}{2m} - \mu \right) \psi_\sigma + g\, \bar\psi_\uparrow \bar\psi_\downarrow \psi_\downarrow \psi_\uparrow \right]$, where $\psi_\sigma$ are Grassmann fields and g is the contact interaction between fermions of opposite spins; µ is the chemical potential, which is determined by requiring the number density to be equal to n. To investigate the fluctuation effects in the Cooper channel, we use a Hubbard-Stratonovich transformation to decouple the interaction term in the Cooper channel and then integrate out the fermions. We obtain an effective theory for the bosonic field $\Delta(\tau, x)$, which represents the Cooper-pair field. Straightforward calculations yield the partition function of the field Δ, expressed through the Gor'kov Green function.
In the vicinity of the phase transition the gap parameter Δ is small and an expansion in terms of Δ becomes possible. Including both the spatial and time derivatives (after Wick rotation) and retaining Δ up to fourth order, we obtain an effective action with coefficients γ = γ₁ + iγ₂, b, $\frac{1}{2m^*}$ and r, all of which can be expressed in terms of microscopic parameters. In these expressions $N(\xi_k) = 1/(e^{\beta\xi_k} + 1)$ is the Fermi distribution function and $\xi_k = \epsilon_k - \mu$, with $\epsilon_k = k^2/2m$; $\Theta(2\mu)$ is the Heaviside step function, and the notation "P" in the equation for γ₂ denotes the principal value. Explicitly, the parameter b is the result of a loop calculation with four fermion propagators. The other parameters γ₁, γ₂, $\frac{1}{2m^*}$ and r are all derived from the inverse vertex function $\Gamma^{-1}(\omega_n, k)$ which, after the standard renormalization that replaces g with the two-body scattering length $a_s$, takes a closed form. To derive the time-dependent Ginzburg-Landau equation, we first analytically continue the vertex function to real frequency, $i\omega_n \to \omega + i0^+$. This procedure generates a time-dependent term with parameter γ = γ₁ + iγ₂. The γ₂ term exhibits a propagating behavior. As long as µ > 0, γ₁ is nonzero, which indicates a finite lifetime of the Cooper pairs. Three important quantities characterize the time-dependent Ginzburg-Landau theory: the relaxation time, the coherence length and the condensation energy. The variation of the relaxation time as a function of $\zeta \equiv 1/k_F a_s$ is illustrated in Fig. 1 of the main text; here we plot the variations of the coherence length and the condensation energy with respect to $1/k_F a_s$. In the BCS and BEC limits all the parameters can be derived analytically, as shown in Table I. Using these asymptotic expressions we can derive a Gross-Pitaevskii equation from Eq. (4) in the BEC limit. Defining $\Psi = |\gamma_2|\,\Delta$, the parameters of the resulting equation can be calculated: the boson mass is M = 2m, the binding energy is $E_b = 1/(m a_s^2)$ and the boson scattering length is $a_b = 2a_s$.
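For orientation, below is a minimal sketch of the standard time-dependent Gross-Pitaevskii form that these parameters would enter (in units with ħ = 1); since the paper's own displayed equation is not reproduced above, the signs, energy offsets and the coupling expression are assumptions based on the conventional GP equation.

```latex
i\,\partial_t \Psi
  \;=\; -\frac{\nabla^2}{2M}\,\Psi \;+\; g_b\,|\Psi|^2\,\Psi ,
\qquad
g_b = \frac{4\pi a_b}{M}, \quad M = 2m, \quad a_b = 2a_s .
```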
Langer-Ambegaokar-McCumber-Halperin theory
In this part we give a simple derivation of the Langer-Ambegaokar-McCumber-Halperin theory [2]. We start our analysis from the static Ginzburg-Landau free energy functional. Minimizing the free energy with respect to the field Δ yields the time-independent Ginzburg-Landau equation $-\frac{\nabla^2}{2m^*}\Delta - r\Delta + b|\Delta|^2\Delta = 0$. For a neutral system the current density is written as $J = \frac{1}{2m^* i}\left[\Delta^* \nabla \Delta - \Delta \nabla \Delta^*\right]$. A uniform constant-current solution of the 1D Ginzburg-Landau equation can be written as $\Delta_k = f_k e^{ikx}$ with $f_k^2 = (r - k^2/2m^*)/b$, where k is the allowed wave vector along the x direction with periodic boundary condition Δ(0) = Δ(L); L is the length of the 1D channel. The current density of the solution $\Delta_k = f_k e^{ikx}$ is $J = k(r - k^2/2m^*)/(m^* b)$. This current has a maximum value $J_c = (2r/3)^{3/2}/(\sqrt{m^*}\, b)$ at $k_c = \sqrt{2m^* r/3}$. For $J < J_c$, the steady state is that of a persistent current without any dissipation. According to the Josephson relation, this corresponds to a definite phase twist between the two ends of the superconducting wire. Close to $T_c$, thermal fluctuations can either add or remove a twist of 2π in the wire, with a free energy barrier $\Delta F_0$ determined by Langer and Ambegaokar [3]. In the presence of a supercurrent, there is a difference between the free energy barriers for adding ($\Delta F_+$) or removing ($\Delta F_-$) an extra 2π twist, which involves the cross-section area A of the 1D channel. The prefactor Ω is the attempt frequency, as discussed in the main text. As a result, the rate of the phase decreasing by 2π is slightly larger than that of the phase increasing by 2π; this fixes the rate of change per unit time of $\phi_{12}$, the difference between the order-parameter phases at the two ends of the channel. A steady current state is achieved when a small chemical potential difference Δµ is applied across the ends of the wire. The resistivity can then be defined accordingly and, for small current, approximated by linearizing in the phase-slip rates. | 2014-08-20T08:03:40.000Z | 2014-08-20T00:00:00.000 | {
"year": 2014,
"sha1": "f26adf951dd73ef2d65ab34dac8025dd0bbf8854",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1408.4557",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f26adf951dd73ef2d65ab34dac8025dd0bbf8854",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
244490858 | pes2o/s2orc | v3-fos-license | Reconnecting groups of space debris to their parent body through proper elements
Satellite collisions or fragmentations generate a huge number of space debris; over time, the fragments might get dispersed, making it difficult to associate them with the configuration at break-up. In this work, we present a procedure to back-trace the debris, reconnecting them to their original configuration. To this end, we compute the proper elements, namely dynamical quantities which stay nearly constant over time. While the osculating elements might spread and lose connection with the values at break-up, the proper elements, which have already been successfully used to identify asteroid families, retain the dynamical features of the original configuration. We show the efficacy of the procedure, based on a hierarchical implementation of perturbation theory, by analyzing four different case studies associated with satellites that underwent a catastrophic event: Ariane 44lp, Atlas V Centaur, CZ-3, Titan IIIc Transtage. The link between (initial and final) osculating and proper elements is evaluated through tools of statistical data analysis. The results show that proper elements allow one to reconnect the fragments to their parent body.
Since the launch of Sputnik 1 in 1957, thousands of satellites have been deployed in orbit around the Earth. Explosions or collisions of satellites generated millions of space debris of various sizes 1 , currently traveling at different altitudes: rocket stages, fragments from disintegrations, bolts, paint flakes, electronic parts, etc. Chain reactions triggered by catastrophic events involving satellites might increase the hazard of (human and robotic) space activities. A single break-up event generates a cloud of debris which scatters around, sometimes reaching great distances after a relatively short time. Once the fragments are dispersed, it is often difficult to trace them back; hence, a question of paramount importance is to connect the debris to their parent satellite. In this work we propose a method that allows us to link the fragments, after a certain interval of time, to the configuration of the debris soon after the initial catastrophic event. This result contributes to addressing a timely problem since, in case of a collision between two satellites or an explosion of a single satellite, it is certainly important to know the parent bodies that generated the space debris. The implications are wide and range from space sustainability to space law.
To study a specific break-up event, we introduce a suitable model (based on the Hamiltonian formalism) to describe the dynamics of each fragment. The model is composed of the sum of the Keplerian attraction, the effect of the geopotential, and the gravitational influence of the Sun and Moon. Then, we implement perturbation theory to construct a sequence of canonical transformations providing, for each debris, approximate integrals of motion called proper elements, namely quantities that stay nearly constant over time. Each fragment is characterized by a set of six orbital elements, namely semimajor axis a, eccentricity e, inclination i, mean anomaly M, argument of perigee ω and longitude of the ascending node Ω. Starting from their initial values, we compute the orbital elements of each fragment after a given interval of time, to which we refer as the final osculating elements. Then, we compute the proper elements associated with the final osculating elements, and we compare them both with the initial elements and with the corresponding proper elements at the initial time. The comparison gives the desired information: while the final osculating elements might spread far away from the initial values, the (initial/final) proper elements stay almost constant and retain the original features of the cloud of fragments 2 . A striking use of proper elements was already made to group asteroids, inspired by the pioneering work 3 of Hirayama in 1918 and continued by many other authors 4,5 . The analytical computation of proper elements allowed asteroids to be grouped into families, possibly leading to the conjecture that such asteroids might be fragments of an ancestral parent body. Knežević and Milani 6 also introduced the synthetic proper elements, based upon a numerical integration, a digital filtering of the short-period terms and a Fourier analysis. Motivated by the successful results on asteroids, we propose to group and reconnect space debris through the computation of the proper elements associated with the fragments generated by a satellite break-up event [7][8][9] . The procedure we are going to describe requires the introduction of a realistic model which depends on quantities varying on different time scales; hence we need a suitable hierarchical set of transformations of coordinates, called normal forms, aimed at constructing the proper elements, whose relation with the initial elements is analyzed through statistical methods. We will consider four sample cases associated with the break-up events of the satellites Ariane 44lp, Atlas V Centaur, CZ-3, Titan IIIc Transtage. Using statistical data analysis, we show the effectiveness of the use of proper elements in reconnecting the fragments to their parent body. To reconnect the debris to a parent body, we back-propagate the debris for a given time and compare the osculating or proper elements at the initial time and at the back-propagated time. The effectiveness of the method is shown in the specific example of Titan IIIc Transtage. We finally provide an example in which one can distinguish between proper elements associated with nearby break-up events.
This work is organized as follows. After introducing the model, we describe the procedure to compute the proper elements through normal form theory. Then we investigate the test cases by computing osculating and proper elements, and by analyzing the results through histograms, the Kolmogorov-Smirnov test, the Variance Equivalence test and the Pearson correlation coefficient. We end with some conclusions and perspectives.
The model
For the present work, our case studies will be located at altitudes between 15000 and 25000 km, all of them well above the Earth's atmosphere. At those altitudes a celestial object is subject to different forces that we describe through a Hamiltonian function composed of the following parts: the attraction of the Earth (which we split as the sum of the Keplerian part $H_{Kep}$ and the potential $H_E$ generated by the Earth's non-spherical shape), and the gravitational influence of the Moon $H_M$ and the Sun $H_S$ (both assumed to be point masses). The overall Hamiltonian depends upon the orbital elements of the debris, Moon and Sun, and on the sidereal time describing the rotation of the Earth 10 .
We are aware that a realistic model should also include the effect of solar radiation pressure 11 (SRP). However, we decided not to consider SRP for two main reasons: (i) the work 2 provides some experiments on synthetic space debris (namely, debris obtained through a simulator of break-up events) using a model that includes SRP, and the results show that at intermediate altitudes the computation of the proper elements is not much affected by SRP, at least for objects with an area-to-mass ratio lower than 0.74; (ii) there does not exist a public catalogue providing information about the area-to-mass ratio of real space debris, thus preventing reliable experiments on real cases.
The Keplerian and geopotential Hamiltonians. Expressing the Hamiltonian in terms of the orbital elements, the Keplerian part is given by $H_{Kep} = -\frac{G M_E}{2a}$, where G is the gravitational constant and $M_E$ is the mass of the Earth.
The contribution $H_E$ due to the Earth's non-spherical shape is computed as follows 12,13 : we expand the geopotential in spherical harmonics, then we average over the fast variables (namely, the mean anomaly of the debris and the sidereal time), and finally we limit the expansion of the secular part of the geopotential to the largest spherical harmonic coefficients, usually denoted J₂ and J₃. The resulting Hamiltonian involves the Earth's radius $R_E$ (equal to 6378.1363 km). The Moon's Hamiltonian is expanded in terms of the orbital elements of the Moon and the debris; we underline that in the applications we will consider the expansion of $H_M$ in (1) up to l = 2. Besides depending on the orbital elements of the debris, the Hamiltonian also depends upon the orbital elements of the Moon and Sun. For our purposes, it is essential to stress that the debris, Moon and Sun move on different time-scales, since the angular variables describing their respective motions vary with rates of the order of days (for the debris), months (for the Moon) and years (for the Sun); see Table 1. As a consequence, the respective angular variables of debris, Moon and Sun can be ordered hierarchically as fast, semi-fast and slow. The fast angles are indeed the mean anomaly of the debris and the sidereal time accounting for the rotation of the Earth; in Fig. 1 we also report the integration obtained using the Hamiltonian doubly averaged with respect to such fast angles.
Normal form and proper elements
We briefly recall the basics of normal form theory 14 , which is at the basis of the computation of the proper elements. We consider a Hamiltonian of the form $H(I, \varphi) = H_0(I) + \varepsilon H_1(I, \varphi)$ (2), where (I, ϕ) are action-angle variables with $(I, \varphi) \in B \times T^n$, where $B \subset R^n$ is an open set and n denotes the number of degrees of freedom. In (2), the function $H_0(I)$ is the integrable part, $\varepsilon \in R$ is a small parameter and $H_1(I, \varphi)$ is the perturbing function.
The normalization procedure consists in the definition of a suitable change of coordinates that transforms the Hamiltonian so that it becomes integrable up to terms of order $\varepsilon^2$. The procedure can be iterated for some steps, but it is known that in general it does not converge 15 .
We assume that the function $H_1$ can be expanded in Fourier series as $H_1(I, \varphi) = \sum_{k \in K} b_k(I)\, e^{i k \cdot \varphi}$, where $K \subseteq Z^n$ and the $b_k$ are functions with real coefficients. Let χ be the generating function of the canonical transformation from the variables (I, ϕ) to the new variables (I′, ϕ′), defined through the Lie series operator $S_\varepsilon^\chi$, whose action is given by the exponential of the Poisson bracket with χ, with {·, ·} the Poisson bracket operator. We determine $S_\varepsilon^\chi$ by requiring that the new Hamiltonian takes the normal form (3), where $H_2$ is the remainder term of order $\varepsilon^2$. Inserting the change of coordinates in (2), one obtains the transformed Hamiltonian, which takes the desired form (3) provided χ satisfies the normal form equation $\{H_0, \chi\} + H_1 = \overline{H}_1$, with $\overline{H}_1$ the average of $H_1$ over the angles. Expanding χ in Fourier series and denoting the frequency by $\omega_0 = \frac{\partial H_0}{\partial I'}$, one obtains the generating function $\chi = \sum_{k \in K,\, k \neq 0} \frac{b_k(I')}{i\, k \cdot \omega_0}\, e^{i k \cdot \varphi'}$, which is valid under the non-resonance assumption $k \cdot \omega_0 \neq 0$. A higher-order normal form is obtained by iterating the above procedure.
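As a concrete illustration, the following one-degree-of-freedom sketch verifies the homological equation $\{H_0, \chi\} + H_1 = \overline{H}_1$ for a toy Hamiltonian, with the sign conventions stated above; it is not the debris Hamiltonian of the paper.

```python
import sympy as sp

# Toy 1-dof check of the homological equation {H0, chi} + H1 = <H1>,
# with H0 = I^2/2 (so omega0 = I) and H1 = cos(phi) + 1/2.
I, phi = sp.symbols('I phi', real=True, positive=True)

H0 = I**2 / 2
H1 = sp.cos(phi) + sp.Rational(1, 2)
H1_avg = sp.integrate(H1, (phi, 0, 2 * sp.pi)) / (2 * sp.pi)   # <H1> = 1/2

omega0 = sp.diff(H0, I)
chi = sp.sin(phi) / omega0   # chi = sum_{k!=0} b_k/(i k omega0) e^{i k phi}

def poisson(f, g):
    """Poisson bracket {f, g} for the canonical pair (phi, I)."""
    return sp.diff(f, phi) * sp.diff(g, I) - sp.diff(f, I) * sp.diff(g, phi)

residual = poisson(H0, chi) + H1 - H1_avg
print(sp.simplify(residual))   # -> 0: chi removes the oscillating part of H1
```

At first order, the oscillating part of the perturbation is absorbed into the transformation and only the angle-averaged part survives, which is exactly what makes the new actions quasi-integrals of motion.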
Recalling that the space debris model described above depends on fast, semi-fast and slow variables, we compute the normal form by taking advantage of the hierarchical structure of the coordinates associated with the debris, Moon and Sun. We first average the Hamiltonian over the fast (mean anomaly of the debris and sidereal time) and semi-fast (mean anomalies of the Moon and Sun) angles. According to Hamilton's equations, the rate of variation of the semimajor axis of the debris is given by the derivative of the Hamiltonian with respect to the mean anomaly; since we averaged over the mean anomaly, the semimajor axis is constant and becomes the first proper element, namely a quasi-integral of motion for the averaged approximate model. After averaging over the mean anomalies and the sidereal time, we end up with a Hamiltonian function with three degrees of freedom in the extended phase space, since the Hamiltonian depends on time through the variation of the longitude of the ascending node of the Moon (see Table 1).
Next, we consider some reference values for the eccentricity and the inclination (namely, the values of the fragments of the case study) and we expand the averaged Hamiltonian around such values. Then, we implement a canonical change of variables through a Lie series normalization, carried out with a Mathematica© program, that removes the dependence on the angles; this procedure provides two more proper elements, associated with the eccentricity and the inclination. By making all transformations explicit 2 , we end the procedure by back-transforming the change of variables to express the proper elements in the original coordinates.
In conclusion, the procedure leading to the computation of the proper elements can be summarized as follows 2 .
1. We consider the Hamiltonian including the contributions of the gravitational attractions of the Earth, Moon and Sun; we average with respect to the fast variables, in particular the mean anomaly M; hence, the semimajor axis is constant and becomes the first proper element.
2. Since the longitude of the ascending node of the Moon $\Omega_M$ depends on time, the Hamiltonian resulting from step 1 depends on (e, i, ω, Ω, t); hence, we introduce the Hamiltonian in the extended phase space, so that it becomes autonomous, although depending on one additional variable.
3. We fix reference values e₀ and i₀, and we introduce new variables η and ι such that e = e₀ + η, i = i₀ + ι.
4. We expand the Hamiltonian in power series around η = 0, ι = 0 up to order 3 in η, ι.
5. We split the resulting Hamiltonian into the linear part and a remainder. We compute the generating function and the canonical transformation of coordinates to remove the remainder to higher orders.
6. Once the new normal form is obtained, we disregard the remainder, so that the two actions corresponding to eccentricity and inclination become constants of motion.
7. The initial values of the new constants of motion, which are the two additional proper elements, are obtained by back-transforming the canonical transformations in terms of the original variables, namely in terms of the initial data.
For a specific case, we compute the osculating and proper elements by integrating the equations of motion and by computing the normal form using a Mathematica © program. We summarize below the steps of the procedure which will be implemented for each of the fragments of the case studies analyzed in the next sections.
Step 1. INPUT: set the normalization parameters: maxSteps=maximum normalization steps, maxR=number of terms kept in the remainder after each step, maxTaylor=maximum order of the Taylor expansion in the Lie Series, T=time span of propagation, step=integration step size.
Step 3. Integrate Hamilton's equations of the full Hamiltonian up to time T to get the final osculating elements.
Step 4. Compute the average of the Hamiltonian with respect to the mean anomalies of debris, Moon, Sun, and the sidereal time.
Step 5. Expand up to order 3 the averaged Hamiltonian (in the extended phase space) around the reference values e 0 , i 0 .
Step 6. Compute the generating function up to order maxSteps.
Step 7. Compute the new Hamiltonian using the generating function determined at Step 6.
Step 8. Compute the analytic solutions by determining the new coordinates as function of initial coordinates.
Step 9. Determine the two proper elements by integrating the analytic solutions over the given interval and dividing by the length of the interval.
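The steps above reduce, in the end, to the idea that the proper eccentricity and inclination are time averages of the analytically transformed solutions. The runnable sketch below illustrates only that final averaging step (Step 9) on a synthetic osculating eccentricity; the oscillation periods and amplitudes are invented stand-ins, not the paper's dynamics, and the full pipeline (averaging, expansion, Lie-series normalization) is implemented by the authors in a Mathematica© program.

```python
import numpy as np

# Synthetic osculating eccentricity: a nearly constant "proper" value plus
# short- and long-period oscillations (stand-ins for lunisolar terms).
T = 150.0                                  # propagation time span [years]
t = np.linspace(0.0, T, 20001)
e_proper_true = 0.015
e_osculating = (e_proper_true
                + 0.004 * np.sin(2 * np.pi * t / 18.6)   # node-like period
                + 0.001 * np.sin(2 * np.pi * t / 1.0))   # annual-like period

# Step 9: integrate the solution over the interval and divide by its
# length (on a uniform grid this is just the sample mean).
e_proper = e_osculating.mean()
print(f"recovered proper eccentricity: {e_proper:.5f}")   # ~ 0.015
```

The oscillating terms average out, so the recovered value stays close to 0.015 regardless of the epoch at which the osculating element is read off; this is precisely the property that makes proper elements suitable for back-tracing.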
Test cases: proper elements and data analysis
Let us consider a concrete case formed by, say, N fragments. In practical applications, our back-tracing procedure is the following: (i) we take the (initial) orbital elements of all N fragments at time t = t 0 ; (ii) we compute the initial proper elements from the initial orbital elements; (iii) we propagate all fragments up to a time t = T to compute the (final) osculating elements; (iv) through averaging and normal form, we compute the final proper elements from the final osculating data; (v) we compare the final osculating and final proper elements with the initial orbital and initial proper elements.
Since the proper elements are quasi-integrals of motion, we expect that they retain the main features both in the initial and the final phase, thus reconnecting to the original elements much better than the propagated osculating elements do. Of course, the reconnection through the proper elements is more effective in those cases in which the final osculating elements get more dispersed over time, thus losing their link with the original data. Concerning step (v), besides making a visual inspection of the plots in the planes (a, i), (i, e) of (initial and final) osculating and proper elements, we apply data analysis techniques by using the Kolmogorov-Smirnov (KS) test and the Variance Equivalence (VE) test on the errors between the osculating and proper elements taken at the initial and final times. We also compute the Pearson correlation coefficients of initial vs. final osculating elements, and initial vs. final proper elements.
Such methods, borrowed from statistical data analysis, are briefly recalled as follows 16 .
(S1) The Kolmogorov-Smirnov test (KST) is a goodness-of-fit test where the null hypothesis states that two datasets were drawn from the same distribution, while the alternative hypothesis states that they were not. We used the predefined Mathematica© function KolmogorovSmirnovTest, which returns the p-value of the statistical test. The p-value has to be compared with a significance level α (default is 0.05), the null hypothesis being rejected for p < α.
(S2) The Variance Equivalence (VE) test compares the variances of two datasets; depending on the data assumptions, one of the following tests is applied: Brown-Forsythe, Conover, Fisher Ratio, Levene. We used the Mathematica© function VarianceEquivalenceTest, which automatically chooses the most appropriate test and returns the p-value and the conclusion of the test.
(S3) The Pearson correlation coefficient, usually denoted by r, is used as a statistical measure of the relationship between two one-dimensional datasets $\{x_i\}$ and $\{y_i\}$. It is defined as $r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\, \sqrt{\sum_i (y_i - \bar{y})^2}}$ and gives a real number belonging to [-1, 1], where 1 means a total positive linear relationship, 0 means no relationship, and -1 means a total negative linear relationship between the two datasets.
(S4) To visualize the data and to understand the main features of a distribution, one can plot the histogram of the dataset. This plot shows the frequency of each element of the set, and is a useful tool to compare the distributions of two or more datasets.
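The same three statistics can be reproduced with open-source tools, as in the sketch below; the arrays are synthetic placeholders standing in for the element errors and for the initial/final proper eccentricities, and Levene's test is just one of the variance-equivalence tests listed in (S2).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder data: final osculating errors are wider than proper-element
# errors, and the final proper elements track the initial ones closely.
err_osc = rng.normal(0.0, 0.05, 300)
err_prop = rng.normal(0.0, 0.005, 300)
e_initial = rng.uniform(0.0, 0.05, 300)
e_final_proper = e_initial + rng.normal(0.0, 1e-4, 300)

ks = stats.ks_2samp(err_osc, err_prop)              # (S1) Kolmogorov-Smirnov
lev = stats.levene(err_osc, err_prop)               # one of the (S2) tests
r, p_r = stats.pearsonr(e_initial, e_final_proper)  # (S3) Pearson coefficient

alpha = 0.05
print("KS:     p =", ks.pvalue, "-> reject H0" if ks.pvalue < alpha else "-> retain H0")
print("Levene: p =", lev.pvalue, "-> reject H0" if lev.pvalue < alpha else "-> retain H0")
print("Pearson r =", round(r, 5))                   # close to 1, as in Table 3
```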
The outcome of the data analysis is summarized in Tables 2 and 3, where we provide the comparison between the different elements. Table 2 gives the results, including the p-values, of the Kolmogorov-Smirnov test and the Variance Equivalence test for the errors between osculating and proper elements at different final times. It is remarkable that both tests are always rejected, showing that the errors associated to the osculating and proper elements follow different distributions. Table 2 also shows the ratio of the root mean square errors of osculating versus proper elements, supporting that the errors associated to the osculating elements are larger than those associated to the proper elements. Table 3 gives the Pearson correlation coefficients of the initial and final, osculating and proper elements at different times.
In the supplementary material we detail the results for a sample fragment taken from Ariane 44lp; the supplementary material is aimed at helping to reproduce the methods described in the present paper and, precisely, to compute the osculating elements at the initial and final times, to determine the normal form, to get the analytic solution and to compute the proper elements for the specific fragment. The same procedure can be implemented for the other fragments to obtain the results presented in this work. Using the data in Table 3, Fig. 2 summarizes the Pearson correlation coefficient between the initial data and the final osculating and proper inclination, as well as between the initial and final proper inclination. In all sample cases, the correlation between the initial and final proper elements is always close to 1, while using the other sets we obtain discrepancies between the correlations of the initial and final states.

Table 2. The p-value of the Kolmogorov-Smirnov test and Variance Equivalence test for the errors between osculating (eoe) and proper elements (epe) at different final times (25, 50, 100, and 150 years). The last two columns contain the ratio of the root mean square (RMS) between the osculating errors and the proper-element errors. The last row refers to an example where we back-propagate in time. A 0 p-value means a number lower than $10^{-40}$.
In the case of Titan IIIc Transtage there is a weak correlation between initial and final osculating elements, a better Pearson coefficient between initial osculating elements and final proper elements, and an almost perfect fit of initial and final proper elements. The other three sample cases show a similar behavior. Table 2 shows that the KS test and the VE test are always rejected, both for eccentricity and inclination, at all the times we investigated, namely 25, 50, 100, 150 years. Hence, the errors for osculating and proper elements follow different distributions, with the errors associated to the osculating elements being larger than those of the proper elements. The Pearson correlation coefficient in Table 3 tends to be constant when we compare the proper elements at different times. This result confirms the near constancy of the proper elements over a long period of time. Figure 3 shows the evolution of the osculating elements in the plane a-i compared with the evolution of the proper elements in the same plane (left); it also shows the distribution of the inclination (right) at the times 10, 25, 50, 100, 150 years in the case of osculating (top) and proper (bottom) elements.

Table 3. The Pearson correlation coefficient obtained propagating forward in time between the initial osculating elements (ioe) and the final osculating elements (foe), between the initial osculating elements (ioe) and the final proper elements (fpe), between the initial proper elements (ipe) and the final osculating elements (foe), and between the initial proper elements (ipe) and the final proper elements (fpe) for eccentricity (ecc.) and inclination (incl.) at different final times (25, 50, 100, and 150 years). The last row refers to an example where we back-propagate in time.
As can be seen from the plots in Fig. 3, the osculating inclination starts to spread around 25 years, and the spread increases with time. On the contrary, the proper inclination remains almost constant, thus allowing one to reconstruct the distribution at the initial time. This fact is also confirmed by the histograms and the associated Pearson correlation coefficients.
Titan IIIc Transtage. It is known 17 that on February 21, 1992, an explosion of the Titan IIIc Transtage produced several debris. All the debris have been tracked and their coordinates at the present time can be found on Space-Track. We test our procedure by assuming that the break-up time is unknown and propagating all fragments backward for a period of time equal to 29.5 years. The following results confirm the validity of the procedure based on the computation and comparison of the proper elements. In fact, as in the other cases, the KS and VE tests are rejected for all times, with errors larger for the osculating than for the proper elements. Besides, comparing the osculating elements at the present time and at the final time, we obtain a Pearson correlation coefficient equal to 0.99886 for the eccentricity and 0.888539 for the inclination. On the other hand, comparing the proper elements at the present time and backward in time, we find a Pearson correlation coefficient equal to 0.999997 for the eccentricity and 0.999947 for the inclination.
Two mixed cases. We finally test our method by mixing the cases of CZ-3 and Atlas V Centaur; the results are given in Fig. 4, which shows the evolution of the osculating and proper inclinations at different times (10, 25, 50, 100, 150 years). Through the proper elements, we succeed in distinguishing two different clouds. In fact, while | 2021-11-24T06:18:13.879Z | 2021-11-22T00:00:00.000 | {
"year": 2021,
"sha1": "cece3bcacad265a6eab3298dce1adfc29df6cef9",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-02010-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d357b91a5287e670cf487a81a604a6eae45e712",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247140795 | pes2o/s2orc | v3-fos-license | Experimental study on the effect of additives on the heat transfer performance of spray cold plate
Adding 200 ppm of sodium dodecyl sulfate (SDS) can increase the heat transfer coefficient of the spray cold plate by 19.8%.

Abstract: The spray cold plate has a compact structure and provides high-efficiency heat exchange; it can meet the requirements of high-heat-flux dissipation from multiple heat sources and is a reliable means to solve the heat dissipation of the next generation of chips. This paper proposes the use of surfactants to enhance the heat transfer of the spray cold plate and presents a systematic experimental study of the heat transfer performance of the spray cold plate under different types and concentrations of additives. It was found that, among the three surfactants, sodium dodecyl sulfate (SDS) can improve the heat transfer performance of the spray cold plate: at the optimal concentration of 200 ppm, the heat transfer coefficient of the spray cold plate was increased significantly, by 19.8%. Both n-octanol-distilled water and Tween 20-distilled water reduce the heat transfer performance of the cold plate using multiple nozzles. In addition, based on the experimental data, dimensionless heat transfer correlations for the spray cold plate using additives were derived; the maximum errors of the dimensionless correlations for the three additives were 2.1%, 2.8%, and 5.4%, respectively. This discovery provides a theoretical analysis and basis for the improvement of spray cold plates.
Introduction
With the development of miniaturization and integration of electronic components, the power density of electronic devices has increased sharply in recent years. It is reported that the power density of a chip made of gallium arsenide (GaAs) was less than 100 W/cm², while the power density of a common chip made of gallium nitride (GaN) has reached 200 W/cm² [1] . However, the existing heat dissipation methods can only achieve a heat dissipation capacity of about 120 W/cm² [1] . Therefore, the heat dissipation of the chip has become an urgent issue and has caused widespread concern. Spray cooling is a new and efficient cooling technology, which has the advantages of high heat exchange efficiency, fast heat dissipation, small thermal resistance, and good uniformity [2−6] . The spray cold plate adopts the spray cooling mechanism: multiple micro swirl atomization nozzles are arranged side by side and integrated on the cold plate, which solves the problems of the small spray range of a single nozzle, limited spray area and uneven droplet distribution. It can not only meet the needs of compactness and multiple heat sources, but also achieve a heat exchange effect with a high heat flux density of more than 200 W/cm² [7] .
Additives can effectively enhance the heat and mass transfer of spray cooling, and their heat transfer enhancement effect on spray cooling has been widely recognized. At present, research on additives has mostly focused on salt additives, soluble gases and nanofluids. Wang et al. [8] used potassium chloride (KCl) in an open-loop spray cooling experiment and found a certain heat transfer enhancement effect, but the higher the concentration, the worse the heat transfer. Das et al. [9] conducted experiments with brine containing dissolved carbon dioxide and found that the heat removal rate showed an upward trend, but it decreased after the concentration exceeded 40%. Pati [10] found that, after adding NaCl to the spray, the hindrance of the surface oxide layer to the heat transfer rate is weakened, thereby enhancing the cooling effect. Khoshvaght-Aliabadi et al. used numerical simulation and experimental methods to study the effect of fins on nanofluid heat dissipation systems [11−13] . However, soluble gases are unstable, and salt additives are corrosive and easily block the nozzle. Therefore, our laboratory proposed the use of ionic and higher-alcohol surfactants to improve the spray cooling heat exchange effect. Cheng et al. [14,15] used n-octanol and 2-ethylhexanol as additives, compared them with salt additives, and found that both can significantly enhance single-nozzle spray cooling heat transfer, with the higher-alcohol additives performing better. Chen [16] further explained the dynamic Leidenfrost temperature rise caused by higher-alcohol surfactants from the perspective of bubble bursting and coalescence. Zhang [17] conducted experiments with higher-alcohol additives and found that the heat flux and surface unevenness first increased and then decreased; the effect is small in the single-phase region but greatly enhanced in the two-phase region, and there is an optimal additive concentration. Li [18] explained the effect of surfactants on spray characteristics and fluid properties from the perspective of numerical simulation.
In summary, some additives can improve the heat transfer performance of spray cooling. However, previous studies mainly investigated the effect of additives on the heat transfer performance of single-nozzle, open-space spray cooling; studies of the heat transfer performance of multi-nozzle spray cooling with additives are relatively lacking, and the effect of additives on the spray cooling performance in a closed small spray chamber is still unclear. Therefore, to improve the heat transfer performance of the multi-nozzle cold plate to meet high-heat-flux dissipation, this article presents an experimental study on the effect of additives on the heat transfer performance of the compact spray cold plate, considering different types and concentrations of additives. In addition, a new dimensionless heat transfer correlation was fitted for each of the three additives to provide a criterion for the theoretical analysis of the spray cold plate.

Experimental setup

One side of the cold plate is provided with positions for installing heat sources. The distance between the front and back sides is 9 mm. The size of the spray cold plate is 380 mm (length) × 64 mm (height) × 9 mm (thickness). The heat sources adopt thick film resistors with aluminum nitride ceramic substrates; the size of each heat source is 5 mm × 5 mm and the resistance is 160 Ω. The heat sources are connected in parallel, each group of 4 heat sources corresponds to one spray cavity, and two spray cavities are used in the experiment. The installed assembly is shown in Fig. 1a. In this experiment, miniature swirling atomizing nozzles are used, with a spray angle of 35°, an outer diameter of 6.5 mm, a thickness of 5 mm, an outer flange diameter of 7.9 mm, and an outlet aperture of 0.3 mm. The micro nozzles are fixed on the top of the cold plate by threads; every 4 nozzles form a group corresponding to one spray chamber, which realizes the compact design. During operation, the working medium flowing into the water inlet passes over the cold plate, is atomized into droplets by the nozzles, is sprayed onto the surface of the cold plate, and then flows out of the water outlet through the bottom of the cold plate. As shown in Fig. 1b, the flow distribution among the nozzles is relatively uniform.
Experimental system and working conditions
On this basis, an experimental platform for the spray cold plate was designed and built, as shown in Fig. 2. The experimental system includes liquid storage tanks, micro pumps, filters, buffers, cold plates, pressure swirl nozzles, thick film resistors, plate heat exchangers, cryogenic thermostats, flow meters, shock-resistant pressure gauges, data acquisition instruments, etc.
A flow meter and a shock-resistant pressure gauge are installed upstream of the cold plate inlet. The range of the flowmeter is 0~60 L/h, and the range of the shock-resistant pressure gauge is 0~150 MPa. T-type thermocouples are used to measure the temperature of the heating surface; the temperature measurement range is −200 ℃~350 ℃ and the measurement error is ±0.5 ℃. The experimental conditions are shown in Table 1. The inlet pressure in the table is the relative pressure.
The types and concentrations of additives used in the experiment are shown in Table 1. The concentrations are mass concentrations, and all working fluids are prepared and used on site. The specific experimental operations are as follows:
① Clean the entire experimental system with distilled water, and drain the distilled water after cleaning.
② Prepare SDS, n-octanol and Tween 20 solutions of different concentrations, and store them in stainless steel containers.
③ Turn on the power of the low-temperature thermostat, set the water temperature in the thermostat to 15 ℃, and control the temperature of the working fluid at the entrance of the cold plate by adjusting the water flow through the heat exchanger in the low-temperature thermostat.
④ Turn on the DC power switch to supply power to the inlet and outlet micropumps, adjust the pump power so that the flow and pressure at the inlet of the cold plate reach the preset values, and verify that the spray effect is good and the discharge is smooth.
⑤ Connect the AC transformer to supply power to the heat source resistances, and turn the knob until the voltage across the heat sources reaches the preset value.
⑥ Turn on the power of the data acquisition instrument; the accompanying software on the computer then starts to collect the temperature signals of the thermocouples.
⑦ When the temperature readings of the thermocouples change by less than 1 ℃ within 10 minutes, the entire experimental system is considered to have reached thermal equilibrium; record the data and adjust the transformer for the next round of experiments.
⑧ After a whole set of experiments, stop supplying power to the heat sources, and when the surface temperature of the cold plate drops below 20 ℃, turn off the inlet and outlet micropumps and the cryogenic thermostat.
⑨ Clean the entire experimental system with distilled water. After cleaning, drain the distilled water and proceed to the next set of experiments.
Uncertainty analysis
In this experiment, a thick film resistor with an aluminum nitride substrate is used as the heat source, connected to the shell by welding. Its thermal conductivity is high (170 W/(m·K)), the area of the heat source is small, and the back side is covered with thermal insulation material, so the heat is mainly dissipated by the spray cooling of the cold plate. The heating power is calculated by P = U·I, where U and I are the heating voltage and current, respectively. The heat flux density is calculated by q = P/A_w, where A_w is the area of the heating surface. The heat transfer coefficient is calculated by h = q/(T_w − T_in), where T_w is the temperature of the heating surface and T_in is the inlet temperature of the working fluid. The measurement error of each measured parameter is shown in Table 2. According to the uncertainty transfer formula, the uncertainties of the main parameters discussed in the heat transfer correlation section can be calculated; they are shown in Table 3.
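A minimal numerical sketch of these definitions and of a root-sum-square uncertainty transfer is given below; all input values are illustrative placeholders, not the measured values of Tables 2 and 3, and the uncertainty of the heated area is neglected for brevity.

```python
import numpy as np

# Illustrative placeholders only:
U, dU = 40.0, 0.5          # heating voltage [V] and its uncertainty
I_amp, dI = 0.25, 0.005    # heating current [A] and its uncertainty
A_w = 5e-3 * 5e-3          # 5 mm x 5 mm heated area [m^2]
T_w, T_in, dT = 55.0, 15.0, 0.5   # surface / inlet temperatures [degC]

P = U * I_amp              # heating power, P = U*I
q = P / A_w                # heat flux density, q = P/A_w
h = q / (T_w - T_in)       # heat transfer coefficient

# Root-sum-square transfer; both T_w and T_in carry the +/-0.5 degC error.
dT_diff = np.sqrt(2.0) * dT
rel_h = np.sqrt((dU / U)**2 + (dI / I_amp)**2 + (dT_diff / (T_w - T_in))**2)
print(f"h = {h:.0f} W/(m^2*K), relative uncertainty = {100 * rel_h:.1f} %")
```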
Experimental results and discussion
Using different concentrations of SDS-distilled water, n-octanol-distilled water, and Tween 20-distilled water as cooling fluids, experiments were carried out using the spray cold plate experimental system introduced above to explore the effect of additives on the heat transfer performance of the spray cold plate.
Influence of the concentration of additives on the heat transfer performance of spray cold plate
The cooling curves and heat transfer coefficient curves at different additive concentrations are shown in Figs. 3-5. It can be seen from the figures that the relation between the heat flux density and the surface temperature of the cold plate is generally linear, indicating that, within the temperature range of the experiment, each working medium did not reach its saturation temperature and the spray cooling remained in the single-phase regime. In addition, it can be seen that the addition of SDS at certain concentrations significantly improves the heat transfer of the spray cold plate: 200 ppm SDS has the best heat exchange effect, followed by 100 ppm, while the heat exchange effect at 400 ppm is only slightly better than that of distilled water. The addition of 300 ppm SDS makes the heat exchange of the spray cold plate worse; that is, as the additive concentration increases, the heat exchange performance first improves and then deteriorates. At the optimal concentration of 200 ppm, when the surface temperature of the cold plate is 30 ℃, 55 ℃, and 80 ℃, the heat transfer coefficient is increased by 27.10%, 13.71%, and 18.55%, respectively. In contrast to the effect of SDS, the addition of n-octanol increases the surface temperature of the cold plate and thus reduces its heat transfer coefficient, as shown in Fig. 4. Among the three concentrations, 200 ppm n-octanol has the best heat exchange effect and 100 ppm the worst, but all perform worse than distilled water. For the concentration of 200 ppm, when the surface temperature of the cold plate is 30 ℃, 55 ℃, and 80 ℃, the heat transfer coefficient is reduced by 9.66%, 7.43%, and 3.48%, respectively. The addition of Tween 20 also significantly weakens the heat exchange of the spray cold plate: although 300 ppm Tween 20 showed the best performance among the tested concentrations, the heat transfer coefficient is still reduced by 29.37%, 28.51%, and 17.94% at cold plate surface temperatures of 30 ℃, 55 ℃, and 80 ℃, respectively.
The effect of the spray cold plate on the enhancement effect of additives
To clarify the difference between the cold plate using multiple nozzles and a single nozzle, an experimental comparison was made, as shown in Fig. 6. The optimal concentrations of SDS, n-octanol and Tween 20 were 200 ppm, 200 ppm and 300 ppm for the cold plate, while they were 800 ppm, 200 ppm and 45 ppm for a single nozzle [14,15] . It can be clearly seen from the figure that, compared to the single nozzle, the effect of the additives on spray cooling is greatly weakened in the cold plate. This is mainly attributed to drainage problems that arise in the small closed spray cavities of the compact micro-nozzle-array cold plate when high-foaming additives are used. The three additives used in the experiment have a lower density than water and are only partially miscible, so they easily float on the surface of the working fluid, increasing the flow resistance and viscosity and strongly enhancing foaming. In the single-nozzle experiments [17,18] , the heating surface is an open platform, the liquid film stays on the heating surface for a short time, and the spray flow rate is small. In the closed small spray cavity of the spray cold plate, foaming makes drainage sharply worse and the liquid film continues to accumulate, which greatly weakens both the heat removal by droplets impacting the heating surface and the convective heat exchange between the liquid film and the heating surface, so the heat exchange deteriorates.
Physical properties of additives and their effects on spray characteristics
The additives used in the experiment are at the ppm level, so the density, latent heat, boiling point and other physical properties of the working medium show no obvious changes. Fig. 7 shows the viscosity changes and the surface tension curves of each additive at different concentrations. It can be seen that the viscosity increases with concentration at low concentrations and shows no obvious change after a certain concentration is reached, increasing by at most 9.7% compared with water. It can also be seen that the surface tension decreases significantly with increasing concentration and tends to a stable value after reaching the critical micelle concentration (CMC). The CMCs of SDS and Tween 20 were 150 ppm and 400 ppm, respectively, while n-octanol did not reach its CMC point. The physical properties were measured at 25 ℃.
Additives are substances that are slightly soluble in water, act on the water surface and form aggregates in the bulk liquid. They affect heat transfer mainly by changing the surface tension of water. According to the droplet break-up mechanism, a droplet breaks when the Weber number We (We = ρv²d/σ) exceeds the critical Weber number We_b. Since the Weber number is inversely proportional to the surface tension σ, the smaller the surface tension, the larger the Weber number, and the more easily the droplets break up and atomize into small droplets. In addition, the reduction of surface tension reduces the solid-liquid contact angle and improves the spreadability and wettability of droplets on the heating surface, which is conducive to boiling heat transfer. When this promoting effect outweighs the weakening effects described above, the overall heat exchange performance of the spray cold plate is enhanced; this is why certain concentrations of SDS still have a positive effect on the heat exchange performance of the spray cold plate.
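The following sketch illustrates the surface-tension effect on the Weber number; the droplet velocity and diameter are assumed placeholders, and the surface tension of the 200 ppm SDS solution is only a rough figure consistent with the decreasing trend of Fig. 7.

```python
# We = rho * v**2 * d / sigma
rho = 998.0      # water density [kg/m^3]
v = 20.0         # assumed mean droplet velocity [m/s]
d = 50e-6        # assumed mean droplet diameter [m]

for label, sigma in [("distilled water", 0.072), ("200 ppm SDS (approx.)", 0.045)]:
    We = rho * v**2 * d / sigma
    print(f"{label:>22s}: We = {We:.0f}")
```

Lowering the surface tension from about 72 mN/m to about 45 mN/m raises We by roughly 60% at fixed velocity and diameter, making droplet break-up correspondingly easier.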
Dimensionless correlations for spray cold plate using additives
The spray cooling in this experiment is in the single-phase regime, and the heat transfer characteristics are mainly affected by the spray characteristics, the physical properties of the working fluid, and the temperatures. The heat transfer can therefore be represented by a dimensionless functional relation among the following quantities: h is the convective heat transfer coefficient (W/(m²·K)), D is the width of the heating surface (m), k is the thermal conductivity (W/(m·K)), ρ is the fluid density (kg/m³), $\bar{v}$ is the average droplet velocity (m/s), $d_{32}$ is the average Sauter mean diameter of the droplets (m), μ is the viscosity coefficient (Pa·s), σ is the surface tension (N/m), C_p is the specific heat capacity of the working fluid (J/(kg·K)), T_in is the inlet temperature of the working fluid (℃), T_surf is the surface temperature of the cold plate (℃), and T_sat is the saturation temperature of the working fluid (℃).
The heat transfer correlations for the three additives were obtained by fitting; the resulting correlation coefficients are listed in Table 4, and the fitting curves are shown in Figs. 8-10. It can be seen from the figures that the values calculated from the heat transfer correlations fit the experimental values well, with maximum errors of 2.1%, 2.8%, and 5.4%, respectively.
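As an illustration of how such a correlation can be fitted, the sketch below performs a nonlinear least-squares fit of a generic power-law form Nu = C·Re^a·We^b·Pr^c on synthetic data; the functional form, the exponents and the data are assumptions and do not reproduce the paper's actual correlation or the coefficients of Table 4.

```python
import numpy as np
from scipy.optimize import curve_fit

def nu_model(X, C, a, b, c):
    """Generic power-law dimensionless correlation Nu = C*Re^a*We^b*Pr^c."""
    Re, We, Pr = X
    return C * Re**a * We**b * Pr**c

rng = np.random.default_rng(1)
Re = rng.uniform(1e3, 1e4, 50)
We = rng.uniform(50.0, 500.0, 50)
Pr = rng.uniform(5.0, 9.0, 50)
# Synthetic "measurements" with 2% multiplicative noise:
Nu = 0.05 * Re**0.8 * We**0.2 * Pr**0.4 * rng.normal(1.0, 0.02, 50)

popt, _ = curve_fit(nu_model, (Re, We, Pr), Nu, p0=[0.1, 0.5, 0.5, 0.5])
resid = np.abs(nu_model((Re, We, Pr), *popt) - Nu) / Nu
print("fitted C, a, b, c:", popt)
print(f"max relative error: {100 * resid.max():.1f} %")
```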
Conclusions
To improve the heat transfer performance of the multi-nozzle cold plate to meet high-heat-flux dissipation, this article presented an experimental study on the effect of additives on the heat transfer performance of the compact spray cold plate, considering different types and concentrations of additives. In addition, new dimensionless heat transfer correlations were fitted for the three additives to provide a criterion for the theoretical analysis of the spray cold plate. The main conclusions are as follows. The addition of SDS at certain concentrations improves the heat transfer performance of the spray cold plate; in this experiment, the optimal concentration of SDS is 200 ppm, which increases the heat transfer capacity of the spray cold plate by 19.8%. Both n-octanol and Tween 20 weaken the heat transfer performance of the spray cold plate: the optimal concentration of n-octanol is 200 ppm, which reduces the heat transfer capacity of the spray cold plate by 6.9%, and the optimal concentration of Tween 20 is 300 ppm, which weakens the heat exchange capacity of the spray cold plate by 25.3%. This result is completely different from the single-nozzle additive experiments. In this experiment, the reason why the additives greatly weaken the spray cooling performance is the foaming property of the additives and the drainage problems caused by the closed small spray cavities of the spray cold plate. In addition, based on the experimental data, dimensionless heat transfer correlations for the spray cold plate under the action of additives were derived, with maximum errors of 2.1%, 2.8%, and 5.4%.
"year": 2022,
"sha1": "f27e3f76b890e954959575244ede093534b1f6ee",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.52396/justc-2021-0152",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "528ca3e73c63c441eb37e635a988e7f28ba9d7d1",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
} |
27661206 | pes2o/s2orc | v3-fos-license | Fetal alcohol spectrum disorders : Prevalence rates in South Africa
Background
Alcohol consumption in South Africa (SA) has a long and complex social, cultural and political history. First-nation South Africans consumed home-brewed alcoholic drinks as part of social and ritual events and used them as a mode of trade for cattle and merchandise. During the colonial times from 1652 to 1948, settlers introduced the 'dop' system whereby farm workers were partially paid with alcohol for their labour. [1,2] During the 'apartheid' era alcohol was used paternalistically to economically and socially control mine and farm workers. In an attempt to curb social deterioration, black South Africans were prohibited from using alcohol. Paradoxically, local authorities installed beerhalls (taverns) in black townships to enhance local economic development, while at the same time exercising control over the inhabitants of these townships. [3] This ambivalence led to resistance, with local residents opening their own illegal liquor outlets ('shebeens') and brewing their own beer. [4] Since 2004, home-brewed beer has been increasingly replaced by industrial beverages. [5] According to the World Health Organization Global Status Report on Alcohol and Health 2014, [6] 43.7% of SA males and 73.7% of females above 15 years of age abstained from alcohol in 2013. In countries with a high abstention rate it is highly likely that the per capita consumption rate will be understated. This, as well as underreporting, provides some reasons why there is often a discrepancy in the reported absolute alcohol (AA) per capita consumption rate in SA. The report states the average SA consumption rate as 11 L AA per person [6] while Peltzer and Ramlagan [5] note a rate of between 10.3 and 12.4 L, compared with a global average of 6.2 L. [7] This gives SA one of the highest alcohol consumption rates per drinker in the world. [5] Peltzer and Ramlagan [5] furthermore link the high burden of alcohol use to hazardous and harmful drinking, resulting in social ills such as alcohol-related deaths in transport and due to homicide, risky sexual behaviour among persons living with HIV/AIDS, and a fetal alcohol syndrome (FAS) rate of 10-74 per 1 000 grade 1 learners.
FAS as a burden of disease attributed to alcohol use in South Africa
Among the burdens of alcohol consumption is fetal alcohol spectrum disorder (FASD). FASD is an umbrella term for a range of under-diagnosed disorders caused by the teratogenic effects of alcohol on the developing fetus (Table 1). FAS, the most severe form of these disorders, was first described by Lemoine and colleagues in France in 1968, with Jones et al. [8] coining the term in 1973. Corroborating reports of the condition soon followed from Canada, European countries and SA.
FAS and FASD prevalence studies in South Africa
In SA, the condition remained under-reported until the end of the last century. The first FAS and partial FAS (pFAS) prevalence study was undertaken by May and Viljoen [10][11][12] in the Western Cape Province in 1997, reporting rates of 46 per 1 000 grade 1 learners in 1997, increasing to 74 per 1 000 in 1999 and 89.2 per 1 000 in 2001. In these studies, the focus was on FAS and pFAS involving all the consenting grade 1 learners in the study area.
Since 1997, various prevalence studies in SA have revealed FAS rates as high as 26 per 1 000 in Gauteng; [13] 64, 74.7, and 119.4 per 1 000 in Upington, Kimberley and De Aar, Northern Cape, respectively; [14,15] 6.7, 9.6 and 100 per 1 000 in the Saldanha Bay Municipality, the Witzenberg sub-district, and Aurora on the West Coast, respectively; [16] and 290 per 1 000 in the Winelands area. [17] The SA studies involve all the cultural groups living in these rural, peri-urban and urban communities (Fig. 1).
Two more evaluations are currently underway in a rural area in the Northern Cape and an urban area in the Eastern Cape.
In a 2015 study in a rural community in Australia, a FAS rate of 120 per 1 000 was reported, [24] this being the first study outside of SA to report figures close to the SA rates. In 2006, the National Institute on Alcohol Abuse and Alcoholism had already raised concern about this 'large and rapidly increasing public health problem'. [25]
Methods
In all the SA community prevalence studies mentioned above, the prevalence of FAS and FASD was determined by active case ascertainment, using a tiered screening and diagnostic approach that had been validated and used in SA before. [10,11] The studies were conducted on invitation only. These invitations were received from government departments, local municipalities and/or community leaders. Approval was obtained from the Health Research Ethics Committees of either the University of the Witwatersrand (until 2005) or Stellenbosch University (since 2005) and the relevant Provincial Departments of Education in SA. All the studies involved grade 1 learners (school entry level, 6 years or older) attending all the schools in the research area, or from randomly selected schools (Witzenberg sub-district). Parents/guardians of these children were invited to enrol their children in the study by signing an informed consent form. They could withdraw their children at any time during the study. Demographic data pertaining to names, addresses and dates of birth were obtained from the schools.
Screening and clinical assessments
A research team visited the schools on pre-arranged dates; parents/guardians were encouraged to attend these sessions. Members of the research team were blinded to the findings of other team members throughout the studies. Anthropometric assessments of head circumference (OFC) growth and height, as well as general physical examinations, were undertaken by primary healthcare nursing professionals with the support of community workers. As many of the studies were conducted in under-resourced areas, the physical examinations were important and detected a number of health and psychosocial problems unrelated to FASD. All of these were managed by the research team or through referrals if the relevant resources were available. The Centers for Disease Control and Prevention clinical growth charts were used to determine individual learners' centiles. [26] If a learner's measurements were ≤10th centile on OFC and/or height and weight, he/she was referred for a dysmorphology exam by an experienced medical doctor (also qualified as a human geneticist and paediatrician). The Hoyme checklist [27] was adapted to develop a standardised assessment tool yielding a dysmorphology score with a maximum of 50. The dysmorphology features included the primary facial features of FAS such as short palpebral fissures, a narrow upper vermilion border and a smooth philtrum of the upper lip. [15] School educators could also refer consented grade 1 learners with learning, health or psychosocial problems.
Learners with a dysmorphology score of 11/50 or higher were referred for neurodevelopmental assessments and maternal interviews. All the grade 1 learners, irrespective of whether they participated in the study or not, received refreshments (fruit juice and a muffin).
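The tiered referral logic described above reduces to two threshold checks. The sketch below is an illustrative reading of those rules, not the validated instrument: the function names and inputs are hypothetical, centiles are assumed to be pre-computed from the CDC growth charts, and the ambiguous 'and/or' in the growth criterion is read as OFC alone or height and weight together.

```python
# Hypothetical sketch of the two-tier screening rules described above.
# Centile values are assumed pre-computed from the CDC growth charts;
# all names here are illustrative, not taken from the original studies.

def refer_for_dysmorphology(ofc_centile, height_centile, weight_centile,
                            educator_referral=False):
    """Tier 1: refer if OFC is <= 10th centile, or height and weight both are,
    or if an educator refers the learner for learning/health/psychosocial reasons."""
    growth_flag = ofc_centile <= 10 or (height_centile <= 10 and weight_centile <= 10)
    return growth_flag or educator_referral

def refer_for_neurodevelopment(dysmorphology_score):
    """Tier 2: refer for neurodevelopmental assessment and a maternal interview
    if the adapted Hoyme dysmorphology score is 11 or more out of 50."""
    return dysmorphology_score >= 11

# Example learner flagged at tier 1 and scored 14/50 at the dysmorphology exam:
if refer_for_dysmorphology(ofc_centile=8, height_centile=15, weight_centile=12):
    if refer_for_neurodevelopment(dysmorphology_score=14):
        print("refer: neurodevelopmental assessment + maternal interview")
```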
Maternal interviews
Community workers were trained to use standardised questionnaires to interview biological mothers or guardians. The questionnaire was developed and refined by May and Viljoen [28][29][30] and further adapted in an unpublished Masters study. [31] Information was gathered to determine maternal risk factors before, during and after the gestation of the index child, pertaining to the mother's nutritional and health status, alcohol and nicotine usage, socioeconomic status, educational level, and the child's birth weight and health status immediately after birth.
Table 1. Key concepts and terms as described by Stratton et al. [9] from the Institute of Medicine in 1996
Prenatal alcohol exposure refers to the fetus being exposed to any amount of alcohol consumed by the biological mother during her pregnancy.
Fetal alcohol spectrum disorders (FASD) is an umbrella term used for a group of permanent, life-long and irreversible conditions caused by the teratogenic effects of alcohol on the fetus. The Institute of Medicine acknowledges the following four categories:
• Fetal alcohol syndrome (FAS) is the most severe form of FASD with at least 2 characteristic facial features, growth retardation (height and weight), head circumference <10th centile, and central nervous system damage with neurodevelopmental delays. A history of regular and/or heavy maternal prenatal alcohol exposure may be present or unknown.
• Partial fetal alcohol syndrome (pFAS) is characterised by some of the discriminating facial features, as well as growth retardation and neurodevelopmental delays. A confirmed history of prenatal alcohol use may or may not be present.
• Alcohol-related neurodevelopmental deficits (ARND) refer to structural and/or functional central nervous system damage with neurodevelopmental delays with a confirmed history of prenatal alcohol exposure.
• Alcohol-related birth defects (ARBD) are characterised by congenital skeletal, cardiac, eye, kidney or other organ imperfections with a confirmed history of prenatal maternal alcohol use.
During the interview, both the interviewer (community worker) and the mother/guardian were unaware of the FASD status of the child. Interviewees received a food voucher (ZAR85) to be used at a local food store as an incentive.
Neurodevelopmental assessments
Neurodevelopmental assessments were done by trained psychologists and an occupational therapist, using the Griffiths Mental Developmental Scales-Extended Revised (GMDS-ER). [32] Six developmental domains, namely locomotor (gross motor skills), eye-hand (fine motor coordination), personal-social (adaptive functioning), hearing-speech (verbal ability), performance (pattern construction and speed of performance) and practical reasoning (numerical, time and spatial concepts), were assessed. [15] For each domain, raw scores were converted to z-scores and ultimately a GQ (general quotient, an aggregated score based on all 6 sub-domains on which the learner is tested). Scores two or more standard deviations below the mean of the GMDS-ER were indicative of a significant delay. [33]
Case conference
Final diagnoses were made in a case conference, using the Hoyme criteria [27] for the diagnostic categories FAS, pFAS, ARBD, ARND and 'not FASD'.
To verify a diagnosis of FAS, at least two of the three discriminating facial features, plus growth retardation and neurodevelopmental delay, with or without prenatal alcohol exposure, were required. pFAS was diagnosed when two of the three FAS facial features were present, as well as growth retardation and neurodevelopmental delay, plus a maternal history of alcohol consumption during pregnancy. To confirm a diagnosis of ARND, confirmed maternal alcohol use and neurodevelopmental delays, unrelated to any other reason, were required. A diagnosis of ARBD was made when a birth defect such as a heart murmur was present, as well as a history of prenatal alcohol exposure. [15,16,27] Due to limitations related to the study and the instruments used, the primary focus was on the identification of FAS. The researchers therefore acknowledge that most cases of ARND and ARBD, and even some cases of pFAS, were missed. Therefore, the FASD rates in the research sites could in fact be higher than reported.
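Because the case-conference criteria amount to a small decision tree, the branching can be made explicit in code. The sketch below is a simplified, illustrative reading of the Hoyme-based criteria as summarised above (ordering FAS before pFAS and using the Table 1 head-circumference criterion for FAS); it is not the clinical instrument, and judgments made clinically, such as excluding other causes of delay, are reduced to boolean inputs.

```python
# Illustrative decision tree for the diagnostic categories described above.
# A simplified reading of the summarised Hoyme-based criteria, not the
# clinical tool; clinical judgment is collapsed into boolean inputs.

def classify(facial_features, growth_retardation, ofc_below_10th,
             neuro_delay, alcohol_confirmed, birth_defect):
    """facial_features: count (0-3) of the discriminating FAS facial features.
    neuro_delay: GMDS-ER score >= 2 SD below the mean, per the case conference."""
    if facial_features >= 2 and growth_retardation and ofc_below_10th and neuro_delay:
        return "FAS"   # exposure may be present or unknown
    if facial_features >= 2 and growth_retardation and neuro_delay and alcohol_confirmed:
        return "pFAS"  # requires confirmed prenatal alcohol use
    if alcohol_confirmed and neuro_delay:
        return "ARND"  # delays not otherwise explained, exposure confirmed
    if alcohol_confirmed and birth_defect:
        return "ARBD"  # e.g. a heart murmur plus confirmed exposure
    return "not FASD"

print(classify(facial_features=2, growth_retardation=True, ofc_below_10th=True,
               neuro_delay=True, alcohol_confirmed=False, birth_defect=False))  # FAS
```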
Discussion
The published FASD prevalence rates in SA unfortunately indicate that our country has the highest reported rates of this permanently crippling but totally preventable condition. In some of the researched areas, the prevalence of FASD is higher than the HIV/AIDS or tuberculosis rates, but it is yet to be acknowledged as a public health priority by the National Department of Health. Governmental prevention and awareness programmes are limited to a few high-risk areas in the Northern Cape (Kimberley, De Aar and Upington) and the Western Cape (Witzenberg and West Coast). The greatest awareness initiative at present in SA is, as controversial as it might be, driven by the wine and beer beverage industry. This is despite the constant threat of the National Minister of Health to ban alcohol advertising in SA. With the exception of the Vredenburg/Saldanha municipal area in the Western Cape, the industry is currently funding all the FASD training in SA provided to government employees in the Northern, Eastern and Western Cape provinces.
The unfortunate delay in the acknowledgement of this devastating but highly preventable disorder and the reluctance to take action are costing the current and future communities in SA dearly. The cost to families, communities and the country at large has a lifelong crippling effect on the psychosocial, vocational and overall wellbeing of the nation. A concerted effort involving the relevant government departments, civil society, private industry and the SA community at large is needed to break the cycle of misfortune perpetuating the ever-increasing FASD epidemic in SA.
Foundation for Alcohol Related Research (FARR)
FARR was established in 1997 to conduct South Africa's first FASD (fetal alcohol spectrum disorder) prevalence study. Since then, FARR has evolved to become one of the leading organisations in South Africa, not only with regard to world-class research, but also in FASD prevention and intervention.
FARR's mission is to establish sustainable awareness, prevention, intervention and training programmes designed to eliminate substance abuse, with the focus on FASD as a preventable disorder in South Africa.
During the 3-year projects conducted by FARR, we conduct comprehensive research in a designated area and implement our Healthy Mother Healthy Baby© prevention programme. The programme focuses on assisting pregnant women in having healthier, substance- and alcohol-free pregnancies to ensure the birth of healthier babies. Awareness programmes are also implemented involving the community at large, all possible stakeholders and service providers, as well as at-risk individuals. Finally, FARR's Training Academy provides specialised training courses for service providers, educators, health care professionals, social workers, therapists, etc.
In 2014, FARR established an FASD Support Group that aims at providing support and guidance for biological and foster parents, as well as guardians of children with FASD. | 2018-04-03T00:59:39.082Z | 2016-05-25T00:00:00.000 | {
"year": 2016,
"sha1": "52d898dd5a4bf5150640f1cf13a1ea2c6b77716e",
"oa_license": "CCBYNC",
"oa_url": "http://www.samj.org.za/index.php/samj/article/download/11009/7444",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d36317bd1a53b0ea4e16d33031562c92a40a50d7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
153654538 | pes2o/s2orc | v3-fos-license | New National Curriculum and the Impact in the Education Sector of Kosovo: Implications for Successful Implementation. Drita Kadriu
In August 2011, the Ministry of Education, Science and Technology (MEST) adopted a new national curriculum. Among other concepts, the Ministry committed to focus on learning outcomes, competencies, differentiated learning, and learner-centered education in the new national curriculum. The impact of these changes rippled across the education sector, requiring professional capacity building within the Ministry of Education, redesigning the teacher pre-service preparation program, restructuring the teacher in-service program, innovating staff development and teacher mentoring for professional capacity building in schools, re-writing textbooks and creating supporting teacher resource materials. It was calculated that it would take ten years to implement these curriculum concepts fully. The purpose of this paper is to analyse the consequences of a Decision to create a new national curriculum across the education sector and determine the best approach to implement it.
History of New Curriculum Framework
In 2001, post-conflict Kosovo developed its first national curriculum. It might be best described as subject focused and based upon learning objectives. The emphasis in this original curriculum design was on what teachers would do as they developed learning objectives for their students in each subject area. Curriculum resources necessary for the implementation of the 2001 curriculum were developed over the next six years. However, the 2001 curriculum was never totally implemented. It was after the declaration of independence in 2008 that the Minister of Education announced that MEST would develop and implement a new curriculum that would meet the needs of this new republic. A new curriculum framework was adopted by MEST in 2011 which focused on student competencies and learning outcomes. Currently, teachers mostly rely upon minimalist strategies of lecturing and assigning desk work to prepare students for final examinations. Teachers would have to revise their strategies. Instead of setting objectives for students to meet, they would create strategic learning activities that were designed to lead to students acquiring specific competencies. They would begin by imagining how learning activities would lead individuals to acquire different levels of competencies. These levels were organized in learning typologies, such as Bloom's Taxonomy, where children would demonstrate knowing, understanding, applying, analyzing, evaluating and synthesizing. Guided by the Typology, teachers would create an array of learning activities that engaged students in learning through group work, debates, project work, paper writing and designing. This change from setting objectives for students to planning how to engage students in learning would ripple throughout the entire sector.
Implications for the System of Education in Kosovo
Today, the impact of the changes is spreading across all levels of the educational sector in Kosovo. Members of different institutions within the sector are gradually becoming aware of the significance that these changes will have upon them. These changes represent an opportunity for a traditional culture to become a more progressive modern culture characterized by tolerance of diversity and valuing individual achievement. In response to the Minister's decision to create a new national curriculum, MEST refocused its efforts in all its departments as implications for the curriculum change became clearer.
The authors interpreted changes that were anticipated to be experienced at the different levels in the education sector against two major social science constructs: the perception of locus of control over the change (internal or external); and the source of motivation for the change (intrinsic or extrinsic). These two factors created a 2 by 2 matrix comprising 4 cells (see Analysis section below). Each change was analyzed and placed in the cell that best described it. The results were then compared and implications were drawn to inform MEST how best to approach the changes.
Educational Sector: Ministry Level
Department of Higher Education: Two years ago, the university was required to respond to an external accreditation decision by the Kosovo Accreditation Agency (KAA) that revoked the accreditation of 8 of the 10 teacher preparation programs in the Faculty of Education. With the new curriculum framework in place, MEST had another reason to require the Faculty of Education to respond to the accreditation report of 2010. The Minister created a Commission to review the current structures and programs for teacher preparation at the university and, as a result, he issued a Decision on restructuring teacher preparation programs which was supportive of the implementation of the new curriculum.
Teacher Training Unit: MEST required teachers to upgrade their skills to implement the curriculum as part of teacher licensing. As a result, MEST began to develop a system that recognized teachers' accomplishments. The Teacher Training Unit developed a new Management Information System to track in-service experiences that each teacher successfully completed. MEST began gathering professional and biographical data on 24,000 teachers in Kosovo to track teacher development activities and linked this with salary level and teacher licensing.
Curriculum Development Unit: Curriculum implementation documents and tools had to be developed in the Curriculum Development Unit at MEST. These documents included a Curriculum Framework which described the conceptual framework for the new curriculum; three Core Curriculum operational documents for all grade levels that defined learning outcomes and assessment criteria for each curriculum area and stage level; and a Subject Syllabus Template to guide teachers to revise the existing syllabi. Teacher performance standards and minimum teacher competencies were identified for teacher licensing and were linked to the demand-driven process.
Assessment Unit: The Assessment Unit within MEST also had to respond to the Minister's 2008 Decision to create a new curriculum. In the past, student assessment focused on final examinations and preparation for matura examinations after grade 12. With the new curriculum, the Assessment Unit would restructure assessment to include assessment for learning as well as assessment of learning; formative as well as summative assessment. Teachers would learn assessment criteria for topics (units) covered in classes and whole courses (subjects) as well as for the grade level and stages. They would learn to consistently gather data for learning assessment to guide them in revising their teaching strategies according to appropriate cognitive levels in Bloom's Taxonomy. Writing tests and exams would no longer be sufficient to assess the levels of learning identified in Bloom's Taxonomy. The Assessment Unit at MEST had to develop supporting regulations and appropriate strategies to free teachers and school directors from the existing 'one final exam' format and instead enable them to create multiple assessment activities to assess learning as required by the new curriculum.
Special Education Unit: The Special Education Unit at MEST responded to the mandate to implement a new curriculum.
Inclusion underlies outcomes-based, competency learning, and teachers would be expected to develop learning activities in their topics that optimize learning for all students with different abilities. Therefore teachers would adopt differentiated strategies to enable learners with a wide range of abilities. The Special Education Unit coordinated with preschool-level learning by developing learning standards and developing implementation guidebooks for pre-school education.
The Minister formally mandated the Rector to implement changes through a Decision signed in July 2012. Most of the terms of the Decision involved changes at the university or the faculty level. The decision focused on increasing the quality of performance of new teachers entering the teaching profession. The university was mandated to move all teacher preparation programs into the Faculty of Education. This would bring together people who understood the concepts of the new curriculum and who had capacity to support its implementation.
Dean's Office: The Faculty of Education would review and revise its two MA degree programs in readiness for accreditation in 2013. A major issue that existed with these programs was that while the Faculty could offer 60 ECTS of coursework, they did not possess the capacity to mentor students through 30 ECTS of candidacy, which included a research project and thesis writing. There were over 350 students who were ready to move to the candidacy phase but there were fewer than 10 mentors (professors who could supervise them). The Faculty considered naming co-mentors, young professors without PhDs, to provide supervisors for these 350 graduate students to complete their degrees successfully. The Faculty was given a mandate to create a VET in-service teacher preparation program by 2014 to ensure that VET teachers have the capacity to implement the new modular VET curriculum.
MEST determined that an existing in-service program had largely met its goal to provide an opportunity for practicing teachers with 2-year diplomas from Higher Pedagogical Schools to upgrade to a 4-year B Ed degree. This program might be closed, which would release a number of professors back into the Faculty of Education so it could better respond to the 2010 Accreditation Report.
MEST required that the Faculty of Education review its practice teaching program. Student teachers were required to successfully complete 22 weeks of practice teaching over a 4-year B Ed degree. Currently too many students were enrolled in the programs and most did not experience more than 16 weeks of practice teaching, and when they did they were most often in groups of up to 10 colleagues who had nowhere to sit, so they stood at the back of the classroom observing the teacher. Rarely was a mentor teacher assigned one student. A more effective practice teaching program would build the capacity of teachers to implement the curriculum in the future.
Educational Sector: Municipal Level
Director's Office: At the level of Municipal Education Districts (MEDs), officials reacted to the announcement of the new curriculum by changing many regulations which guided the work of school directors and teachers. One of the first changes was to create regulations to give teachers and school directors authority to evaluate student learning using formative and summative processes. School directors would encourage teachers to create innovative ways to assess student learning day to day, consistently and accurately. MED officials would support teachers learning to write deep and rich behavioral descriptions that could be used to evaluate student affective and psychomotor learning through participation in debates or class presentations. Another change for the MED was to authorize school directors to create annual development plans which would enable planning for staff development activities throughout the year. Municipality officials would emphasize to school directors that they must learn to become an 'educational leader.' In addition to the more easily understood and traditional roles of 'administrating' and 'managing,' school directors would learn the art of leading. The MEDs would provide regulations that captured the art of leading and insist on school directors practicing it. Educational leadership might be best described as leading by doing and motivating teachers through trust, caring and morality. School directors would need to shift from traditional monitoring and evaluating roles to those of mentoring and coaching. In order to implement the curriculum throughout the MED, school directors would create learning cultures in schools where teachers experimented with the best way to engage students in learning activities. When experiments did not work, school directors would ensure teachers were praised for their efforts and not only when successful. A fundamental principle for staff development is to optimize learning conditions for each learner, that is, to consider the whole child. This principle is especially important in summative assessment of learning. The school director would establish protocols for promoting students from one grade to another, one stage to another. Included in this protocol would be promotion meetings which involved all teachers and an educational psychologist who would consider the continuous assessment data of teachers, home situation information, and personal abilities for each student. The school director would consider students one grade at a time. Once the decision on promotion was recorded for each student in one grade, the school director would then review each student on the roster for the next grade until all students in the school were considered.
Competency-based teaching and student-centered approaches require teachers to develop learning activities that use many different teaching materials. As a consequence, school directors need to find sufficient space for teachers to store their teaching materials. In Kosovo, the fact that many schools were scheduled in shifts made it more difficult to find the necessary space for storage and use of teaching materials in school laboratory cabinets. School directors developed strategies that took account of the time that school facilities were used by various groups and classes. One proposal was to organize schedules in such a way that grades 0, 3, 4, 7 and 8 were in the first shift and grades 1, 2, 5, 6 and 9 were in the second shift. This schedule enabled the school director to schedule science labs for lower secondary classes in both shifts and organize the available space in the school building more efficiently and effectively.
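Expressed as a configuration, the proposed shift arrangement is easy to state and to check: it interleaves grades so that classes needing science labs fall in both shifts. A minimal sketch, assuming for illustration that grades 6-9 are the lower-secondary lab users:

```python
# The shift proposal described above as a simple configuration check.
shifts = {
    "first": [0, 3, 4, 7, 8],
    "second": [1, 2, 5, 6, 9],
}

# Assumption for this sketch: grades 6-9 are the lower-secondary classes
# that need science labs. Interleaving puts some of them in each shift,
# so lab space is used all day instead of sitting idle in one shift.
lower_secondary = {6, 7, 8, 9}
for shift, grades in shifts.items():
    lab_users = sorted(lower_secondary.intersection(grades))
    print(f"{shift} shift: grades {grades}, lab users: {lab_users}")
# first shift: lab users [7, 8]; second shift: lab users [6, 9]
```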
The new Kosovo Curriculum Framework, with its new approach based on competences and learning outcomes, required teachers to change the way they use formative assessment with students. Teachers would revise their practice to include regular and frequent formative assessment strategies to ensure that students achieved the core competencies. Tests or final exams simply measure the lower levels of the cognitive domain of Bloom's Taxonomy: knowledge and understanding. The new curriculum was competency-based and included learning knowledge, skills, attitudes and values.
In addition to the tests and final exams that measured student knowledge and levels of understanding, teachers would use other assessment methods to measure skills, attitudes and values of students. This change of practice was fundamental to how teachers embraced the new curriculum. If teachers failed to adopt multiple assessment practices, the new curriculum would not be implemented. School directors would shift from traditional monitoring and evaluation roles to mentoring and coaching roles to support teachers to understand and apply the concepts in the new curriculum. To shift to mentoring and coaching, school directors would become comfortable being in close, supportive relationships with teachers. They would focus daily on intrinsic motivation, for instance, wanting to make changes because the results would optimize learning for students. School directors would shift their style from controlling and compliance to providing teachers with opportunities to change so students could learn better. Teachers who experience self-worth during a change event have the potential of achieving sustainable professional development and life-long learning, whereas teachers who are forced to comply will not. School directors would learn to relate to teachers more empathetically at a human level and less from a traditional status or hierarchical level. As a consequence, teachers would more likely follow the lead of empathetic school directors out of respect for their values and substance of character. School directors play a critical role in determining the type of culture that evolves within schools. Relating with teachers through care and cooperation instead of control and compliance creates school cultures that support learning, and makes it safe for teachers to take risks and experiment with the new curriculum.
School directors could take advantage of the new curriculum and invite graduate students studying educational leadership at the Master level to come to their school to observe the interactions amongst the school director, teachers and students as they experimented with implementing the new curriculum. As outside observers, graduate students could provide valuable insight and feedback for the staff to consider.
Educational Sector: Classroom Level
Teachers are builders of the curriculum. All other professionals in the educational system are architects or suppliers. Those in the MEST Curriculum Development Unit were the architects who designed the curriculum and identified the assumptions and concepts within the design. Others in MEST provided supportive operational guidelines, textbooks or teaching and learning resource materials. The Municipal Education District (MED) staff and school directors provided direction, mentoring and coaching in support of teachers. But it was the teacher who would use formative assessment to determine not only how well students learned, but how well the learning activity worked. In addition, the teacher would learn how to assess summative student learning at the end of the year and at the end of the curriculum stage, to determine whether the student was ready to move to the next level of school.
Teachers share their ideas about possible learning activities with colleagues. In doing so, they are acting professionally and contributing to a safe, risk-free learning school culture. In Kosovo, teachers would respond more positively to their director's role as mentor and coach, rather than monitor and evaluator. Working in a risk-free school culture, teachers would feel safe to share their failures as well as their successes, for they would understand that as professionals they learned from their mistakes.
Teachers would constantly be on the look-out for new materials that might support learning activities and end up collecting many boxes of materials in the process for each of their courses. They would learn to share materials with colleagues and to rotate responsibilities to lead in monthly staff development activities scheduled by the school director.
Finally, teachers could gain experiential knowledge of concepts related to child-centered and outcomes-based learning. They would gain a deep understanding of constructivism and the related concept of the validity of multiple realities because of their experimentation. Differentiated learning strategies would no longer be just an abstract idea; rather, they would be something teachers practiced daily as they considered individual needs of students in their classrooms. Teachers would begin to feel comfortable with giving up control of learning to the learner, and facilitate activities that were designed to achieve learning of specific outcomes and competencies. Teachers would be able to successfully implement the subject syllabus by the end of the year by skillfully guiding students' learning through planned activities.
Analysis
We considered the impact of the new curriculum framework as it would be felt throughout the educational sector. In each case, implications of the change were placed in a category depending upon the type of Locus of Control and type of Motivation. For example, structural changes were placed in a category characterized by an external Locus of Control, whereas changes in teaching practice were placed in a category characterized by intrinsic motivation. In the graphic below, the changes were approximately equal between external and internal Locus of Control (20/22), whereas the changes were substantially different between extrinsic and intrinsic Motivation (7/35).
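The 2-by-2 tally behind these figures is easy to reproduce. In the sketch below, only the marginal totals (20/22 and 7/35) are taken from the analysis above; the per-cell split used to generate the illustrative records is invented so that the margins come out right.

```python
from collections import Counter

# Each predicted change is coded on two axes: locus of control and motivation.
# The per-cell split below is invented for illustration; only the marginal
# totals (20/22 locus, 7/35 motivation) come from the analysis above.
changes = (
    [("external", "extrinsic")] * 4 + [("external", "intrinsic")] * 16
    + [("internal", "extrinsic")] * 3 + [("internal", "intrinsic")] * 19
)

cells = Counter(changes)                     # counts per 2x2 cell
locus = Counter(c[0] for c in changes)       # external vs internal
motivation = Counter(c[1] for c in changes)  # extrinsic vs intrinsic

print(cells)
print(locus["external"], locus["internal"])              # 20 22
print(motivation["extrinsic"], motivation["intrinsic"])  # 7 35
```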
This finding suggested that MEST should focus on motivating members of the educational sector using intrinsic means. It could focus on 'WHY' the changes were necessary, and urge educators to establish a greater sense of urgency for change by focusing on critical principles like optimizing student learning, conforming with the Bologna Standards and enabling students to study across Europe and the world, and providing learning experiences that lead to graduates possessing the high-level skills, knowledge and attributes required for Kosovo to take its place amongst the nations of the world. MEST could urge school directors to identify 'early adopters' amongst teaching staff to create a coalition of change agents who could coach and mentor their colleagues. School directors could develop a vision for a preferred future state and communicate to teachers and members of the community a strategy to achieve this vision. They could empower teachers to mentor and coach their colleagues to implement the changes. They could create strategies to achieve 'quick wins' so community members and teachers can see the benefits of change. Through these actions, educators would shift from centralist control to school-based self-determination. Drawing upon collegial coaching and mentoring, teachers and school directors would emphasize the importance of staff development within safe and caring cultures founded upon the values of cooperation and acceptance of differences. In the end, to implement the curriculum successfully, teachers and school directors would find their own solutions to the change process, diminishing reliance upon so-called 'outside experts' who would otherwise visit schools and provide trainings and workshops.
Discussion
Most of the changes that will be made within the educational sector in Kosovo to implement the new curriculum framework are predicted to be based upon intrinsic motivation. That is, educators will be motivated to make the changes because they believe in them and support the final vision. This vision will include an educational system that optimizes learning for all regardless of race, ethnicity, gender, abilities, or socio-economic status. It will include an educational system that is recognized by nations throughout the world, enabling graduates to study overseas. And finally, it will include an educational system whose graduates possess the knowledge, skills and attributes required by Kosovo to take its place with the developed nations of the world.
Teachers are the implementers of the new curriculum while all other members in the educational sector are supporters. Teachers will implement changes if they experience self-worth, a sense of efficacy, adequacy and security in relationships with their school directors and amongst colleagues in their schools. These experiences are possible if school directors are perceived by teachers to be consistent and principled: to act as mentors and coaches; to care about teachers and students; to consistently demonstrate educational leadership skills; and to function for the benefit of student learning. School directors who communicate these principles congruently through structural, official and personal actions will influence teachers the most. These teachers will change their practice because they are attracted to the substance of character demonstrated by their educational leaders. The new curriculum will be implemented successfully if teachers are intrinsically motivated.
Vocational Education Unit: The Vocational Education Unit at MEST also developed a parallel, modular-based curriculum that reflected the new curriculum framework. The Unit supported the development of Centres of Competence designed to meet the needs of the labour sector in Kosovo. A Vocational Education Teacher (VET) In-service B Ed degree program would be developed to qualify VET teachers to be granted a teaching license.
Educational Sector: University Level
Rector's Office: At the university level, actions tended to be reactive rather than proactive. In 2010 an Accreditation Team reported on the B Ed degree programs in the Faculty of Education, where multiple programs failed to gain accreditation. In response to many recommendations made by the authors of the report, the Rector promoted closing 8 subject programs for lower secondary teaching and all the upper secondary programs that were offered by four different academic faculties. Instead, MEST agreed to create a new 3+2 consecutive program model for secondary teachers. This model would require aspiring secondary teachers to complete a 3-year academic degree in a teachable area through an academic faculty and then apply to the Faculty of Education for a 2-year Master of Education professional degree. The M Ed degree would focus on pedagogy, methodology, assessment, learning theory, education foundations, educational psychology and teaching practice. The Faculty of Education would create this two-year M Ed program by 2014. Reacting to MEST's development of national teacher competencies for beginning teachers, the Faculty of Education would review all the B Ed degree programs and revise them to ensure they covered all the competencies in the national teacher competencies profile.
Table 1.0 Predicted changes at the Ministry of Education, Science and Technology Level
Recognizing the current lack of capacity within the education system to implement the new curriculum, the Curriculum Development Unit would develop a strategy to invite graduate students in the Faculty of Education to conduct research in the area of curriculum and teacher development, studying how teachers revised the course syllabi and piloted the changes in their classrooms. Graduate students in the area of educational leadership development would study how school directors developed learning cultures within their schools and mentored and coached teachers through the syllabus change process.
State Council for Teacher Licensing: MEST needed to address how to acknowledge the work of mentor teachers who supervised practice teachers, and discussed granting contact hours of professional development that would raise their teacher licensing level. The State Council for Teacher Licensing would revise the way it selected professional development training programs for teacher in-service by using a demand-driven model instead of a supply model.
Table 2.0 Predicted changes at the University Level
• Create new M Ed Degree for Secondary Teachers
• Revise MA (Leadership and Curriculum & Teaching) degrees in readiness for Accreditation
• Create new B Ed In-service degree for VET
Internal Locus of Control by Extrinsic Motivation
• Minister's Decision on Teacher Preparation
• Consolidate all Teacher Preparation programmes under Faculty of Education
• Close the B Ed In-service degree (Higher Pedagogical School trained teachers)
Internal Locus of Control by Intrinsic Motivation
• Redesign the Practice Teaching 22 week programme
• Revise the Mentor training programme to support the Practice Teaching 22 week programme
Table 3.0 Predicted changes at the Municipality Level
• Create new school-based staff development regulations for school directors
It was commonly understood that it was not practical to implement a national curriculum through countless workshops offered throughout the country. Instead, MED officials would create regulations to mandate annual school-based staff development plans. School directors would become leaders, and teachers on staff would become experts to support various aspects of the new curriculum design. The school would become a professional organization where experimentation by teachers was recognized by colleagues and thus teachers would be rewarded. Teachers could gather into groups to share their ideas about learning activities and help each other by collecting materials and engaging in peer-mentoring or peer-coaching.
Educational Sector: School Level
School Director's Office: The school level directly functions to support teachers and students. Most communication with parents and the local community occurs at this level. In response to initiatives by the MED, school directors determined professional development training within the school based upon the needs of individuals, groups of teachers and the overall school staff. Using a demand-driven model, the school community would decide what support they required and the director would identify teachers on staff who could provide it or would seek resources through the MED.
Table 5.0 Predicted changes by Teachers at the Classroom Level
• Develop trainings for teachers on methodology that is learner-centred and constructivist, with differentiated learning strategies to achieve learning outcomes and learning objectives through learning activities
Table 6.0 Number of changes predicted by Type of Motivation and Locus of Control | 2017-09-07T13:33:33.764Z | 2014-04-01T00:00:00.000 | {
"year": 2014,
"sha1": "92c3d00813f3841ccef8d135e8ae84fb756eb6e4",
"oa_license": "CCBYNC",
"oa_url": "https://www.richtmann.org/journal/index.php/jesr/article/download/2853/2815",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "92c3d00813f3841ccef8d135e8ae84fb756eb6e4",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
231649040 | pes2o/s2orc | v3-fos-license | A systematic review examining the clinical and health-care outcomes for congenital heart disease patients using home monitoring programmes
Objectives: This review aimed to present the clinical and health-care outcomes for patients with congenital heart disease (CHD) who use home monitoring technologies. Methods: Five databases were systematically searched from inception to November 2020 for quantitative studies in this area. Data were extracted using a pre-formatted data-collection table which included information on participants, interventions, outcome measures and results. Risk of bias was determined using the Cochrane Risk of Bias 2 tool for randomised controlled trials (RCTs), the Newcastle-Ottawa Quality Assessment Scale for cohort studies and the Institute of Health Economics quality appraisal checklist for case-series studies. Data synthesis: Twenty-two studies were included in this systematic review, comprising four RCTs, 12 cohort studies and six case-series studies. Seventeen studies reported on mortality rates, with 59% reporting that home monitoring programmes were associated with either a significant reduction or a trend for lower mortality and 12% reporting that mortality trended higher. Fourteen studies reported on unplanned readmissions/health-care resource use, with 29% of studies reporting that this outcome was significantly decreased or trended lower with home monitoring and 21% reporting an increase. Impact on treatment was reported in 15 studies, with 67% of studies finding that either treatment was undertaken significantly earlier or significantly more interventions were undertaken in the home monitoring groups. Conclusion: The use of home monitoring programmes may be beneficial in reducing mortality and enabling earlier and more timely detection and treatment of CHD complications. However, this evidence is currently limited due to weaknesses in study design.
Introduction
Congenital heart disease (CHD) 1 is an umbrella term for a broad range of birth defects that affect the function of the heart. Approximately 1% of babies are born with a heart or circulatory condition, which is usually a form of congenital heart disease. The mean prevalence of CHD globally was reported to be 8.224 (SD = 7.817-8.641) per 1000 for the years 1970-2017. 2 It was estimated that in 2017, there were 11,998,283 people living with CHD globally. 3 Many patients with CHD will require lifelong follow-up in order to monitor for signs of deterioration. In light of this, novel strategies for managing these patients may be useful in terms of reducing the burden of repeated health-care attendances, but also by empowering patients to be proactive in the management of their condition.
Telehealth may be a useful tool in the management of cardiac patients. According to the World Health Organization, telehealth is 'the use of telecommunications and virtual technology to deliver health care outside of traditional health-care facilities'. 4 A systematic review of patients with heart failure reported that telehealth reduced all-cause mortality by 20% (95% confidence interval (CI) 0.68-0.94) and reduced heart failure hospitalisation by 29% (95% CI 0.60-0.83) compared to usual care. It would therefore be of interest to determine any health benefits afforded to patients with CHD through the use of home monitoring, 5 whereby patients would monitor various physiological parameters at home, with the data obtained being reviewed by health-care staff. There has been one paper published to date 6 which has reviewed the use of telehealth for monitoring this patient population. However, this paper was not a systematic review and did not undertake quality appraisal of the included papers. Additionally, this review was published in 2018 and, as such, only included papers published up to 2017, with only the PubMed database searched. This current review will therefore provide a rigorous examination of the literature in this area.
The aim of this systematic review was to provide an overview of the literature on home monitoring technologies used by patients with CHD. The objectives were (a) to present clinical and health-care outcomes for patients who use home monitoring technologies, and (b) to identify areas for research on the use of home monitoring technologies by patients with CHD.
Search strategy
A computerised search of the following databases from inception to November 2020 was performed: Medline, Embase, Cochrane, CINAHL and Scopus.
Search terms comprising Medical Subject Headings (MeSH), database-specific subject headings and key words were employed: EXP TELEMEDICINE OR telehealth* OR tele-health* OR telemedicine* OR tele-medicine* OR telemonitor* OR tele-monitor* OR telemanagement* OR tele-management* OR teleconsult* OR tele-consult OR telecare* OR tele-care* OR telepharmacy* OR tele-pharmacy* OR telenurs* OR tele-nurs* OR 'remote monitor*' OR 'remote consult*' OR 'remote care*' OR 'mobile care*' OR ehealth* or e-health* OR mhealth* OR m-health* OR 'home monitor*' OR 'self monitor*' OR 'self manage*' OR 'home manage*' AND EXP HEART DEFECTS, CONGENITAL OR congenital heart disease* OR congenital heart malformation* OR adult congenital heart disease* OR ACHD* OR grown up congenital heart disease* OR GUCH OR (congenital adj3 heart) OR (congenital adj3 cardiac) OR (cardiac adj3 malformation).
Grey literature was sought from the British Library (ETHoS), opengrey.eu, greylit.org, BMJ Best Practice, US Food and Drug Administration (FDA), National Institutes of Health, National Institute for Health and Care Excellence (NICE), Kings Fund, Nuffield Trust and Google Scholar. Hand searching of reference lists of articles was also performed.
The lead review author (R.C.) screened all records identified for inclusion/exclusion criteria. All identified publications were read as either abstracts or full texts. Papers were included if they were full text investigating home monitoring interventions actively used by the patient (or their carer) and published in the English language. Papers were excluded if they were abstracts only, citations, letters, case studies, investigating interventions used for the diagnosis of CHD, technical studies of telehealth equipment and home monitoring that was passively done without any action by the patient. Following the initial screening process, a full text evaluation of 25 articles was performed to determine correlation with the inclusion criteria. A meeting was held between the four members (R.C., C.M.H., S.Mc.F. and J.C.) of the review team to discuss findings, until inclusion was agreed by consensus.
The lead author (R.C.) extracted the necessary data from each included study into a pre-formatted datacollection table.
Quality assessment and risk of bias
Quality assessment and risk of bias were determined for all studies in this review. A meeting was held with all members (R.C., C.M.H., S.Mc.F. and J.C.) of the review team to determine the quality scores and risk of bias for all studies. Agreement on quality scores and risk of bias was gained by consensus. The quality appraisal tools that were used were as follows: the Cochrane Risk of Bias 2 tool 7 for randomised controlled trials (RCTs), the Newcastle-Ottawa Quality Assessment Scale 8 for cohort studies and the Institute of Health Economics (IHE) quality appraisal checklist 9 for case-series studies.
Results
A total of 2025 articles were identified by various literature searches. Twenty-two articles were deemed appropriate for inclusion in this systematic review (Figure 1).
Study characteristics
The main study characteristics are presented in Table 1. These studies were published between 2001 and 2020. The main types of study were RCT, cohort and case series. In total, they included 2809 participants, with the number in individual studies ranging between 14 and 494 participants.
Risk of bias
The Cochrane Risk of Bias 2 tool was used for all RCTs (N = 4; Figure 2). All four studies 9-12 were assessed as having some concerns, with all indicating a moderate risk of bias. The most common deficit in methodological quality was in the domains for randomisation and selection of the reported result. No study reported whether allocation sequence was concealed until participants were enrolled and assigned to the intervention. No study reported whether the data were analysed in accordance with a pre-specified analysis plan or whether the outcome assessors were blinded to the intervention.
Newcastle-Ottawa Quality Assessment Scale
The Newcastle-Ottawa Quality Assessment Scale was used for the quality appraisal of all cohort studies (N = 12; Figure 3). The most common shortfalls in methodological quality related to the domains of comparability and outcome. Five studies scored zero in the domain of comparability. [13][14][15][16][17] These studies did not provide information on whether they controlled for factors that could affect the outcome measured. Only four studies 16,[18][19][20] reported that the outcomes measured were controlled for multiple variables. One study 13 was rated as low quality for the outcome domain. This study did not provide details of how the outcome was assessed or whether all subjects were accounted for.
IHE quality appraisal checklist
The IHE quality appraisal checklist for case-series studies was used to assess the quality of the remaining studies which used a case-series design methodology (N = 6; Figure 4). The most common deficiencies in methodological quality were for the domains of competing interests and sources of support, statistical analysis and study design. Two studies 21,22 provided details on competing interests and sources of support, with the remaining four studies [23][24][25][26] not providing any information in relation to this. The domain for statistical analysis also identified three studies 23 to have a low level of quality based on the IHE quality appraisal checklist. Overall, the risk of bias results indicate that the RCTs were all rated as having some concerns; cohort studies were rated as being of moderate to high quality, with 64% being rated as high quality; and case series were all rated as moderate to high quality, with 67% of these studies receiving a rating of high quality. Whilst many of the case-series and cohort studies were judged to be high quality, it must be borne in mind that these studies were less robust by virtue of their design methodology compared to the randomised controlled trials.
Synthesis of results
Thematic analysis of outcome measures was undertaken by the lead author (R.C.) who systematically examined the pre-formatted data-extraction table to identify outcome measures recorded by each of the papers. Themes were discussed and agreed with the coauthors. There were three common outcome measures: mortality, unplanned readmissions/health-care resource use and impact on treatment.
Mortality
There were 17 studies that included mortality as an outcome measure (Table 2). Five studies found that home monitoring programmes were associated with a significant improvement in mortality. 13,14,18,27,28 Mortality was reduced by between 10.2% and 17%. The study with the smallest reduction in mortality reported 12.4% mortality for historical controls compared to 2.2% for the study group. 29 The largest reduction in mortality reported 17% mortality for historical controls compared to 0% in the study group. 28 Five studies found that mortality rates in the home monitoring group trended lower, with mortality reduced by between 3.8% and 7.6%. 15,16,20,30,31 The study with the biggest reduction in mortality reported a mortality rate for historical controls of 13% compared to 5.4% in the home monitoring group (p = 0.2). 31 The study with the smallest reduction in mortality reported that mortality was reduced from 12.1% to 8.3% (p = 0.924) with home monitoring. 30 While there was a trend for lower mortality in these studies, the results were not significant. One study reported 0% mortality in both groups. 9 There were four studies that reported mortality rates without a comparator. [23][24][25][26] Mortality rates in these studies ranged from 0% to 9%.
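The quoted mortality reductions are absolute differences between control and intervention rates. A short sketch verifying the reported 10.2-17 percentage-point spread from the rates cited in this paragraph:

```python
# Absolute risk reduction (ARR) = control mortality - home-monitoring mortality,
# in percentage points, for the two boundary studies quoted above.
studies = {
    "smallest reduction": (12.4, 2.2),  # historical controls vs study group, %
    "largest reduction": (17.0, 0.0),
}

for label, (control, monitored) in studies.items():
    arr = control - monitored
    print(f"{label}: {control}% -> {monitored}% (ARR {arr:.1f} points)")
# 10.2 and 17.0 points, matching the 10.2%-17% range reported above.
```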
Two studies found that mortality trended higher for the home monitoring groups. 17,19 One study reported that mortality in the home monitoring group was 15% versus 10% in the historical controls. 17 In the other study, mortality in the home monitoring group was 5.4% versus 2.4% in the historical controls (p = 0.71). 19
Unplanned readmissions/health-care resource use
Unplanned readmissions/health-care resource use was reported in 14 studies (Table 3). Four studies [23][24][25][26] reported data for the home monitoring cohort, but there were no data for comparison. Three studies reported data on breaches in surveillance criteria (e.g. SpO2 < 70%). 13,25,27 Between 31% and 59% of patients breached surveillance criteria, with up to 57% of patients requiring observation and being admitted to hospital. Readmissions were reported in two studies. 21,26 One study 26 reported that 41% of patients had 27 readmissions, and another study 32 reported that there were two (3.6%) presentations to ED and 1 (1.8%) hospitalisation. The remaining nine studies included data for a control group. There was a significant increase in unplanned readmissions, 12,18,31 with rates up to 35% higher. 18 However, the duration of the readmissions was shorter for the home monitoring group compared to controls, with a median of three days (interquartile range (IQR) 2-7 days) versus four days (IQR 2-10 days; p = 0.002). 31 Conversely, unplanned readmissions, readmission days and emergency room visits trended lower with the use of home monitoring with an app, but this did not reach statistical significance. 28 Two studies found no difference in rates of unplanned readmissions/health-care resource use. 16,30 There was no significant difference in unscheduled readmissions (any cause) between weekly versus daily monitoring, or between no monitoring versus daily monitoring. 16 Contact with health service professionals was advised 18% less often with videoconferencing compared to phone support (p < 0.01). 10 The probability of being admitted at least once to hospital was 37% lower with videoconferencing support compared to standard care (p = 0.004). 11 Home monitoring using the app was associated with significantly shorter unplanned length of stay in the intensive care unit (1 (IQR 1-2) vs. 6 (IQR 1-16); p = 0.03). 9
Impact on treatment
The impact on treatment was reported in 15 studies (Table 4). Seven studies reported that age at stage 2 palliation (S2P) was younger in the home monitoring group,14,15,20,26,27,30,31 and age at S2P was 22 days earlier with home monitoring (p < 0.002).31 S2P in the home monitoring groups was earlier (p = 0.016 and p = 0.001).14,27 One study found that the age at S2P for the home monitoring group was 150 ± 52 days versus 120 ± 114 days for the historical controls (p < 0.001).30 A study comparing weight and/or SpO2 monitoring against no monitoring reported that S2P was 26 days earlier for those with SpO2 monitoring versus no SpO2 monitoring (p = 0.002) and 21 days earlier for those with weight monitoring compared to no weight monitoring (p = 0.004).16 It was also reported that S2P was carried out early in 36 patients (23% of participants) who were readmitted due to home monitoring events.25 Five studies reported on interventions and treatments that were undertaken due to breaches in home monitoring criteria.13,17-19,26 Two studies included historical controls and found that there were more interventions/procedures undertaken in the home monitoring group,18,19 with a 78% increase in percutaneous interventions (p < 0.01)18 and more major post-Norwood procedures (p = 0.02)19 in the home monitoring groups. Three studies did not have a comparator and reported that between 17% and 57% of patients breached surveillance criteria and required intervention or diagnostic catheterisation.13,17,26 One study found that the mean age at detection, admission and treatment of residual lesions after shunt placements was younger in the home monitoring group (p < 0.005).15 One study reported that in 25% of patients with a previous diagnosis of arrhythmia, a recurrence of the arrhythmia was confirmed and treatment initiated.22 In the final study, it was reported that medication changes were made in 11% of patients based on home monitoring.21
Discussion
The period between S1P and S2P is known to be a precarious time for CHD patients, with haemodynamic instability. 32 Just over half (56%) of the studies found a significant reduction or a trend for lower mortality with home monitoring.
Interstage mortality (i.e. mortality that occurs in patients between S1P and S2P) is reported in the literature to be between 11% and 19%.33-35 Reported mortality rates compare favourably to the literature, with 10 studies demonstrating lower mortality.13-16,18,20,27,28,30,31 A 15% mortality was noted by one study,17 while mortality in the remaining studies was <11% when home monitoring was used. This would imply that the use of home monitoring may have beneficial effects, leading to lower mortality in this patient population. No minor or major clinical complications were reported for patients who were home monitoring their international normalised ratio (INR) using a prothrombin time (PT)-INR point-of-care device,23,24 which suggests that mortality was unrelated to the INR levels.
Studies that reported outcomes for mortality included RCTs (n = 1),9 cohort studies (n = 12)13-20,27,28,30,31 and case series (n = 4).23-26 Whilst 59% of studies reported either a significant reduction or a trend for lower mortality, these were all cohort studies, with 9/10 using historical controls. Furthermore, 3/10 studies were rated as moderate quality. The two studies that reported a trend for higher mortality also used historical controls, which is problematic and introduces bias into the research: medical and surgical treatments may have improved between the two time periods, data may be recorded differently, and so on. Therefore, the evidence for this outcome is weak.
Between 1.8% and 63% of patients had unplanned readmissions. In the context of the available literature, these results compare favourably. A study in 2016 reported that 65.5% of patients had at least one unplanned readmission in the interstage period.36 Similarly, a more recent study in 2018 found that 75% of patients had unplanned readmissions.37 In 62.5% of the studies that reported on unplanned readmissions, rates were lower than reported in the literature for interstage patients.36,37 An important finding from this review is that no studies reported unplanned readmissions with home monitoring significantly more often than expected from the wider literature. Between 31% and 61% of patients required observation and/or evaluation as a result of data from home monitoring. Two studies10,11 found that contact with health service professionals was advised less often. This suggests that home monitoring is a valuable resource for detecting clinical deterioration in this patient population.
The increases in unplanned readmissions and health-care resource use may be explained by the fact that subtle deteriorations in clinical status could be picked up more frequently due to the home monitoring protocols. These increases should be viewed alongside mortality rates. The two studies that found a significant increase in unplanned readmissions also reported either a significant reduction or a trend for lower mortality rates with the use of home monitoring.
Studies that reported data for this outcome included RCTs (n = 4),9-12 cohort studies (n = 7)13,16,18,27,28,30,31 and case series (n = 3).21,25,26 Three of the four studies which reported lower unplanned readmissions/health-care resource use were RCTs, all judged to have some concerns regarding bias. The other study was a cohort study using historical controls and was rated as high quality. Two of the three studies that reported an increase in relation to this outcome were cohort studies using historical controls, with one study being an RCT. The remaining studies reported either no differences between groups or had no controls. Based on the design methodologies and quality appraisal scores, there is stronger evidence that home monitoring reduces unplanned readmissions/health-care resource use. However, the evidence remains limited for this outcome.
Of the 15 studies13-22,25-27,30,31 that reported on the impact that home monitoring had on treatment, 47% found that S2P was performed at a younger age in the home monitoring groups.13,14,20,25,27,31 The timing of S2P varies among centres and is a clinically led decision. Literature on the optimal timing of S2P reports that timing differs depending on whether patients are low, intermediate or high risk. A study in 2018 reported that S2P performed after three months of age was associated with maximal 2-year survival in low/intermediate risk infants.38 Additionally, another study reported that the median age at S2P for the study cohort was 155 days (IQR 109-214 days).39 The earlier timing of S2P may be a reflection of earlier detection of clinical deterioration, or of clinicians being provided with more data on the patient, enabling them to make a more informed decision on the timing of S2P.
In studies without a control group, intervention rates were reported to be between 37% and 38% of patients. The rates of re-intervention in these studies compare favourably to the literature, with a retrospective analysis of 1157 interstage patients reporting that 50% of patients required reintervention.40 Studies for this outcome included cohort studies (n = 11)13-20,27,30,31 and case series (n = 4),21,22,25,26 with all but one being of high quality. There were 9/15 (60%) studies that found that either treatment was undertaken significantly earlier with home monitoring or significantly more interventions were undertaken in the home monitoring groups.14,16,18-20,25,27,30,31 Eight of these were cohort studies using historical controls.14,16,18-20,27,30,31 Of the remaining studies, with either no data for comparison or no control group, only one was rated as high quality, and this was a case series design; the rest were rated as moderate quality. However, whilst most studies reported either earlier treatment or increased interventions, given that these were cohort studies which primarily used historical controls, the methodological shortfalls in the studies mean that the evidence is limited for this outcome.
Strengths and limitations of the review
This review restricted the search to English language articles and excluded passive forms of home monitoring, which may have caused some relevant research to be excluded. Additionally, this review only considered clinical and health-care outcomes, and as such, qualitative outcomes such as anxiety levels, patient perceptions and so on have not been considered.
There have been no systematic reviews published on this topic to date, and this review systematically examined the full scope of the literature on home monitoring by patients with CHD. As such, it provides a comprehensive synthesis and interpretation of the data available.
Areas for further research
This systematic review included patients with CHD at any age who were undergoing home monitoring of their CHD. Only 2 of the 20 included studies were undertaken in adults with CHD. Improvements in survival rates of patients with CHD have led to increasing numbers of children and adults living with complex CHD. Approximately 90% of babies born with cardiovascular abnormalities are expected to reach adulthood,41 and as surgical techniques continue to improve, this could increase further in the future. In light of this, further research should focus on home monitoring interventions for adults with CHD, as home monitoring of this patient population has the potential to relieve pressure on the health service.
Conclusion
This systematic review, examining the clinical and health-care outcomes for CHD patients using home monitoring programmes, identified that home monitoring may be beneficial in reducing mortality and in enabling earlier, more timely detection and treatment of CHD complications.
"year": 2021,
"sha1": "5deb42c4d98a10eb3af1f0bc63f468ee2f3b32d6",
"oa_license": "CCBYNC",
"oa_url": "https://pure.ulster.ac.uk/en/publications/9858194e-6ee3-4e81-94a0-3e795bde9c19",
"oa_status": "GREEN",
"pdf_src": "Sage",
"pdf_hash": "83aaaa7476663320384f1e283a771438b55e630f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Synchronized activation and refolding of influenza hemagglutinin in multimeric fusion machines
At the time of fusion, membranes are packed with fusogenic proteins. Do adjacent individual proteins interact with each other in the plane of the membrane? Or does each of these proteins serve as an independent fusion machine? Here we report that the low pH–triggered transition between the initial and final conformations of a prototype fusogenic protein, influenza hemagglutinin (HA), involves a preserved interaction between individual HAs. Although the HAs of subtypes H3 and H2 show notably different degrees of activation, for both, the percentage of low pH–activated HA increased with higher surface density of HA, indicating positive cooperativity. We propose that a concerted activation of HAs, together with the resultant synchronized release of their conformational energy, is an example of a general strategy of coordination in biological design, crucial for the functioning of multiprotein fusion machines.
Introduction
Membrane remodeling in numerous cell biological processes is mediated by specialized fusion proteins. Most of the existing models of protein-mediated fusion suggest that the fusion site is surrounded by multiple proteins or protein machines providing conformational energy to drive the rearrangement of two lipid bilayers into one. Little is known about the mechanism by which fusion proteins coordinate their activity, and also about the interactions, at present hypothetical, between fusion proteins in the plane of the membrane. These interactions must be transient and/or weak to allow the dissociation of protein clusters enclosing an expanding fusion pore. The very existence of these interactions still awaits confirmation, even for the well-characterized fusion reaction mediated by influenza hemagglutinin (HA) (Wiley and Skehel, 1987; Skehel and Wiley, 2000).
HA is a homotrimeric envelope glycoprotein with individual monomers synthesized as a single polypeptide chain (referred to as HA0). Each monomer is cleaved by a trypsin-like protease into two disulfide-linked subunits, HA1 and HA2. Upon acidification of the endosome, this HA1-HA2 form undergoes major changes to acquire a fusion-competent conformation. In the initial HA conformation, the conserved, hydrophobic NH2-terminal peptide of HA2 (the fusion peptide) is hidden within the center of the trimeric stem (Wiley and Skehel, 1987). Low pH-dependent activation from this initial conformation to a fusion-competent one involves extrusion of the fusion peptide, i.e., its insertion into the target or viral membrane (for review see Gaudin et al., 1995), and extension of the triple-stranded, α-helical coiled-coil of HA2 together with 180° inversion of its viral, membrane-proximal part (Carr and Kim, 1993; Bullough et al., 1994; Wharton et al., 1995; Kim et al., 1998). This conformational change of HA also tilts the molecule from the normal orientation toward the membrane (Tatulian et al., 1995), relocates the HA1 subunit from its initial place at the top of the HA molecule, and makes the S-S bond between HA1 and HA2 accessible to reducing agents such as dithiothreitol (DTT). This acidic form of HA is further susceptible to proteolysis by thermolysin and proteinase K (White and Wilson, 1987; Wiley and Skehel, 1987; Kemble et al., 1992).
Conformational changes in HA after low pH application take place in the absence of a target membrane and in a truncated fragment of HA, i.e., its solubilized ectodomain (Wiley and Skehel, 1987). Low pH pretreatment of an HA-expressing membrane (HA-membrane) in the absence of a target membrane causes HA inactivation, detected as a decrease in the fusion rate after the application of an additional pH pulse in the presence of a target membrane (Puri et al., 1990). Studies of the specific HA activation and inactivation mechanisms have revealed a notable difference between HA molecules of two widely studied HA subtypes: H3 (e.g., the X31 and Udorn strains) and H2 (e.g., the A/Japan/305/57 strain). Whereas X31 HA completely inactivates after brief acidification in the absence of a target membrane, Japan HA retains most of its fusogenic activity (Puri et al., 1990; Korte et al., 1999). This experiment has been interpreted as showing that X31 HA has much faster inactivation kinetics than Japan HA. This putative differential inactivation of the H3 and H2 subtypes has been used as a basis for revealing the pathways of protein refolding and membrane fusion (Puri et al., 1990; Ramalho-Santos et al., 1993; Korte et al., 1997; Korte et al., 1999).
Do HA refolding and membrane fusion develop at the level of the individual trimer? Available crystallographic studies of the initial and final HA conformations (for review see Skehel and Wiley, 2000) did not reveal any specific protein domains that might be involved in trimer-trimer interactions. On the other hand, the notion that viral fusion is mediated by a concerted action of multiple fusion proteins is supported by numerous functional studies (Ellens et al., 1990; Gutman et al., 1993; Blumenthal et al., 1996; Danieli et al., 1996; Gaudin et al., 1996; Plonsky and Zimmerberg, 1996; Chernomordik et al., 1998; Markovic et al., 1998; but see Gunther-Ausborn et al., 2000). Thus, the key question of whether HA trimers interact with each other during conformational rearrangement and fusion has remained open.
Here, we report that triggering the conformational change in an individual HA trimer is affected by the proximity of other HAs. We modified the surface density of Japan and X31 HA and assayed the transition of HA from its initial to its low pH conformation both as the development of HA susceptibility to S-S reduction and as the digestion of the exposed fusion peptide by thermolysin. Conformational change in HA was also detected functionally as inactivation of HA by low pH pretreatment in the absence of a target membrane. As expected, Japan HA-membranes retained fusogenic activity after longer low pH incubations than did X31 HA-membranes. Our results suggest that this difference reflects slow activation, rather than inactivation as formerly thought (Puri et al., 1990; Gutman et al., 1993; Korte et al., 1999). More importantly, we show that in both slow- and fast-activating strains, the percentage of activated HA increases with an increase in HA density, indicating that HA activation involves positive inter-trimer cooperativity. We propose that this concerted activation of adjacent proteins, which allows synchronized release of their conformational energy, is the mechanism by which multiple fusion proteins coordinate their activity at the fusion site.
General approach
Upon acidification, HA molecules leave their initial, metastable conformation and undergo a transition toward their lowest energy state. Loss of the initial HA conformation was assayed after reneutralization. Hereafter, all HA conformations different from the initial one either in sensitivity to proteases and DTT or in ability to mediate fusion at low pH will be referred to as the low pH-activated conformations. We reasoned that if the conformational change of an HA trimer is somehow affected by its interactions with adjacent trimers, then the efficiency of HA activation will depend on the density of HA. To evaluate whether the observed dependencies are specific for the HA of a particular strain of influenza or instead reflect the general properties of HA activation, we first studied activation for two divergent influenza subtypes. Then, we focused on activation as a function of HA density, using a number of approaches in a number of systems.
Activation of Japan HA is much slower than that of X31 HA

In our fusion inactivation assay, cells expressing Japan or X31 HA were first treated with a low pH pulse in the absence of a target membrane (the activating pulse), and then were reneutralized and incubated with red blood cells (RBCs) for 15 min. To trigger fusion, a second low pH pulse (the triggering pulse) was necessary, as RBCs bound to HA-expressing cells (HA-cells) treated solely with the activating pulse gave no fluorescent dye redistribution. An irreversible conformational change of HA molecules at low pH leads to a complete loss of their fusogenic activity (Skehel and Wiley, 2000). Therefore, a higher degree of HA activation during the activating pulse resulted in a more profound HA inactivation, and thus a lower extent of fusion after the triggering pulse. An activating pulse at pH 4.9 for 10 min activated, and then inactivated, most of the available X31 HA molecules, giving no fusion after a 2-min fusion-triggering pulse of pH 4.9 (Fig. 1 A, closed symbols). In contrast, for Japan HA, the same activating pulse (pH 4.9, 10 min) had no effect on fusion observed after a 2-min triggering pulse of pH 4.9. The lack of fusion inhibition for Japan HA can be explained either by a low level of HA activation during the activating pulse or, as suggested by Puri et al. (1990), by a very slow inactivation of Japan HA. If the former were true, one would expect to detect Japan HA inactivation if the pH of the fusion-triggering pulse were shifted from 4.9 to a suboptimal 5.3. Owing to an excess of fusion-competent HA molecules, fusion is very robust at pH 4.9, and thus the extent of fusion is not sensitive to small changes in the number of HA molecules capable of mediating fusion. In contrast, the number of activated HA molecules available for fusion at pH 5.3 is significantly lower, and as a result, the system is much more sensitive to variation in pH. (Fusion at pH 5.3 is also more sensitive to membrane lipid composition [Chernomordik et al., 1997] and temperature [Melikyan et al., 1997a].) Thus, fusion induced by a pH 5.3 pulse is expected to be more sensitive to a small loss of fusion-competent HA due to prior inactivation than is fusion induced by a pH 4.9 pulse. Indeed, increasing the pH of the fusion-triggering pulse to 5.3 (Fig. 1 A, open symbols) made inactivation easily detectable. The decline in fusion with prolonged low pH pretreatment proceeded similarly for X31 and Japan HA, indicating that although low pH activates X31 HA more efficiently than it does Japan HA, the subsequent inactivation rates are not notably dissimilar.
Further evidence that Japan and X31 HA differ in their efficiency of low pH activation came from functional experiments in which the first low pH pulse was applied in the presence of a target membrane. HA-cells with bound RBCs were exposed to low pH in the presence of lysophosphatidylcholine (LPC), which reversibly blocks fusion (Fig. 1 B). Then, cells were treated with thermolysin to cleave activated HA molecules. Removal of LPC at this stage did not result in fusion, confirming that most of the low pH-activated HA molecules in the contact zone were cleaved by the enzyme.
The remaining fusogenic activity of HA was evaluated after a second low pH pulse. The majority of X31 HA molecules, but not Japan HA molecules, were restructured after the activating pulse and cleaved by thermolysin, as deduced from the decrease in fusion (Fig. 1 B, bar 4 vs. 2 and bar 8 vs. 6). Therefore, a single low pH pulse activates only a small portion of the available Japan HA, whereas most of the X31 HA is activated. Such an activation-thermolysin cleavage cycle could be repeated on Japan HA at least twice without a measurable decrease in the extent of fusion. Similar results were obtained when fusion was blocked by lowering the temperature to 4°C instead of by LPC application. In brief, functional experiments indicated that the rate of activation of X31 HA significantly exceeds that of Japan HA.
This increased efficiency of X31 HA activation was confirmed by cell surface enzyme-linked immunosorbent assay (CELISA), showing an increase in antifusion peptide antibody binding to acidified X31 HA with no measurable increase for Japan HA. CELISA-derived binding ratios of low to neutral pH HA after a 10-min application of pH 4.9 at 37°C were 2.01 ± 0.07 for X31 and 1.03 ± 0.16 for Japan, n = 3. Neither functional assays nor CELISA allowed a quantitative evaluation of the percentage of HA molecules activated under different conditions. To measure this percentage, we monitored HA activation by means of DTT-induced HA1-HA2 S-S bond reduction and HA1 release (Graves et al., 1983). The percentage of activation was detected by Western blotting under nonreducing conditions (shown in Fig. 2 A for viral particles) either as a loss in the intensity of the HA1-HA2 band or as the ratio of HA2 to the sum of HA2 and HA1-HA2 band intensities. (Both calculation methods gave statistically indistinguishable results.) A 10-min application of pH 4.9 resulted in almost complete disappearance of the HA1-HA2 band for X31 HA, compared with a minor loss of Japan HA1-HA2 (73-85% of X31 vs. 10-20% of Japan for HA-cells). More efficient activation of X31 HA than Japan HA was also found for a membrane-free preparation of bromelain-cleaved HA ectodomain (i.e., 76% vs. 31%) and for viral particles (80% vs. 40%; data in Fig. 2 A).

Figure 1. (A) The time course of Japan (squares) and X31 HA (circles) activation/inactivation at low pH in the absence of a target membrane. HA-cells were treated with a pH 4.9 activating pulse for 1 to 30 min. Then, after RBC binding, cells were treated with a fusion-triggering pulse of pH 4.9 for 2 min (closed symbols) or pH 5.3 for 5 min (open symbols). A higher degree of HA activation during the activating pulse resulted in a lower extent of lipid mixing after the triggering pulse. (B) X31 (bars 1-4) or Japan HA-cells (bars 5-8) with bound RBCs were incubated for 5 min in pH 4.9 medium containing LPC. Although no lipid mixing was observed in the presence of LPC (bars 1 and 5), its removal completely restored fusion (bars 2 and 6). In the experiments represented in bars 3, 4, 7, and 8, cells at the LPC-arrested fusion stage were treated with thermolysin to cleave low pH-activated HA. Lipid mixing was assayed either directly after LPC removal (bars 3 and 7) or after application of an additional 5-min pulse of pH 4.9 (bars 4 and 8). Points are means ± SE, n > 3.

Thus, our biochemical experiments with membrane-anchored HA and soluble HA ectodomain confirmed a lower efficiency of activation for Japan HA. The alternative possibility that low pH forms of X31 HA inactivate faster and, in addition, are more sensitive than Japan HA to both DTT and thermolysin, although feasible, seems unlikely. X31 HA and Japan HA have a similar pH dependence for activation (unpublished data; see also Korte et al., 1999). However, the rate of activation was notably different for these two strains (Fig. 2 B). For instance, a 10-min application of pH 4.9 to X31 HA-cells transformed 84% of the HA into a DTT-susceptible form. Longer exposure of X31 HA to pH 4.9, up to 1 h, did not notably increase the level of activation. In contrast, 1 h of acidification of Japan HA-cells yielded only 22% activated HA. Very slow and inefficient Japan HA activation was confirmed in viral particles, where the percentage of activated HA slowly increased with the time of incubation at pH 4.9 (51.2%, 61%, and 82% for 10 min, 6 h, and 24 h, respectively).
The leveling off of activation at pH 4.9 after the first 30 min, which was observed for Japan HA-cells but not for Japan virus, may reflect deterioration of the cells caused by long low pH treatments, visible as obvious changes in cell morphology.
In brief, low pH-triggered activation of Japan HA is notably slower than that of X31 HA. This difference was observed for the proteins expressed in the stable cell lines (HAb2 and HA300a), in viral particles, and in the solubilized HA ectodomain. In addition, the higher efficiency of Japan HA activation in influenza virus (vs. that observed in a stable cell line expressing HA) was consistent with the hypothesis that the rate of HA activation increases at the higher HA surface densities characteristic of viral particles.
HA activation increases with the increase in HA surface density
Both slow-and fast-activating strains of HA were next used to study the role of trimer-trimer interaction in HA activation. If low pH-dependent activation develops at the level of individual trimers, then increasing the number of HA trimers at the cell surface should not change the percentage of activated molecules (i.e., the ratio of activated HA to total HA). In contrast, if HA activation involves positive cooperativity, the efficiency of activation should increase with HA density.
We increased surface density of Japan HA by growing HAb2 cells in the presence of different concentrations of sodium butyrate (NaBut). Flow cytometry indicated that the surface density of HA at all NaBut concentrations varies broadly between cells. However, this assay and two other assays, trypsinization and surface biotinylation, confirmed that preincubation with NaBut shifts the distributions to higher HA densities (Fig. 3 A). The extent of HA expression promotion at 5 mM NaBut was in excellent agreement with that reported in Danieli et al. (1996) (i.e., a 4.9-fold increase in HA from 0 to 5 mM NaBut). The reason for the quantitative discrepancy between our three assays for 9 mM NaBut is unclear to us. Importantly, none of the conclusions in the further discussion depend upon the exact differences in the surface densities.
The NaBut-induced changes in surface HA expression were proportional to those in total cellular HA expression, as evidenced by the constant percentage of total HA (83-90%) that was accessible for trypsin cleavage and surface biotinylation (Fig. 3 B). Thus, NaBut-induced changes in total HA expression provide a reliable measure of the changes in surface HA.
Increasing the surface density of HA notably accelerated HA activation (assayed by DTT susceptibility; Fig. 4 A), supporting the hypothesis of HA interaction during activation. Furthermore, for a low pH pulse of a given duration (i.e., 10 min), a higher percentage of HA molecules became activated at progressively higher levels of HA expression (Fig. 4 B). Note that the HA density in HA-cells at high NaBut concentrations (e.g., 12.6 × 10³ HA/μm² at 5 mM NaBut; Danieli et al., 1996) approached that in viral particles (15-30 × 10³ HA/μm²). In cells with the highest level of Japan HA expression, the level of activation (38%) approached the level observed in viral particles (40%) (Fig. 4 B).
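As a quick consistency check on these density figures, the short sketch below (our own arithmetic, using only numbers quoted here and in Danieli et al., 1996; not part of the original analysis) shows that the 4.9-fold NaBut promotion of the ~2,500 trimers/μm² baseline lands close to the quoted 12.6 × 10³ HA/μm², just below the viral range.

```python
# Consistency check (our arithmetic, not part of the original analysis):
# NaBut-boosted Japan HA surface density vs. the density in viral particles.

BASELINE_DENSITY = 2500.0   # trimers/um^2 in untreated HAb2 cells (Danieli et al., 1996)
NABUT_FOLD = 4.9            # reported HA increase from 0 to 5 mM NaBut

boosted = BASELINE_DENSITY * NABUT_FOLD   # ~12,250 trimers/um^2, quoted as 12.6e3
viral_low, viral_high = 15000.0, 30000.0  # trimers/um^2 in influenza virions

print(f"5 mM NaBut: ~{boosted:,.0f} trimers/um^2; "
      f"viral envelope: {viral_low:,.0f}-{viral_high:,.0f} trimers/um^2")
```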
Does the cooperative activation stage precede or follow the exposure of the fusion peptide, an early sign of activation (White and Wilson, 1987; White, 1996)? To address this question, we took advantage of the fact that exposed fusion peptides can be cleaved by thermolysin. Cells with different levels of Japan HA expression were treated with a 1- or 10-min low pH pulse immediately followed by thermolysin application (Fig. 4 C). Similar to our results with DTT, we found that the percentage of thermolysin-cleaved HA, detected as a loss of HA1-HA2 band intensity, increases with increasing HA expression, indicating that HA density affects an early stage of HA activation.
Was the dependence of activation upon HA density conserved among HA subtypes? Since NaBut did not increase HA density in HA300a-cells, we used different approaches to vary the expression levels. These experiments also test whether cooperativity is artifactually dependent upon any particular way of boosting surface density. All strains and approaches yielded the same essential result: positive cooperative activation. In one approach, we varied the concentration of trypsin used to cleave X31 HA0 into the activation- and fusion-competent HA1-HA2 form and found higher levels of activation with higher numbers of activatable HA molecules per cell (Fig. 5 A). Note that the interpretation of this particular experiment is complicated by the possibility that mild trypsinization may result in partial cleavage of HA, such that some monomers in the same trimer may be present in the HA0 form and some in the HA1-HA2 form.
In another approach, and to exclude the possibility that other membrane proteins assist in activation at different HA levels, we reconstituted X31 HA into virosomes at different HA to lipid ratios. Once again, the percentage of activated HA molecules increased with increasing HA density (Fig. 5 B). In control experiments, we verified that the change in HA to lipid ratio alters the HA surface density in virosomes rather than the ratio of virosomes to protein-free liposomes. Virosomes formed with different ratios of HA and lipid were characterized by ultracentrifugation in a sucrose density gradient. For each of the virosome preparations, HA and lipid peaked in the same fraction of the gradient, with sucrose densities of 1.12 and 1.05 g/cm³ for HA to lipid ratios of ~1:200 (undiluted virosomes in which viral HA was reconstituted with no exogenous lipid added) and ~1:3,650 (lipid-diluted virosomes). These results indicated that the decrease in activation efficiency for virosomes with lower HA to lipid ratios reflects the decrease in HA concentration. Quantitative analysis of these data is complicated by the possibility that some HA molecules may be clustered in the virosome membrane rather than homogeneously distributed according to the average surface density, and by the fact that some HA may have the wrong orientation.

Figure 4. (A) [legend fragment] ... pulse ranged from 1 to 30 min. (B) Efficiency of Japan HA activation after a 10-min pulse of pH 4.9 as a function of the surface density of HA. Relative surface density of HA after cell preincubation with 0 to 9 mM NaBut was assayed by measuring the changes in total cellular HA and normalized by that in NaBut-untreated HAb2 cells. Points are means ± SE, n = 4. (C) The percentage of low pH-activated HA molecules that reached the early stage of fusion peptide exposure was assayed by means of thermolysin cleavage. HA expression in cells was altered by pretreatment with 0 to 9 mM NaBut. Cells were incubated at pH 4.9 for 1 or 10 min, reneutralized, and treated with thermolysin to cleave exposed fusion peptides. Cleavage of activated HA resulted in a decrease in the HA1-HA2 band, which was normalized by the pH 7.4 band taken as 100%.
In still another approach, we used Udorn HA, which is 96.7% homologous to X31 HA. In this case, HA density was altered by varying the multiplicity of CV1 cell infection with SV-40 recombinant virus carrying the Udorn HA gene. FACS® analysis confirmed that in a specific concentration range of recombinant SV-40 virus (0.01-1 mg/ml total viral protein), the number of HA molecules per cell was higher at a higher dose of the virus. Once again, we found that at higher HA densities, a higher percentage of HA becomes activated (Fig. 5 C).
As discussed above, one can evaluate the rate and extent of HA activation by measuring its subsequent inactivation. The relationship between HA density and activation established in biochemical assays was confirmed in functional experiments in which we measured HA inactivation. Boosting HA expression with NaBut (a threefold increase in the surface density of HA, as estimated from HA sensitivity to DTT) notably accelerated Japan HA activation/inactivation (Fig. 6). The activation/inactivation was also accelerated in X31 HA-cells with a higher density of trypsin-cleaved HA. For HA-cells pretreated with two different concentrations of trypsin, we studied the effect of an activating pulse (pH 4.9, 2 min) on fusion between HA-cells and bound RBCs after a fusion-triggering pulse (pH 4.9, 5 min). For X31 HA-cells pretreated with 5 μg/ml trypsin (10 min, 22°C), the activating pulse lowered the fusion extent from 48.0 ± 4.7 to 28.4 ± 5.4% (n ≥ 3). In contrast, the same activating pulse did not affect fusion if the cells were treated with only 0.5 μg/ml trypsin (30.9 ± 5.2 vs. 31.6 ± 6.1%, n ≥ 3). Here, as in all other experimental systems we studied, the more HA available for activation, the higher the percentage of activated HA molecules.
HA activation at lowered surface densities is promoted by the target membrane
Although we performed our biochemical experiments in the absence of RBCs, given that HA-cells form clusters, a fraction of HA could potentially interact with the membrane of an adjacent cell upon low pH application. Thus, one might hypothesize that the increase in activation efficiency at a high density of HA was mediated by HA interactions with the target membrane. For instance, concerted insertion of the fusion peptides of different trimers into the target membrane has been proposed by Danieli et al. (1996) as an explanation for the cooperativity of fusion. To test this possibility, we plated Japan HA-cells with different HA concentrations as single cells on poly-L-lysine-treated flasks (0.5 mg/ml, 30 min, 22°C). For such single cells (i.e., in the absence of a target membrane), we again found that the percentage of activated HA molecules was higher at higher HA densities (Fig. 7 A). In a parallel experiment, Japan HA-cells, also plated as single cells, were covered with fluorescently labeled liposomes containing gangliosides (Fig. 7 A). An estimate based on the level of fluorescence associated with each cell after incubation with liposomes at 4°C indicated that the cell surface was saturated with liposomes. The percentage of activated HA molecules under these conditions was independent of HA density and corresponded to that observed in the absence of a target membrane at the highest level of HA expression. Thus, the effect of the target membrane on HA activation was the same as that obtained by increasing HA density.
Discussion
Activation of influenza HA can cause the membrane fusion step of viral entry only if this activation occurs in the right place and at the right time. Premature activation and discharge of HA trimers, whether in the absence of a suitable target membrane or before assembly of the multi-trimer machine thought to be needed for fusion (Ellens et al., 1990; Gaudin et al., 1996; Plonsky and Zimmerberg, 1996; Chernomordik et al., 1998; Markovic et al., 1998), would be detrimental to viral entry because it would result in HA inactivation and hence depletion of the available pool of fusion-competent proteins. Here, we show that the degree of HA activation rises with increasing HA surface density, and we conclude that HA activation exhibits positive cooperativity. This finding suggests that the mechanism by which adjacent HA molecules effectively synchronize the release of their conformational energy is through positive cooperativity. The involvement of inter-trimer interactions in HA activation is conserved between the fast-activating H3 and the slow-activating H2 influenza subtypes. Below, we discuss the specific stages of activation that depend on trimer-trimer interactions, and the general role of concerted mechanisms in fusion complexes and other multimeric protein machines.

Figure 6. Japan HA-cells incubated with 0 (closed circles) or 2 mM NaBut (open circles) were pretreated with an activating pulse, pH 4.9. Next, RBCs were added, and fusion was triggered by a 10-min pulse of pH 5.2. As in the biochemical experiments, the efficiency of HA activation is higher for the cells with an HA expression level increased by NaBut.
Conformational change in HA depends on trimer-trimer interactions
As shown above, the propensity of HA to restructure into its low pH conformation depends on interactions between cleaved HA trimers capable of undergoing such conformational changes. These interactions may involve different domains of the HA molecule. For instance, low pH forms of the HA ectodomain interact by their fusion peptides, as can be inferred from the formation of rosette structures (Ruigrok et al., 1988). Moreover, fusion peptide interaction among neighboring HAs was hypothesized to be responsible for a measurable decrease in the lateral mobility of HA after activation (Gutman et al., 1993). Additional or alternative mechanisms of interaction at low pH might involve the kinked regions of HA2 (residues 106-112), which are responsible for the aggregation of large, membrane-bound polypeptide fragments of HA2 (residues 1-127) at low pH (Kim et al., 1998). It is also possible that the rate of HA refolding does not depend upon direct trimer-trimer interaction but rather depends on a number of adjacent trimers simultaneously interacting with the membrane in which they are anchored. For instance, multiple low pH-activated HAs might act together in inducing local bending of the viral membrane, thus bridging the gap between the virion and the target cell (Kozlov and Chernomordik, 1998). Such HA-generated membrane bending could hypothetically facilitate the activation of as-yet nonactivated trimers. Since positive cooperativity in activation was observed in the absence of a target membrane, it apparently does not require HA-target membrane interaction.
The effect of the target membrane
Although the presence of a target membrane was not required for cooperative activation of HA, membrane contacts increased the level of low pH-triggered activation at low surface densities of the protein. Since cooperativity was detected in the absence of a target membrane, one may argue that concerted HA activation is involved in the inactivation rather than in the fusion process. This argument would imply that HA-target membrane interaction affects the refolding of individual trimers by lowering the energy barrier of HA activation. For instance, the conformational change in HA may proceed beyond transient exposure of the fusion peptide only if the exposed peptides can interact with each other or if they can insert into the target membrane. This implies that the fusion peptide reaches the target membrane at the early stage of HA refolding (Stegmann et al., 1990), rather than being delivered to the target membrane by extension of the central coiled-coil core of HA in a major and irreversible rearrangement of the protein (Carr et al., 1997). Note, however, that this scenario does not explain why the efficiency of activation at a low density of HAs in the presence of liposomes coincides with that at a high density of HAs in the absence of liposomes.

Figure 7. (A) Japan HA-cells with levels of HA expression altered by pretreatment with 0 to 9 mM NaBut were plated as single cells on poly-L-lysine-treated flasks and incubated at 4°C with saturating concentrations of liposomes. HA expression was normalized by that in NaBut-untreated HAb2 cells. After removal of unbound liposomes, cells were treated with a 10-min pulse, pH 4.9 (22°C). Open circles: control, analogous cells in the absence of liposomes. Points are means ± SE, n = 4. (B) The cartoon illustrates HA enrichment in the contact zones between an HA-cell and liposomes. HA1-receptor interaction induces HA concentration in the contact zone. Therefore, as long as the contact area is less than the total area of HA-membrane, the effective HA density, and hence the level of HA activation, in the presence of a target membrane exceeds those in the absence of a target membrane. The graph represents a theoretical curve based on the estimate of HA enrichment and activation in the HA-cell-target membrane contact region for cells with low HA density. The expected level of activation, shown as a function of the ratio of the contact zone area to the total HA-membrane area, α, approaches the activation level observed for cells with high HA density in the absence of a target membrane.
The simplest and, we believe, most natural interpretation of the promotion of HA activation in the presence of the target membrane to the level observed at the highest HA densities is an enrichment of HA molecules in the contact zone due to HA1-receptor binding (Mittal and Bentz, 2001). k, the dissociation constant of HA1-sialic acid binding of ~3 mM (Sauter et al., 1992), can be renormalized to k′ ≈ 100 μm⁻² for HA and receptor concentrations in the membrane. Even though we know that the entire cell surface was covered with liposomes, the specific geometry of liposome-cell contact, and hence the percentage of the cell membrane in close contact with liposomes, is not known. If the ratio of the contact zone area to the total HA-membrane area equals α, the membrane concentration of receptor-bound HA (C_HA-R) can be described by the law of mass action:

C_HA-R = C_HA · C_R / (k′ + α · C_R),

where C_HA and C_R stand for the total membrane concentrations of HA (e.g., 2,500 μm⁻² in HAb2 cells; Danieli et al., 1996) and receptors (5 mol%, or ~80,000 μm⁻²). Since the radius of an HA trimer is ~4 nm (Wiley and Skehel, 1987), the maximal membrane concentration of HA in the cell-liposome contact zone was limited to 20,000 μm⁻². If more than 15% of the cell membrane is in the close contact zone (α ≥ 0.15), more than 90% of the HA molecules will be assembled in the contact zones, leading to significant enrichment of HA. Taking into account the experimentally observed dependence of activation on the density of HA (Fig. 7 A, open circles), for α varying from 0.1 to 0.9, the percentage of activated HA is expected to be at least twofold higher than in the absence of a target membrane (theoretical curve in Fig. 7 B). Therefore, only at the highest surface densities of HA will the efficiency of its activation in the absence of a target membrane reach the level normally observed in the contact zones.
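To make these numbers concrete, the short Python sketch below (our illustration only; the function and variable names are ours, not part of the original analysis) evaluates the mass-action relation above with the values quoted in the text, and reproduces the >90% bound fraction for α ≥ 0.15.

```python
# Hedged sketch: contact-zone enrichment of receptor-bound HA, evaluating the
# mass-action relation above with the values quoted in the text. Variable and
# function names are illustrative, not from the original analysis.

K_PRIME = 100.0      # renormalized HA1-sialic acid dissociation constant, um^-2
C_HA    = 2500.0     # total HA surface density in HAb2 cells, trimers/um^2
C_R     = 80000.0    # receptor surface density in the liposome membrane, um^-2
C_MAX   = 20000.0    # packing limit for 4-nm-radius trimers, trimers/um^2

def bound_ha_density(alpha: float) -> float:
    """Receptor-bound HA density in the contact zone for a given contact-area
    fraction alpha, capped by the geometric packing limit."""
    c_bound = C_HA * C_R / (K_PRIME + alpha * C_R)
    return min(c_bound, C_MAX)

def fraction_ha_in_contact(alpha: float) -> float:
    """Fraction of all HA trimers that end up receptor-bound in the contact zone."""
    return alpha * bound_ha_density(alpha) / C_HA

for alpha in (0.05, 0.15, 0.5, 0.9):
    print(f"alpha={alpha:4.2f}: C_HA-R={bound_ha_density(alpha):8.0f} /um^2, "
          f"bound fraction={fraction_ha_in_contact(alpha):5.1%}")
```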
Concerted HA activation at the fusion site
We hypothesize that at acidic pH, individual HA trimers first establish a transient early state (depicted as yellow in Fig. 8 B). This stage might involve a limited relocation of the HA1 tops and exposure of the fusion peptide. For individual HA trimers, this state is too short-lived to allow detection by any of our assays. Interaction between adjacent HA trimers increases the lifetime of this early state for trimers physically next to the activated ones and promotes their transition to the irreversible lowest energy state (Fig. 8, C and D, orange). As a consequence of such positive cooperativity, activation spreads among adjacent HAs, leading to the synchronized release of HA conformational energy by neighboring trimers assembled around the fusion site. The probability of interaction between two HA trimers, and thus their activation, increases with the increase in the local surface density of HA, for instance in the contact zone or in membrane domains (e.g., "rafts"; Simons and Ikonen, 1997) enriched in HA molecules. Note that both the presence of a target membrane and HA enrichment in microdomains are expected to significantly affect HA activation only at low average surface density of HA. In fact, disruption of raft microdomains by cholesterol depletion does not affect the fusion phenotype observed at the relatively high levels of HA expression achieved with either the vaccinia virus or the SV-40 transfection systems (Armstrong et al., 2000; Melikyan et al., 2000).

Figure 8. Schematic diagram showing the hypothetical mechanism of cooperative activation of HA at low pH. Low pH application triggers restructuring of HA trimers from the initial conformation (depicted as blue) to an early, transient activated form (yellow) followed by the final, lowest energy conformation (orange). (A) Membrane-anchored HA trimers before low pH application. (B-D) After acidification, inter-trimer interactions promote the transition from a transient activated form of HA to the lowest energy protein conformation, and increase the probability of activation for native HA molecules proximal to activated ones. Activation consequently spreads among neighboring HAs (red arrows). At a high local density of HA, such concerted activation leads to the synchronized release of the conformational energy by multiple trimers assembled around the fusion site.
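The spreading-activation picture in Figure 8 can be caricatured with a toy lattice simulation. The sketch below is purely illustrative and is our construction, not the authors' analysis: occupied sites stand in for trimers, a trimer activates with a small intrinsic probability per step but a much larger probability when it neighbors an already activated trimer, and the activated fraction therefore rises with site occupancy (a stand-in for surface density).

```python
import random

# Toy Monte Carlo caricature (our construction) of density-dependent, nearest-
# neighbor cooperative activation on a periodic square lattice. Occupied sites
# represent HA trimers; activation spreads preferentially to neighbors of
# already activated trimers.

def activated_fraction(density, n=60, steps=50, p_intrinsic=0.002, p_assisted=0.2, seed=0):
    rng = random.Random(seed)
    occupied = [[rng.random() < density for _ in range(n)] for _ in range(n)]
    active = [[False] * n for _ in range(n)]
    for _ in range(steps):
        snapshot = [row[:] for row in active]          # synchronous update
        for i in range(n):
            for j in range(n):
                if not occupied[i][j] or active[i][j]:
                    continue
                neighbors = [snapshot[(i + di) % n][(j + dj) % n]
                             for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                p = p_assisted if any(neighbors) else p_intrinsic
                if rng.random() < p:
                    active[i][j] = True
    n_occ = sum(map(sum, occupied))
    n_act = sum(map(sum, active))
    return n_act / n_occ if n_occ else 0.0

for rho in (0.1, 0.3, 0.6, 0.9):
    print(f"occupancy {rho:.1f}: activated fraction = {activated_fraction(rho):.2f}")
```

With these (arbitrary) parameters, the activated fraction stays near the intrinsic level at low occupancy and rises steeply once occupied sites form connected clusters, qualitatively mirroring the positive cooperativity reported here.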
Our experimental approach detected the interaction of neighboring HA trimers only at low HA densities, when the scarcity of these interactions limits the rate of HA activation. Nonetheless, these results strongly argue that low pH-triggered conformational changes of HA at higher surface densities, such as in the viral envelope, also involve inter-trimer interactions. Concentrating HA molecules in the contact zone and accelerating activation by cooperativity together ensure that the multiple HA trimers required for a proper fusion complex activate synchronously.
Conclusions
The opening of a fusion pore has been hypothesized to involve the interaction of multiple HA molecules that form a fusion complex. If indeed membrane rearrangements in fusion require simultaneous energy release by the conformational change of several HA trimers, a mechanism that minimizes dissipation of this energy by synchronizing the refolding of HA molecules at the fusion site is needed. Our work suggests that this synchronization of the conformational change of multiple HA trimers involves their concerted activation, such that interaction between adjacent HAs acts to effectively lower the energy barrier separating the initial metastable state from the final low-energy conformation. This mechanism lowers the risk that the first activated trimer at the contact site would already be discharged by the time the next trimer starts its irreversible restructuring. We speculate that this mechanism of concerted activation of individual proteins might optimize the activation potential of viral fusion proteins, lower the probability of their premature activation, and be of crucial importance for the assembly of a functional, multiprotein fusion machine. It is conceivable that the mechanism described here for HA activation applies to other multimeric complexes that operate in the plane of the membrane, such as synaptic signaling (Keleshian et al., 2000) or immune complexes.
Materials
Japan and X31 viral strains were purchased from Charles River Laboratories, where they were propagated in the allantoic cavity of specific pathogen-free eggs and subsequently purified on a sucrose gradient. SV-40 recombinant virus with the Udorn HA gene was a gift from Dr. R. Lamb (Northwestern University, Evanston, IL). Rabbit polyclonal anti-X31 HA serum was directed toward the COOH-terminal portion of HA1, whereas anti-Japan HA rabbit serum targeted the fusion peptide (Covance Laboratories, Inc.). Monoclonal HC67 antibody (Daniels et al., 1983) was a gift from Dr. J.J. Skehel (The National Institute for Medical Research, London, UK). Anti-HA monoclonal antibody FC-125 was a gift from Dr. Thomas J. Braciale (University of Virginia, Charlottesville, VA). Goat anti-rabbit IgG conjugated with horseradish peroxidase or alkaline phosphatase was purchased from Pierce Chemical Co. ECL and chemifluorescence substrates were obtained from Amersham Pharmacia Biotech. Immobilon-P filters and protease inhibitor cocktail were obtained from Millipore and Boehringer, respectively. DTT was purchased from ICN Biomedicals. Trypsin from bovine pancreas, neuraminidase from Clostridium perfringens, thermolysin (type X, P1512), the lipid-soluble probe PKH26, and disialoganglioside GD1a were purchased from Sigma-Aldrich. All other lipids were purchased from Avanti Polar Lipids, Inc.
Cells
HAb2 cells constitutively expressing Japan HA (Doxsey et al., 1985) and HA300a cells constitutively expressing X31 HA (Kemble et al., 1993) were cultured as previously described. CV1 cells infected with SV-40 recombinant virus containing the Udorn HA gene were cultured as described in Melikyan et al. (1997b). All HA-cells were prepared for fusion as described in Chernomordik et al. (1998). HA expressed at the cell surface was cleaved from HA0 to the fusion-competent HA1-HA2 form by trypsin (5 μg/ml, 10 min at 37°C, if not stated otherwise). RBCs were labeled with the fluorescent lipid PKH26. To modify HA density, the following approaches were used: (1) treatment of HAb2 cells with 0 to 9 mM NaBut 24 h before the experiment, producing an increase in HA expression, in agreement with Danieli et al. (1996); (2) variation of the conditions of trypsinization (1-10 μg/ml), altering the ratio of the HA0 form to the HA1-HA2 form; (3) variation of the multiplicity of infection with SV-40 recombinant virus carrying the Udorn HA gene; and (4) use of reconstituted viral envelopes with different protein to lipid ratios.
Measuring HA activation/inactivation by SDS-PAGE and Western blotting
HA activation was assayed by reducing the HA1-HA2 S-S bond, which is accessible only in the low pH HA conformation (Graves et al., 1983; Wiley and Skehel, 1987). Trypsinized HA-cells were incubated in citric acid-acidified PBS. Next, 20 mM DTT (20 min at 27°C, pH 7.4) was applied to release HA1 from the membrane-anchored HA2 subunit of the low pH HA. The free SH groups were alkylated by a brief wash with 50 mM sodium iodoacetamide in PBS. No additional release of HA1 was observed when the DTT concentration was raised above 20 mM (unpublished data). To study HA activation on viral particles, a viral suspension containing 1 mg/ml total protein was acidified to pH 4.9 at 22°C for 10 min, if not stated otherwise, and then neutralized to pH 7.4 and reduced with DTT. Next, the virus was alkylated and precipitated by centrifugation at 80,000 g for 1 h. In some experiments, acidified HA-cells were incubated with thermolysin (0.05 mg/ml, 10 min at 22°C) to cleave the fusion peptide of low pH HA (Wiley and Skehel, 1987). After treatment, reduced cells and viral particles were lysed in nonreducing SDS-PAGE lysis buffer (50 mM Tris-HCl, pH 7.5; 1.5% SDS; 50 mM sodium iodoacetamide; 5 mM EDTA; 1 mM AEBSF; 100 μM leupeptin; 100 μM 3,4-dichloroisocoumarin; 10% glycerol; 0.01% bromphenol blue) for 5 min with shaking, and then the mixture was boiled for 5 min. Release of the HA1 subunit or the fusion peptide in viral or cellular preparations was detected by SDS-PAGE. In quantitative Western blot analysis, proteins blotted to Immobilon-P filters were incubated in rabbit polyclonal serum (1:500 or 1:2,500) followed by goat anti-rabbit IgG conjugated with alkaline phosphatase (1:14,000). After incubation with enhanced chemifluorescence substrate, dried blots were scanned and quantified on a Molecular Dynamics scanner with the ImageQuant software package (Molecular Dynamics). Japan HA activation is presented as a ratio of HA2 to total band intensity within the sample lane, in which the level of neutral pH cleavage was subtracted from that produced by the low pH treatment. X31 HA activation was calculated as a ratio of the low pH HA0 band to the pH 7.4 HA0 band, which is taken as 100%. Compared with each other for the same strain, either X31 or Japan, the two calculation methods gave statistically indistinguishable results.
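The quantification described above reduces to simple band-intensity arithmetic. A minimal sketch follows (our function names; the intensities would come from ImageQuant-style ROI measurements of the scanned blot, and the example numbers are placeholders):

```python
# Minimal sketch (our function names) of the band-intensity arithmetic
# described above; intensity inputs are ROI values from the scanned blot.

def japan_activation_percent(ha2_low, ha1ha2_low, ha2_neutral, ha1ha2_neutral):
    """Japan HA: HA2 / (HA2 + HA1-HA2) within each lane, with the neutral-pH
    (pH 7.4) cleavage level subtracted from the low-pH level."""
    low_ratio = ha2_low / (ha2_low + ha1ha2_low)
    neutral_ratio = ha2_neutral / (ha2_neutral + ha1ha2_neutral)
    return 100.0 * (low_ratio - neutral_ratio)

def x31_activation_percent(band_low, band_neutral):
    """X31 HA: loss of the uncleaved band relative to the pH 7.4 lane (100%)."""
    return 100.0 * (1.0 - band_low / band_neutral)

# Illustrative placeholder intensities:
print(japan_activation_percent(320.0, 1480.0, 45.0, 1800.0))  # ~15% activated
print(x31_activation_percent(290.0, 1750.0))                  # ~83% activated
```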
HA expressed at the cell surface is accessible to trypsin cleavage (Clague et al., 1991). Thus, to determine the ratio of surface HA to total HA, we treated HA-cells with trypsin (5 μg/ml, 10 min, 37°C; our standard trypsinization protocol, see above) and measured the loss of the uncleaved HA0 form by Western blotting. The percentage of cleaved, and thus surface-expressed, HA (65-80% and 90% of the total HA for X31 HA-cells and Japan HA-cells, respectively) did not increase when the trypsin concentration was increased to 10 μg/ml. Thus, we assume that all the surface HA is in the HA1-HA2 form under the conditions of our experiments (if not stated otherwise). An alternative way to determine the surface density of HA was by cell surface biotinylation. Cell surface labeling of 0-9 mM NaBut-treated HA-cells was performed with 0.5 mg/ml EZ-Link Sulfo-NHS-Biotin (Pierce Chemical Co.) for 30 min at 22°C. Labeled cells were lysed in buffer containing 20 mM sodium phosphate buffer, pH 7.5, 500 mM NaCl, 0.1% SDS, 1.5% Triton X-100. The lysate was incubated with UltraLink Immobilized Streptavidin Plus (Pierce Chemical Co.) for 1 h at 4°C. Immobilized Streptavidin with bound proteins was washed four times in lysis buffer, and then biotin-labeled proteins were liberated from Streptavidin by boiling in SDS-containing sample buffer and analyzed for HA content by Western blotting. In parallel, we also determined the amount of nonbiotinylated, Streptavidin-unbound HA in the supernatant to estimate the percentage of surface HA among total cellular HA. To minimize the NaBut effect of slowing the rate of cell division (a 1.5-fold higher cell count in 0 mM vs. 9 mM NaBut 24 h after the treatment), each SDS-PAGE sample was normalized by the total protein content.
For Japan HA-expressing cells, the percentage of surface HA among the total cellular HA did not vary when the level of HA expression was boosted by NaBut. Therefore, the intensity of the HA1-HA2 band in lysates of ~5 × 10⁵ HAb2 cells treated with different concentrations of NaBut, normalized by that of untreated HAb2 cells, was used in Fig. 4, B and C, and Fig. 7 A as a measure of the increase in HA surface density relative to the level of HA expression in untreated HAb2 cells: ~2,500 trimers per μm² (Danieli et al., 1996).
Surface HA expression at different NaBut concentrations was also evaluated by means of flow cytometry. Cells were labeled with the anti-HA monoclonal antibody FC-125 carrying a Cy5 tag (a gift from Dr. Mukesh Kumar, National Institutes of Health [NIH], Bethesda, MD). A FACSCalibur® with the CELLQuest software package (Becton Dickinson) was used to record the mean fluorescence of Cy5-positive cells with different HA expression levels.
Measuring HA activation/inactivation by CELISA
The high degree of sequence homology (i.e., 74%) between the X31 and Japan HA fusion peptides allowed the use of rabbit serum raised against the Japan HA fusion peptide on both strains in a CELISA. The CELISA assay was performed as previously described. In brief, surface HA in HA300a or HAb2 cells was reacted with a 1:100 dilution of the antiserum for 1 h at 22°C.
Measuring HA activation/inactivation by cell-cell fusion
Fusion was assayed by fluorescence microscopy as PKH26 transfer from RBCs to unlabeled HAb2 or HA300a cells, as previously described. Data were quantified as the ratio of dye-redistributed bound RBCs to the total number of bound RBCs. HA activation was initiated by a pH 4.9 pulse (the activating pulse) applied to HAb2 or HA300a cells in the absence of RBCs. Next, RBCs were added and allowed to bind to HA-cells; the second low pH pulse (the fusion-triggering pulse) followed. Fusion was measured 20 min after the triggering pulse. Longer incubations at low pH (i.e., 30 min) did not increase the extent of fusion. In general, a higher degree of HA activation after the activating pulse resulted in greater inactivation and lower fusion after the fusion-triggering pulse.
In Fig. 1 B, HA-cells with bound RBCs were treated with thermolysin (0.05 mg/ml) for 20 min at 22°C at the LPC-arrested fusion stage (Chernomordik et al., 1997). Cells were triggered to fuse by application of a low pH medium at 22°C in the presence of 285 μM lauroyl LPC, which reversibly blocked fusion. LPC removal 30 min after the low pH pulse gave the full extent of fusion.
Udorn HA expression
Infection with SV-40 recombinant virus carrying the Udorn HA gene (Melikyan et al., 1997b) was performed on confluent monolayers of CV1 cells at 0.01, 0.1, and 1 mg/ml total viral protein. After a 1-h viral adsorption period, the viral suspension was diluted 1:5, and cells were incubated at 37°C for the next 48 h. Expression of HA at the surfaces of uninfected (i.e., control) CV1 cells and CV1 cells infected with SV-40 at different levels was evaluated by flow cytometry with HA-specific HC67x monoclonal primary antibodies.
Virosomes
Virosomes from X31 influenza virus were prepared as described in Bron et al. (1993). In brief, viral particles were solubilized in 100 mM C12E8 and reconstituted by detergent removal with BioBeads SM2. Approximately 55% of the viral HA and 42% of the phospholipid, relative to the starting material, were recovered in the virosomes, as evaluated by means of quantitative immunoblotting and by measuring the fluorescence of virosomes formed from viral particles prelabeled with rhodamine dipalmitoyl phosphatidylethanolamine (Rho-PE, at a final concentration of 0.7 mol%). This recovery efficiency was in agreement with that reported in Bron et al. (1993). To lower HA density, C12E8-solubilized virus was supplemented with different amounts of the lipid mixture egg phosphatidylethanolamine/egg phosphatidylcholine/cholesterol (1:1:1) including 1.4 mol% Rho-PE. Influenza virus is estimated to have 500-1,000 trimers per viral particle of ~100-nm diameter (Taylor et al., 1987), i.e., 100-200 lipid molecules per trimer. Assuming that virosomes without exogenous lipids have an HA trimer to lipid ratio of 1:200, which corresponds to 15,000 HA trimers per μm², the ratios for the two preparations of "diluted" virosomes formed with exogenous lipid were 1:3,650 and 1:1,100, which correspond to densities of 820 and 2,800 trimers per μm², respectively. HA incorporation in reconstituted vesicles was readily assessed by means of equilibrium density-gradient analysis using a 2.5-40% linear sucrose gradient. Fractions were collected and analyzed for protein and lipid content by means of Western blotting and fluorescence scanning. The amount of HA in the bottom fraction never exceeded 3.5% of all HA in the sample, indicating that the amount of nonreconstituted HA (HA rosettes) in our preparations was negligible.
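The reported densities follow directly from the trimer-to-lipid ratios: surface density scales as the baseline density times the baseline ratio divided by the diluted ratio. A quick Python check of that arithmetic (a sketch of ours; the function name is not from the paper):

    # HA surface density scales inversely with the lipids-per-trimer ratio.
    # Baseline: 1 trimer per 200 lipids corresponds to 15,000 trimers/um^2.
    BASELINE_LIPIDS_PER_TRIMER = 200.0
    BASELINE_DENSITY = 15000.0  # trimers per um^2

    def trimer_density(lipids_per_trimer):
        """Trimers per um^2 for a given lipids-per-trimer dilution."""
        return BASELINE_DENSITY * BASELINE_LIPIDS_PER_TRIMER / lipids_per_trimer

    for ratio in (3650.0, 1100.0):
        print(f"1:{ratio:,.0f} -> {trimer_density(ratio):,.0f} trimers per um^2")
    # Prints ~822 and ~2,727, consistent with the reported (rounded)
    # 820 and 2,800 trimers per um^2.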
Liposomes
Liposomes made of distearoylphosphatidylcholine/cholesterol/Rho-PE/ganglioside GD1a (49.5:40.5:5:5 mol%) were prepared by extrusion through a 100-nm Nucleopore filter. The size of the extruded liposomes (i.e., less than 100 nm in diameter) was verified by means of quasi-elastic light scattering. Liposomes (~0.5 μmol total lipid) were incubated with HA-cells (10^6 cells) at 4°C for 60 min. After the removal of unbound liposomes, cells were treated with a 10-min, pH 4.9 pulse at 22°C, and the percentage of activated HA was assayed by measurement of protein sensitivity to DTT.
The degree of HA activation, the extent of NaBut-induced promotion of HA expression, and the extent of fusion varied from day to day, possibly because of variation in the level of HA expression. Each experiment presented here was repeated several times, and all functional dependencies reported were observed in each experiment. The data shown in the figures are for the representative experiment or, if shown with error bars, for results averaged over at least three experiments. | 2014-10-01T00:00:00.000Z | 2001-11-26T00:00:00.000 | {
"year": 2001,
"sha1": "d5dd89c8eb677d0cb4da094a5a6f406c3d366e1e",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/155/5/833/1300402/jcb1555833.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "9dfa6950a7273b5e16f0b169916ea2ca8c878bd0",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
204391891 | pes2o/s2orc | v3-fos-license | Positive Youth Religious and Spiritual Development: What We Have Learned from Religious Families
In this article, we highlight the contributions of the findings from a branch of the American Families of Faith national research project that pertain to positive religious and spiritual development in youth. We present detailed findings from six previous studies on religious youth and their parents from diverse faith communities (various denominations in Christianity, three major branches of Judaism, and two major groups in Islam). We discuss what our findings suggest for positive religious/spiritual development, particularly in a family context. Finally, we suggest several ways to strengthen the literature on development in youth by exploring positive religious/spiritual development in relation to (a) social and political activism, (b) popular media and music, (c) participation in secular activities (e.g., sports, arts, gaming), (d) wrestling with BIG questions (i.e., questions involving Being, Intimacy, and God), (e) conversion and disaffiliation, (f) interfaith knowledge and experience, (g) impactful personal experiences, (h) volunteerism and service, (i) religious rituals, ceremonies, and traditions, (j) mental illness, (k) mindfulness and meditation, (l) temperament and personality, (m) agency and personal choices, (n) sexual orientation and experiences, and (o) generative devotion.
[1] For information on the American Families of Faith project, please see http://AmericanFamiliesofFaith.byu.edu.

[2] This review article overlaps with other articles we have already published because it is a review of some of our previous work, with a focus on our research that has explored and examined different aspects of positive religious/spiritual development. A central feature of in-depth qualitative work like ours is that the core findings are illustrated by nearly 100 direct quotes related to religious/spiritual development from the parents of youth and from youth themselves. We are pleased to present here a composite of several of the most important findings that our small body of related empirical studies on youth has yielded.
Sampling and Participants
Interviews used in the published studies summarized in this article were conducted as part of the American Families of Faith project (see http://AmericanFamiliesofFaith.byu.edu). The project is an ongoing effort to explore the nexus of religion and family relationships. A strengths-focused approach was employed with exemplar families (Damon and Colby 2013). Religious families were intentionally sampled in a two-stage selection process. First, religious leaders were asked to recommend families in their congregations they considered "strong in their faith" and "successful in their family relationships" to be interviewed. Second, recommended families were contacted to assess their willingness to participate. More than 90% of the referred families consented to be interviewed. In addition to the aforementioned process, snowball or participant referral sampling was sometimes employed among more difficult-to-access faiths (e.g., Islam, Orthodox Judaism). Overall, as of 2018, these procedures resulted in a total of 198 families (N = 476 individuals) being interviewed.
Samples for youth-focused studies. For this article, of the six studies we summarize below, the first four were focused on youth and the latter two were focused on parents. The samples in the first four studies included 49-55 families with 80-84 youth (depending on the selection criteria for each of the four studies). To recruit the participant families that included youth, we contacted religious leaders of different faith communities in New England and northern California and asked them to recommend families in their faith community with at least one adolescent child and whom they felt well represented their faith community in terms of religious activity level, devotion, and practices of the faith tradition. These families were contacted and asked to participate in the study.
The families were from three major branches of the Abrahamic faiths: (1) Christian (Baptist, Catholic, Christian and Missionary Alliance, Christian Science, Congregationalist, Episcopalian, Greek Orthodox, Jehovah's Witness, Latter-day Saint, Lutheran, Methodist, Pentecostal, Presbyterian, and Seventh-day Adventist); (2) Jewish (Conservative, Modern Orthodox, Reform, and Hasidic); and (3) Muslim (Shiite and Sunni). The adolescents were interviewed in a family group setting including the parents and any adolescents in the family who were available and consented to be interviewed. In any given family, the number of interviewed adolescents ranged from 1 to 5, with the mean number of siblings per family group being 1.6. The samples for the four studies that focused on youth included 80-84 adolescents (41 female, 39 male; age range 10-21 years; M age = 15.1).
In an effort to gather the richest data possible about the adolescents' lives, the adolescents and their parents were interviewed in their homes. This setting allowed the interviewer to "triangulate or obtain various types of data on the same problem, such as combining interview with observation" [3] (Corbin and Strauss 2008, p. 27). The adolescents and the parents were given the opportunity to respond to the questions, providing multi-respondent perspectives.
The interview guide for the adolescent interviews consisted of 26 open-ended questions that were used to guide the conversation. Often follow-up questions were used to clarify the adolescents' and their parents' responses to the original questions. The interviews were recorded and transcribed verbatim. This project was completed with IRB approval.
Summary Findings from Six Studies
In this section, we briefly present major findings from each of six published articles and illustrate the findings with some quotes from youth and their parents that were present in the original articles. [4] We use the original article titles as the headers for the subsequent six sections.

[3] Each of the original six articles utilized qualitative approaches to data analysis that included modified grounded theory, content analyses, and theory-based analyses. For details, please see the methods sections of the articles.

[4] We encourage readers who would like "the rest of the story" to read the six articles from which we draw this summary essay. In presenting summaries of major findings, we have left out most explanatory and transitional statements present in the original articles. Additionally, to illustrate each theme in the findings, we only include one or two quotes from youth and parents, while in the original articles there were typically several for each reported theme or finding. We discuss the articles in order from earliest published to most recently published. We emphasize quotes from youth but also have included quotes from parents.
Study 1. Talking about Religion: How Religious Youth and Parents Discuss Their Faith
In this study (Dollahite and Thatcher 2008), we analyzed "religious conversations" (meaning conversations about or influenced by religion) using grounded theory methodology. We were guided by the following research questions: (a) What is the context of parent-adolescent conversations on religious issues? (b) What are the processes involved in parent-adolescent conversations involving religion? (c) What qualities of conversational processes are most beneficial for the religious exploration and development of youth? [5]

Findings included responses from parents and youth, including comments they made about one another. Parents and youth reported both positive and negative elements of their religious conversations, and they were frank about both their frustrations and satisfactions with such discussions. Two main conversational processes were identified: parent-centered and youth-centered. In this summary, we focus on youth-centered conversations.

[5] More specifically, we asked 26 questions of parents and youth covering various topics on how religion influenced parent-child relationships. The following questions were most relevant to those research questions: How do your parents share their faith with you? When you talk together as parents and children about religion, how does the conversation go? How have your parent-child conversations about religion influenced parents and children? What do you consider to be the most important things for you to be or do as a mother or father of faith? As parents, how do you share your faith with your children?
Many parents expressed understanding of the needs of their children during religious conversations and tailored the conversations to try to meet those needs. These more transactional approaches encouraged the adolescents to be more active in the conversations.
Youth-centered conversations had the following elements: (a) youth talks more and parents listen, (b) youth seeks and receives understanding from parents, (c) religion is related to the youth's life, (d) conversation is open, and (e) parent-youth relationship is nurtured.
Youth talks more and parents listen. Rachel, a Hasidic Jewish mother, said, "We find the older kids get, they have so much to say, and . . . after a whole day of school, they come home and they don't want to hear us talk, they want to talk." Kira, a Lutheran mother, explained, "I've learned that less words are better."

Youth seeks and receives understanding from parents. Mandy, a 15-year-old Christian daughter, said, "They're always willing to talk to me about any questions I ha[ve]. [T]hey explained what they believed to me." Sophie, a Presbyterian mother, said, "Sometimes I have an answer for him [adolescent child] and sometimes I go, 'You know, you've got a point.'" Kelsey, a 13-year-old Orthodox Christian son, commented, "Sometimes my parents don't know the answer so then it's . . . a discussion because they don't have the answer to give me." Yuusif, an East Indian Muslim father, said that one way he approaches religious conversations with his children is by "explaining to them in a way they can understand . . . and reason[ing] with them."

Religion is related to youth's life. Scott, a 14-year-old Catholic son, said about his parents: "I just feel like they always try to bring religion into our lives and to make us better." Paul, a 46-year-old Christian Scientist father of two, said, "I think the time where it comes most to its surface is applying what we know and believe at times of conflict." One Muslim father said, "[W]hen something happens by way of a trial, [we show them] how to be patient and also to be assured that there's going to be good in that too, because it has come from God." Shawn, a Baptist father, speaking of family devotions, said: "There's always the challenge of . . ."

Conversation is open. Aisha, a 46-year-old African-American Muslim mother of 11, said: "We talk a lot. We have very in-depth conversations because . . . they're very verbal. They have their opinions. . . . [T]hey're allowed to express themselves, even if they disagree with us."
Arella, a 42-year-old Conservative Jewish mother of two, said, "Jews are very open. . . . They're out there. No one holds back." Esther, a 12-year-old Conservative Jewish daughter, explained, "Well, it's kind of a stereotypical thing that we [Jewish families] argue a lot, but it's true."

Parent-child relationship is nurtured. Dawuud, a Muslim father, reported his desire to be "constantly alert with them and close to them in understanding what they're going through." Amy, a 45-year-old Baptist mother of two, said she tries to compliment her kids and be a friend: "I [am] trying to encourage them and to just let them know how much I respect and admire them and appreciate them as people. . . . I still play with my kids; and I'm very affectionate and I hug them. And even though I'm their mother, I'm also their friend."
Jack, an 18-year-old Baptist son, said that the parents of some of his friends neglected the parent-youth relationship in their efforts to share their faith: "I've seen some of my friends . . . where parents are slamming Bible verses in their face, and really not loving them, not helping them grow. It's more like a forceful thing, at unnecessary times, when it really would have been helpful just for them to sit down and talk with their kid."
Study 1's implications for positive religious/spiritual development. The core concept Study 1 yielded was that when parent-youth religious conversations are youth-centered, the emotional experience is more positive for parents and youth than when they are parent-centered. This concept is consistent with previous theory and findings indicating that the quality of parent-child interactions matters more than the content of conversations in facilitating positive religious/spiritual development in youth. The findings of Study 1 supported a core process of Generative Devotion: generative family conversations. In order for most youth to experience positive religious/spiritual development, they need to be involved in meaningful religious conversations with their parents and other family members. To the extent that those conversations are generally consistent with the features of youth-centered conversations, youth are more likely to report feeling positive about their faith, their parents, and the religious conversations they engage in with their parents. Growth toward Generative Devotion is more likely when adults engage youth in conversations about religious/spiritual development in ways that honor the agency of youth, respect the opinions and emotions of the youth, and are oriented toward building and strengthening relationships with God, with family members, and with others in and out of the faith community. In a phrase, Study 1 indicates that to foster positive religious/spiritual development, parents need to listen more and preach less, including when conversations involve religion.
Study 2. Giving up Something Good for Something Better: Sacred Sacrifices Made by Religious Youth
One nearly universal aspect of religion is that it tends to ask something of adherents that takes them outside or beyond themselves. Parents are expected to make sacrifices for their children, but in contemporary Western culture parents are rarely encouraged to ask their children to make meaningful sacrifices. Study 2 explored the questions: (a) What sacrifices are highly religious youth making for their faith? and (b) Why are they willing to make sacrifices for their faith? Analyses indicated that adolescents reported sacrifices in five areas, briefly outlined next.

Feeling affective benefits. When asked why he was willing to sacrifice for his faith, a 16-year-old Presbyterian male answered, "I think that it's because when all is said and done and when the parties are over and the day after, it feels so right and I feel so thankful for my decisions [not to party]. And I judge a lot of my decisions in the past by how I felt the day after I made the decision."

Study 2's implications for positive religious/spiritual development. Study 2's findings indicate that making religious sacrifices, including financial ones (Marks et al. 2009), remains influential in the lives of many religious adolescents. Furthermore, we see that some adolescents are making these sacrifices for their faith in visible, public ways, including avoiding partying and activities not consistent with their faith, as well as sacrificing to honor their faith's holy days (e.g., Sabbath). Additional data not reported in the present article indicate that youth are also making these kinds of faith-related sacrifices in private ways, such as engaging in personal prayer or scripture study instead of watching TV or playing video games. The findings of Study 2 should be encouraging for religious parents and leaders since they indicate that many religious youths take their spiritual and religious identities seriously enough to make significant sacrifices for God and for their faith, both publicly and privately. The findings suggest that religious parents and leaders should help youth to identify and consider the reasons why they are more or less willing to make sacrifices for religious reasons and how those sacrifices may influence their religious/spiritual development. Adults who can help youth to both reflect upon and strengthen their religious commitments may be better at supporting youth as they live their faith. Growth toward Generative Devotion is more likely when youth are non-coercively willing to enact their inner spiritual values, identities, and commitments by making meaningful sacrifices for God, for others, and for future generations.
Study 3. Anchors of Religious Commitment in Adolescents
Our third study (Layton et al. 2011) explored adolescent religious commitment. Based on interviews with youth and their parents, we proposed a new construct, anchors of religious commitment, to describe what youth committed to as a part of their religious identity or, in other words, where they focused their commitments. Commitments connected the youth to someone or something else. Study 3 identified seven categories of anchors of religious commitment. Next, we briefly report the major findings for each anchor, including some explorations of variations of each anchor.
Commitment to religious traditions, rituals, and laws. The most frequently mentioned anchor of youth religious commitment was commitment to religious traditions, rituals, and laws. One 10-year-old Jewish daughter articulated various celebrations that were meaningful to her: "I like Hanukkah. . . . [I]t's just, it's fun to be able to light your own menorah, and to invite friends over to come do it with you . . . and Purim is fun because you get to dress up, and Pesach [Passover] is fun because the whole family's there and all that sort of stuff." A Catholic mother of a 15-year-old son and a 13-year-old daughter said that religious rituals were "the little things that we do that have that spiritual meaning to us." A 15-year-old Catholic son said that religious rituals were important to his personal religious identity: "Compared to other kids my age, I think I'm pretty religious, 'cause I go to church every Sunday. I pray every day. I altar serve. I go to CCD [religious instruction]. And religion is a big part of my life."
An 18-year-old Muslim daughter, asked about her commitment to cover herself with the hijab, said, "I look at it basically as just obeying the laws of my faith."

Commitment to God. Some youth said they were committed to God as a source of authority, as did a 19-year-old Jehovah's Witness daughter: "We appreciate the fact that He's our Creator, and who better to give us guidelines for how to live our lives?" A 15-year-old Catholic son said, "Being religious is kind of like you have another friend. It's God and Jesus; you just feel like you're able to lean back on someone, if the going's tough." Speaking of his commitment to marry a Christian woman, a 21-year-old Baptist son said, "Well, I think, again for reasons of trust. First of all, I think it would be the most pleasing to God. It's what He would want." A 15-year-old Christian daughter said, "Well, now that I'm thinking about getting a career and everything, He's my counselor, and so I would like to do something that would be in His will."

Commitment to faith tradition or denomination. Commitment to their particular faith tradition as an anchor was expressed by some youth, such as a 15-year-old Muslim son who said, "Islam is not just something you're doing at certain times of the week . . . It's real, like you do it all day . . . it's part of what you do. Part of the way you eat, the way you treat other people."
A 20-year-old Jewish son described himself as "taking on an ethical and moral framework provided by Judaism." A father of a 15-year-old Lutheran daughter said, "The values that we like are in the church system. We're not really happy with the values that we see in American culture, a lot of consumerism [and] . . . greed. . . . [S]o we bring them to church, to get this whole other value system."
A 20-year-old Orthodox Jewish son spoke of his father's commitment to Judaism and how this had influenced him: "[H]e's someone who feels very much sensitive to the Holocaust and having lived . . . just after it that he felt . . . the weight of all of the sacrifice that had been made for three thousand years so that a father could pass to their son . . . the knowledge that we're Jewish and this is what it means."

Commitment to faith community. A 20-year-old Lutheran daughter said, "There's a lot of strength that I draw from being able to share communion with other believers and the reminder and the forgiveness that comes through that is definitely . . . strengthening." A 15-year-old Episcopalian son said, "When I talk to my friends, if I say I have to go to church or something, they immediately think it's a bunch of old people and they think I'm just being dragged along. But there's a lot of people [there]; and it's a fun place. . . . I look forward to going to church to see those people."
When asked about her choosing to be an altar server, a 14-year-old Catholic daughter said, "I just like serving the church. It makes me feel good that I'm serving the community." A 14-year-old Jewish son discussed why he experienced a renewed commitment to his faith: "The first time when my parents took me to [synagogue], my mom came downstairs and was dancing around the bimah [Torah podium]. . . . I was having a fun time. And so after kiddushin [prayer of sanctification] and everything else, after dinner I said to my mom, 'Can we come back here again?' And so that's when I started getting more and more religious."
Commitment to parents. Another common anchor of commitment for the youth was their parents. A 20-year-old Lutheran daughter said, "Dad's the spiritual head of the family." A 15-year-old Episcopalian son said, "My mom wants me to go to church, and my dad. . . . [S]ometimes I'll want to hang out with a friend on Saturday night, but I have to go to church the next morning. And that's not an issue, I'm going to church the next morning."
A 14-year-old Latter-day Saint son mentioned his parents when asked why he doesn't drink religiously proscribed beverages when he is with friends: "They've taught me not to do that and I respect them." A 17-year-old Muslim daughter mentioned her respect for her parents and Islam: "I want to be able to fulfill my duty in Islam upon my parents . . . and I wouldn't want them to be questioned about why wasn't I obeying the rules of God."

Commitment to scripture or sacred texts. Another common anchor of religious commitment was sacred texts. An 18-year-old Baptist daughter spoke of how her sacred texts gave her answers to important questions: "I just remember sitting in English class last year and we were discussing a lot of things and I just remember sitting there thinking how confused I'd be on this earth if I didn't have the Bible and God's standard and morality to live by."
Commitment to religious leaders. The last anchor of religious commitment evident in Study 3 was commitment to religious leaders. The two main forms of this commitment were to leaders as a source of authority and as a relational support. Referring to the authority of a religious leader, a 13-year-old Latter-day Saint son said, "The prophet, he's the guy who's in charge of our whole church for the whole world, [has] asked us not to [drink alcohol] and so that kind of guides our life." A 14-year-old Orthodox Christian daughter discussed how, when she is an adult, she wants to have a meaningful relationship with a religious leader: "I [will] . . . consult with a spiritual father [my church leader]. . . . [I want to] have that base and [those] connections with the church."

Study 3's implications for positive religious/spiritual development. The findings of Study 3 indicated that there are many "anchors" that enhance, facilitate, and empower the religious and spiritual commitments of youth. This should be encouraging for religious parents and leaders since it implies that there are many ways that youth might feel connected to their faith. Adults who understand this might consider how the youth they know and love are anchored to their faith and whether there may be ways to build on those connections and commitments as the youth develop. Growth toward Generative Devotion is more likely to occur if youth have more (and deeper) anchors of religious commitment. Generative adults can inquire of youth what they consider to be the most meaningful anchors of their spiritual and religious commitments and support them in their efforts to deepen, strengthen, and find meaning in those connections.
Study 4. Religious Exploration among Highly Religious American Adolescents
In some ways, of the six studies summarized in the present article, Study 4 (Layton et al. 2012) is the most relevant to issues of religious/spiritual development in youth because it provides information on how youth engage in spiritual and religious exploration as they develop. Issues surrounding religious commitment and exploration are important to adolescents, parents, religious leaders, and the researchers who study them. This study broadened our understanding of religious exploration to include catalysts of exploration and strategies for exploration that involved using established commitments as resources in the exploration process. The insights gained from interviewing Jewish, Christian, and Muslim adolescents suggested the importance of moving beyond conceptualizing religious exploration as merely a matter of whether or not adolescents have significant religious doubts. While doubt matters in some cases, it does not fully explain the processes in active exploration or the processes through which doubts arise and are addressed in religious youth.
The relationships in the lives of youth reportedly provided living models and exposure to different beliefs and manifestations of religious teachings. Youth explored the religious beliefs and practices they had grown up with and tended to compare them with who they were and who they were becoming. These explorations may or may not have been considered doubts by the youth, but they were an important part of their religious identity formation.
Study 4's Catalysts of Religious Exploration
Study 4's analyses identified six catalysts that led adolescents to think about, question, doubt, and/or experiment with their faith in greater depth, as outlined next.
Examples of different ways. A 16-year-old Presbyterian son stated, "Recently a good friend of mine . . . converted to Buddhism . . . He decided to do this simply because of research and trying to figure [things] out. To him, the Bible didn't quite make sense and the teachings of the Buddha made much more sense to him. . . . [That has got me] thinking about what I want to get out of my religion."

Learning new things. A 12-year-old Jewish daughter spoke of the things she had learned in her Torah class about the story of Noah and the ark, which stretched her to ask questions about things she had never considered before.
Normal development. An 18-year-old Baptist son discussed his religious/spiritual development by saying, "I'm still a work in progress. . . . I'm convinced more and more it's just a continuing process."

Times of stress or crisis. A 15-year-old Lutheran daughter spoke of prayer during hard times: "I mean anytime we have a problem . . . we come home and we [are] like, 'I can't take it anymore. This is too hard' and we always end up with, 'Well, what does the Bible say about it?' Then it goes to prayer."

Leaving home. An 18-year-old Baptist son described the impact leaving for college had on him: "This year in fact especially, like entering the University . . . and applying to be a philosophy major, lots of questions come up. I keep having to ask myself, 'What do I really believe? What do I believe about this?'"
Study 4's Strategies for Religious Exploration
In this study, we also learned about five diverse strategies of religious exploration that youth reportedly employed, as addressed next.
Asking questions and having conversations. Youth spoke of the amount of freedom of individual thought that their parents encouraged. A 17-year-old Baptist son said, "First, one of the key things that my parents did, which I am very grateful for, is they . . . give us a good amount of freedom to think, to process."

Pondering and self-reflection. One 18-year-old Baptist son discussed his use of the process of self-reflection in his religious exploration by explaining, "I think a lot of my friends and people that I work with are sort of operating, thinking short term, not thinking about consequences. But I think one thing that I've been really challenged to think about [is] . . . what are the consequences of these actions?"

Having personal experiences. Some youth mentioned personal experiences with their faith development, including a 20-year-old Muslim daughter who said, "I know that if I get lazy and if I don't want to pray for a week or . . ."

Learning from experiences of others. A 17-year-old Baptist son mentioned that he learned from watching how his mother worked through the death of her brother. He reported, "Seeing my mom going through these stages in life with her brother dying, it really teaches you how important . . . your [faith] community, your immediate family is."

Appealing to authority. A 16-year-old Presbyterian son said of his parents, "I go to my parents and they seem to be the wisdom that is passed down to me so that the contradiction can be resolved." A 16-year-old Latter-day Saint son also described how the experience of his parents raising his older siblings helped them guide him better: "My parents have had a lot of experience. They can offer advice that really helps in certain situations. . . . I'm lucky. 'Cause I'm the last one, I think the other three kids helped them to be able to guide me better."

The five preceding strategies of religious exploration among youth are of value when considering the important issue of whether youth choose to remain in or leave their familial faith.
Study 4's implications for positive religious/spiritual development. One of Study 4's most important findings was that commitments (or "anchors" as discussed in Study 3) seem to be valuable resources that adolescents use as part of the ongoing process of religious exploration. We also learned that religious exploration is not typically a single event where all parts of religious belief and identity are put on the table at once. Instead, typically it is a process where youth tend to explore certain parts of their religious identity while holding others constant as commitments. This approach to understanding adolescent religious identity and exploration is novel. These findings also suggest that parents and youth leaders can expect and should support an active process of religious and spiritual exploration among their youth in their families and congregations. Adults who understand that it is normal and healthy for such exploration to occur are more likely to provide youth the space they need and want, while being there as a stable, supporting, and faithful resource for them. Growth toward Generative Devotion is more likely when adults (a) assist youth to learn how to hold to core religious commitments while seeking answers to religious questions, (b) encourage youth to stay connected with religious leaders while searching for resolutions to their religious doubts, and, especially (c) encourage them to remain in positive relationships with family members while exploring their religious identity.
Study 5. Beyond the Bucket List: Identity-Centered Religious Calling, Being, and Action among Parents

Study 5 (Dollahite et al. 2018b) explored the answers given by practicing Christian, Jewish, and Muslim parents of adolescents to the question, "What do you consider to be the most important things for you to be or do as a mother/father of faith?" Through this question, we explored various dimensions of identity-centered religious calling, being, and action among religious parents regarding their parenting.
In Study 5, there were three primary findings. Parents' responses focused on (a) being an example, being authentic, and being consistent; (b) providing support, love, and help; and (c) teaching values, tradition, and identity. Each category and selected subcategories will be discussed below, with illustrative quotes from the religious parents we interviewed.
What Religious Parents Felt Called to Be
Parents described what they believed they needed to be in three respects: be an example, be authentic, and be consistent.
Be an example. A Latino Catholic father spoke of his own father's example and how he wanted to also be "a good example" for his kids: "I had a fantastic example in my father. My father was, and he is still, an incredible example for me. And I think that if I can pass some of that to my children through my own example, through talking or teachings or verbal example [that would be great]."

Be authentic. A Conservative Orthodox Jewish mother spoke of authenticity: "I presented to [our children] an ever-expanding view of Judaism and that I was always honest about my anger with the religion, anger with the Rabbis, my own distress about the religion. [I wanted them to know] that whatever I chose to give them from the more Orthodox approach was something that I really believed in."

Be consistent. A Methodist father shared, "[I want] to be consistent, even if it's a consistent pain in the neck." A Lutheran father similarly said, "[I want to] set an example . . . in my daily life. Just to be somebody who's got . . . a soul . . . who's not interested just in the short-term things. . . . That and family, and all these things are important. . . . That's important to me."

In addition to being certain things for their children, many parents also discussed domains where they felt an obligation, even a sacred duty, to provide, as discussed next.
What Religious Parents Felt Called to Provide
Mothers and fathers described what they felt they were called to provide in three areas of life: support, love, and help.
Provide support. A United Church of Christ father spoke about being present in his children's lives: "I think a lot of it is just being there and spending time with my children, and listening to them and playing with them. Challenging them to do better." A Catholic father, in discussing being supportive, added, "Certainly you've got to be the motivating force that takes them through a lot of those things they don't want to do."

Provide love. A Latter-day Saint father spoke of loving his children: "I want my kids to know . . . even though I . . . have frailties of losing my temper, raising my voice inappropriately, that I love them . . . I love them . . . and care about them."

Provide help. An East Indian Muslim father said that part of his job is "to always look for [my children's] welfare and be available to them, to help them through the various situations they face." A Conservative Jewish father said, "The most important thing is to make sure the kids are safe. . . . It's a dangerous world and you want to protect your kids."
What Religious Parents Felt Called to Teach
Religious parents felt called to teach their children in three domains: religious values, the faith tradition, and religious identity.
Teach religious values. A Baptist mother said, "I've tried to be honest about teaching them right from wrong from a Biblical perspective." A Catholic mother explained her desire as a mother: "Teach them values I'd want them to have forever and ever." A Pentecostal father said, "If they have struggled, the biggest struggle . . . has been because they look different, they do different things, they dress differently. And they have considerable peer pressure. For example, they don't date, they don't drink, they don't dress in certain ways, so there are some restrictions that they feel. And we have tried to provide them some alternatives for entertainment and things, and also to explain to them that: 'Yes, you are different than everybody else, and you should be proud of the difference.' . . . Hopefully we give them enough support that they can be themselves."
Study 5's implications for positive religious/spiritual development.
In Study 5, we explored what exemplary mothers and fathers of various faiths considered most important for them to be (being) and to do (action). Mothers and fathers not only wanted to teach their children about their religious beliefs, they also reportedly strove to become models of what they were teaching their children. Many reportedly drew on a commitment to God and their religious faith as a guide for what and how they should be as parents. The findings of this study suggest that many parents are deeply devoted to their adolescent children's religious/spiritual development (as well as their overall development). Indeed, in many cases, their identities as "mothers and fathers of faith" were centered on being what they believe God desires them to be in order to help their children know and love God (and/or their respective religious tradition). Being the kind of parent that lovingly and authentically helps their children experience positive religious/spiritual development was, reportedly, the most important objective for many of these parents. We were inspired by parents' level of thoughtful commitment to God, by their commitment to their faith community and its members, and by their commitment to their children. Growth toward the kind of spiritual and religious life we call Generative Devotion was exhibited by many parents.
Study 6. Beyond Religious Rigidities: Religious Firmness and Religious Flexibility as Complementary Loyalties in Faith Transmission
The sixth and final study (Dollahite et al. 2019c) we will discuss explored how religious parents strove to balance firmness and flexibility in their efforts to transmit or pass on their faith to their children. The Pew Research Center (2009) found that 44% of Americans reported that they had left the religious affiliation of their childhood. Additionally, more recent national data indicate that 78% of the expanding group of those who identify as religiously unaffiliated ("Nones") reported that they were raised in "highly religious families" (Pew Research Center 2016). We suggested that this may be, in part, associated with religious parents exercising excessive firmness with inadequate flexibility, i.e., rigidity. We found examples of (a) religious firmness, (b) religious flexibility, and (c) efforts to balance and combine firmness and flexibility. Examples of firmness and flexibility included those related to (a) religious practices and (b) religious beliefs, as outlined next.
Firmness in family religious practices. A Lutheran mother said: "I can't imagine not going to church on Sundays. And as ritual as that is, I just can't imagine not [going]." An African American Baptist father said, "There are Sundays when [the kids] don't want to go, [but still] I said, 'We have to, you have to go to church.' I mean, that's just a practice of this family."

Flexibility in family religious practices. Martha, a Lutheran mother, said, "[T]here's probably a couple of times that we dragged [my son] to church and he wanted to do other things, or sports related things. But mostly we let him do his sports instead of church." Abigail, a Reform Jewish mother, shared, "[B]ecause we're tired on Friday night, we don't get to synagogue as much as we want to. And, because of other time commitments, there's just never enough time to do as much as maybe we should for the Jewish community."

Integrated firmness and flexibility in religious practices. Banafsha, a Muslim mother, in connection with the religious practice of salat (Islamic prayer five times daily), shared an example that combined both firmness and flexibility: "We don't want to delay the prayer of anybody. If they are studying, they can pray in their room and keep studying [and] not wait for the other ones . . . we didn't want to make it hard for anybody." Many participants reportedly manifested both firmness and flexibility in connection with religious practices. We now move from practices to beliefs.
Firmness in religious beliefs. A Muslim father said, "If it is something that has already been prescribed religiously, then there is no discussion." A Chinese Christian mother shared her beliefs on marriage that stem from the Bible when she said, "This is the principle; we could not change the order."

Flexibility in religious beliefs. A Jewish mother spoke of her view regarding perspectives on gender in worship: "I have a problem with gender roles [in] religion in general, so I ignore them. I don't abide by them. . . . Like in Orthodox [Judaism], . . . I don't agree with the idea of having women and men separated during ceremonies. Women are not allowed on the bimah [podium from which Torah is read] and you can't listen to a woman's solo voice and I just don't believe in that."
A Chinese Christian mother spoke of her view that tithing should be flexible: "We offer money at church. We all know how we should do, everyone should tithe. But this proportion should be flexible rather than fixed because the condition[s] of families are different. Those families which are in difficulties should adjust." A Latter-day Saint mother, asked whether when confronting a problem she would personally turn to sacred or secular sources, reported, "I would read both. I would give more weight to what was said in the religious publication but I would read a lot everywhere, hoping to find [useful information]."

Study 6's implications for positive religious/spiritual development. In Study 6, we framed the processes of religious firmness and flexibility such that each process involves an important kind of loyalty. Thus, religious firmness is centered in loyalty to God and to that which serves to directly uphold or represent God, e.g., sacred texts, faith tradition, faith community, and divine commandments. Religious firmness is often reflected in (a) religious beliefs that, due to perceived divine origin, are non-negotiable and not subject to personal abrogation, and is also evident in (b) religious practices that are held sacred and inviolable and thus take precedence over other nonreligious or personal activities. Such practices are often maintained even in the face of personal and familial inconvenience or preferences. Similarly, religious flexibility is centered in loyalty to family members (and other loved ones) by maintaining sensitivity to their needs, challenges, and circumstances. For faith communities and for families themselves, integration between these two complementary loyalties may be needed to optimize personal and family wellbeing in the context of acceptance of divine mandates and expectations. Our findings regarding religious firmness and flexibility suggest that parents who wish to best facilitate positive religious/spiritual development in their children would be wise to find ways to balance and integrate religious firmness and religious flexibility. Parents who desire their adolescent children's religious/spiritual development to be positive and optimal would seek to engage with their youth in ways that respect their agency, their interests, their changing circumstances, and their daily schedules.
Summary and Suggestions for Future Scholarship
In terms of understanding and promoting positive religious/spiritual development in youth, our work in the American Families of Faith project reflects our belief that it is important for scholars to explore (a) the ways that parents and youth talk with each other about religious and spiritual matters and how this dialogue influences positive religious/spiritual development, (b) the kinds of religious sacrifices that youth are asked to make and the reasons they are willing (or unwilling) to make such sacrifices and how these sacrifices influence positive religious/spiritual development, (c) the anchors of religious and spiritual commitment present in the lives of religious youth and how those anchors influence positive religious/spiritual development, (d) the catalysts of religious exploration in the lives of religious youth, the strategies they use in religious exploration, and how those relate to positive religious/spiritual development, (e) what religious parents believe are the most important things they can do to support and facilitate their adolescent children's positive religious/spiritual development, and (f) how religious parents can balance religious firmness with religious flexibility in ways that are more likely to promote positive religious/spiritual development in their adolescent children. The studies on youth, parents, and faith reviewed here suggest that positive religious/spiritual development involves a set of complex and dynamic processes that deserve careful study by scholars. Our own sustained study of religious youth and their parents has identified a number of related processes, as discussed next.
Importance of family context. Our work highlights the importance and benefits of studying youth religious/spiritual development in the context of their families-particularly with sensitivity to their parent-child relationships. We have found that interviewing youth and parents together allows for meaningful, candid, transactional, and insightful conversations about youth religious/spiritual development to occur. It is true that when adolescents are interviewed without their parent(s) present, the interviewee may feel freer to speak of difficult or sensitive issues. Solitary interviews would be more effective, for example, when exploring anti-social or high-risk behavior in youth. However, these have not been our aims in the American Families of Faith project. In the strengths-based approach we employ, we intentionally and purposively seek referrals to exemplar families and strive to uncover the secrets to the familial and religious success that clergy perceive in these families. With such an aim, the ideal approach and method are arguably different, and we posit that there are also informational benefits that can accrue when youth are interviewed with their parent(s) present. During interviews with parents and their adolescent children, we noticed that they reminded each other, challenged each other, filled in gaps, and otherwise complemented each other. Over the past two decades, we have found ourselves increasingly convinced of the utility and value-added contributions of employing what Handel (1996) has called whole family methodology. We believe that future scholarship on youth religious/spiritual development would continue to benefit from consideration of the ways that youth and parents (and ideally, siblings and grandparents) across many faiths influence youth religious/spiritual development.
Importance of diverse samples. Certainly, different research projects have varying aims, objectives, and widely ranging central research questions. Our work also has emphasized the importance and benefits of studying religious/spiritual development across diverse religious and ethnic communities using the same questions and methods of analyses. In terms of providing meaningful contexts for youth religious/spiritual development, our analyses of an array of eight religious-ethnic communities have demonstrated that there are both important similarities across diverse faiths and also important differences (Dollahite and Marks 2018, 2019). Much work is left to accomplish. For example, in terms of diversity it is important to further explore how religious/spiritual development differs across various samples and contexts, e.g., religious, socio-economic, racial-ethnic, and national. We have begun some work in this area, as manifest by a special issue (54:7) of Marriage and Family Review. That special issue (and subsequent book, Dollahite and Marks 2019) is devoted to marriage and parenting across eight religious-ethnic communities (Asian American Christian families, Black Christian families, Catholic/Orthodox Christian families, Evangelical Christian families, Jewish families, Latter-day Saint families, Mainline Protestant families, and Muslim families). Multicultural and pluralistic efforts like this help to broaden and deepen the related empirical literature beyond the white, middle to upper-middle class, Christian samples that have dominated social science research on religion in the past (Jones 2016). The present special issue of Religions (edited by Professors Abo-Zena and Rana) is a richly diverse and textured example of what we envision for the future.
Contributions of qualitative data. In addition to calling for richer and more diverse samples and contexts, methodology is of vital concern in future work. We believe that it is important for scholars to bring a variety of methods to bear in the study of youth religious/spiritual development. Because we believe that it is important to carefully explore the in-depth perspectives of youth and parents, our work has emphasized qualitative methods (while in some cases combining these with basic and descriptive statistical analyses). By asking youth and parents to discuss their spiritual beliefs, religious practices, and faith communities in detail, scholars are able to learn first-hand about youth's ideas, experiences, values, concerns, opinions, and narratives regarding those things that most influence them.
We believe that an integrated mixed-method approach to measurement is most likely to yield in-depth, nuanced, and meaningful information about youth religious/spiritual development. Given the dynamic nature of religious and spiritual development, the varieties of lived experience, the existential wrestle of meaning making, and the complexities inherent at the nexus of faith and family processes, we have consistently advocated for gold-standard quantitative and qualitative research that is more fitting for the challenge than traditional, cross-sectional, correlational work (Marks and Dollahite 2011, 2017). We see a particular need for work that pushes past the "whats" to the "whys" (meanings) and "hows" (processes) involved at the nexus of faith and family life where youth development is impacted. To date, longitudinal (quantitative) and narrative (qualitative) approaches seem particularly promising (for examples of related longitudinal quantitative work, see Bengtson et al. 2013; Smith and Snell 2009). We are acutely aware that the approaches we are recommending are time intensive. If there is a short route to understanding the complexities at the nexus of faith, family, and youth development, we have been unsuccessful in discovering it.
It is possible to learn important things about youth religious/spiritual development using online surveys, particularly those that include open-ended questions (cf. Hardy et al. 2015; McMurdie et al. 2013). Textual response options allow youth and parents to write about their past and present spiritual experiences (or lack thereof) and important changes in their past-to-present religious/spiritual development (or lack thereof), allowing a process-oriented view to emerge, versus a single, cross-sectional view (Marks and Dollahite 2011). We would make the analogy of comparing videos to snapshots. However, we believe that in-depth interviews can allow for even more fine-grained exploration of spiritual experiences and changes in religious/spiritual development. In sum, we believe that it is helpful to explore positive religious/spiritual development in various contexts: within the family, among diverse religious communities, among diverse ethnic communities, and using diverse methods (qualitative, quantitative, and mixed methods).
What Are We Missing?
In addition to the suggestions above, we also think it would be helpful to explore positive religious/spiritual development among youth in relation to the following: (a) social and political activism, (b) popular media and music, (c) participation in secular activities (e.g., sports, arts, gaming), (d) wrestling with BIG questions (i.e., questions involving Being, Intimacy, and God), (e) conversion and disaffiliation, (f) interfaith knowledge and experience, (g) impactful personal experiences, (h) volunteerism and service, (i) religious rituals, ceremonies, and traditions, (j) mental illness, (k) mindfulness and meditation, (l) temperament and personality, (m) agency and personal choices, (n) sexual orientation and experiences, (o) the dark side of religion, and (p) generative devotion. While some of these have been explored in relation to child or adult spirituality, few have been investigated with youth. And while it might be difficult for any one study to address all these issues simultaneously, we think that looking at combinations of these issues would be of value. Because this section is not intended as a review of the literature, and because search engines make finding articles on various topics fairly straightforward, we will only cite a few studies on the topics mentioned.
Social and political activism. Emphasis on and excitement surrounding politics, social causes, activism, and other ways of trying to make a difference in society have surged since 2016. Exploring how such activism might flow from or influence a young person's religious belief and experience would be helpful. For example, youth could be asked if and how their religious/spiritual development intersects with social and political issues they care about, such as striving toward greater social justice, alleviation of poverty, fighting human trafficking, promoting gender equity, securing human rights, and fighting racism.
Popular media and music. Much of popular culture directed toward youth either ignores religion or portrays people who take their faith seriously in negative ways. Most popular music is quite secular and tends to promote values contrary to those espoused by most world religions. Mass media often portrays religious faith as irrelevant at best and an enemy at worst. It would be helpful to study how youth engagement with popular music and media are influenced by religious commitments and vice versa. For example, youth could be asked how they believe that their religious/spiritual development has intersected with religious and secular media and music.
Participation in secular activities. We think that exploring the ways that religious/spiritual development influences how youth engage with a variety of secular activities such as sports, education, arts, and gaming can provide an important window into the religious and spiritual lives of youth. For example, youth could be asked about how their religious beliefs and commitments have influenced or are influenced by their engagement with education, recreation, and other activities.
Wrestling with BIG questions. We have previously proposed that part of religious/spiritual development may involve wrestling with what we call the BIG questions (questions on Being, Intimacy, and God). We proposed that young people would be well served by thinking about and trying to find answers to such existential questions (see Marks and Dollahite 2017, pp. 6-7) because such answers could help them make crucial life decisions that would have significant impact on their psychological and relational wellbeing across the lifespan. We posit that scholars could better understand positive religious/spiritual development if youth were asked, for example, if and how they believe their faith community helps them address such core ontological questions as "Who am I?" and "What is my purpose in life?" and "What does God expect of me?" and "What is my mission in life?" Our own conversations with more than 80 religious youth, as well as hundreds of youth with whom we have worked in the youth programs in our own faith communities, indicate that many of them think about these questions and look to their faith for answers to these and many other BIG questions.
Conversion and disaffiliation. For decades, evidence has suggested that many older youth and young adults are more likely to report being less involved with institutional forms of religion (e.g., with congregations). Recent evidence indicates that religious institutions are losing more youth and getting fewer back. With the increasing trend toward less affiliation with religious institutions present among younger Americans, studies investigating why youth leave (disaffiliation), why they stay away, and why they return (reconversion) would be valuable. Studies could investigate the factors that lead to youth choosing to leave the faith they were raised in and, for those who do, what factors lead them to choose to return to that faith. Another important area of study would be how youth can become what clinicians call "transitional characters" or someone who dramatically changes the trajectory of intergenerational family patterns. For example, this would involve study of youth who come from a family with generations of religious involvement who decide to leave that faith or leave faith altogether or, conversely, to study youth who come from a family with a history of religious non-involvement who decide to embrace a serious commitment to a faith community.
Interfaith knowledge and experience. With increasing emphasis in schools, social media, and popular media on tolerance (and even embracing) of differences, including religious differences, it appears that more youth will have better knowledge of and meaningful experiences with those of other faiths or no faith. To what extent such interfaith knowledge and experience might influence positive religious/spiritual development would likely be a fruitful area of scholarly endeavor. For example, youth could be asked about how their understanding of other faiths or their experiences with members of other faiths has influenced their religious/spiritual development.
Impactful personal experiences. Perhaps one of the most significant areas of future investigation would be careful exploration of the potential role of impactful personal experiences. By impactful we mean experiences of such potency and meaning that they influence the direction of one's religious life. Those might be purely or mostly spiritual and religious experiences that could include transcendent experiences (sometimes called mystical experiences) where a youth reports some kind of encounter with God. They could be experiences that lead to conversion or disaffiliation, perceived answers to prayer, or otherwise feeling some kind of divine guidance, protection, or influence. We posit that exploring the ways impactful personal experiences might influence the religious/spiritual development of youth is an exciting frontier of investigation.
Volunteerism and service. In addition to more vertical, that is human to divine, transcendent experiences, impactful personal experiences might also include deeply influential horizontal experiences where youth choose to serve or relate with others in the human family in ways that directly impact the youth themselves spiritually and/or religiously. Examples include missionary service, religious pilgrimages, and serving the underprivileged through outreach and/or humanitarian efforts whether directly faith-based or not. Various studies have shown that religion promotes, encourages, and facilitates service (Smith and Davidson 2014) including in youth and young adults (Smith and Denton 2005;Smith and Snell 2009). Exploring the ways that such service influences positive religious/spiritual development in youth would help us see if, how, and why working directly to do good in the world reflects, confirms or weakens religious identity and commitment. For example, youth could be asked whether they believe that their participation in a service project or mission trip has influenced them spiritually and, if so, how.
Religious rituals, ceremonies, and traditions. A number of studies indicate the power of rituals in shaping and reflecting religious identity and commitment (Chelladurai et al. 2018). We encourage investigation of the ways that personal, family, and community rituals, ceremonies, and traditions influence positive religious/spiritual development in youth. For example, youth could be asked about whether any religious ritual, ceremony, or tradition they have participated in has been meaningful or influenced their personal lives, and if so, how and why.

Mental illness. Studies have found that religiosity among teenagers often serves as a protective factor against mental illnesses (Wong et al. 2006), including depression (Pearce et al. 2003). Given the high and increasing rates of adolescent suicide, often accompanied by depression and anxiety, it would be important for scholars to study the relationship between mental illness and religious/spiritual development among adolescents. For example, youth could be asked about their own experiences with depression and anxiety and their religious lives. What are the most important aspects of religious belief, practice, and community that serve to help youth cope with depression and anxiety? Are there ways that some aspects of religious belief, practice, and community serve to increase depression and anxiety among youth? Are there ways that religious parents and leaders can help youth draw from religious sources to combat anxiety and depression?
Mindfulness and meditation. Mindfulness training is sweeping America. Meditation has been practiced by religious and nonreligious people across centuries and cultures. Many adults have been exposed to such training and practice and, presumably, find value in it. It would be interesting to know whether, among youth, various aspects of traditional religious practices such as prayer and meditation on sacred texts may serve a similar function. Since many nonreligious youth may find value in mindfulness and meditation it would be interesting to know to what extent and in what ways they consider these practices to be part of their spiritual lives.
Temperament and personality. There are a number of possible avenues of exploration on potential interactions between a person's temperament and personality and their religious/spiritual development. For example: Do more introverted youth prefer more solitary or small group approaches to spirituality? Do more impetuous youth make more dramatic religious choices and changes, for example, conversion from one faith to another? Are more intellectually oriented youth more likely to leave or join some faiths compared to others? Are persons with some personality types more likely to fully reject or embrace spirituality and religion?
Agency and personal choices. Most social science research focuses on the factors that are thought to influence, cause, or determine various human behaviors and attitudes. Less research acknowledges and accounts for human agency or choice. While there is value in looking for statistical correlations between variables, we believe that it is also important to ask youth (and adults) about choices they have made regarding their spiritual and religious lives and decisions. For example, youth could be asked about what personal choices they have made regarding their spiritual and religious commitments, beliefs, and practices. They can be asked to what extent they feel their religious lives have been determined for them or are a result of their own choices. They could be asked, if they had a chance to do it over, what different choices they wish they might have made, or what choices they think they will make when they are older.
Sexual orientation and experiences. Some studies indicate that sexual orientation can influence religiosity and spirituality. While there has been much written about how more traditional religions and more progressive faiths have addressed LGBTQ issues, there is room for scholarly study of how sexual orientation might intersect with positive religious/spiritual development. Since sexual behavior is regulated in most world faiths, there has been much written about whether greater religiosity leads to sexual fidelity, sexual repression, sexual dysfunctions, and so forth. What has not been fully explored is how youth feel about the teachings regarding sexuality in their faiths and how sexual abstinence or involvement influences their personal spirituality. Sexual abuse and trauma have both received increasing scholarly and media attention (e.g., with the #MeToo movement), and it would be timely to study connections between sexual abuse and trauma in youth and their religious/spiritual development.
The dark side of religion. This article has focused on positive religious/spiritual development and the salutary role that religious and spiritual involvement tend to have on both youth and their families. While these beneficial connections recur in the empirical literature, there is more to the story. Recent work, including our own, has emphasized that religion can both unite and divide (Kelley et al. 2019). Indeed, "Like a rope that can be used to helpfully bind in some situations-or annoyingly chafe and burn at other times-religious commitments that reportedly help unify many . . . [families] may produce tension, irritation, and conflict [when] . . . commitment to a faith is not shared" (Marks and Dollahite 2017, p. 59, italics in original). Further, religion can be profoundly helpful or deeply harmful depending on what individuals and families actually do as a result of their beliefs (Dollahite et al. 2018a;Marks and Dollahite 2017). Balanced scholarship must attend to constructive and destructive elements of religion for youth and their families.
Toward generative devotion. We hope to see research that explores the ideas of the theory of generative devotion (Dollahite et al. 2019b) as it applies to positive religious/spiritual development in youth. Given that generative devotion is about being religious in ways that attend to the wellbeing of family members, and given that generative devotion is other-oriented, responds to the needs of persons, respects others' agency, and is relational in nature, we would appreciate seeing additional studies that address these issues. For example, youth could be asked to what extent they believe that their spiritual and religious lives prepare them to be caring and responsive to others and to relationships. Or they could be asked if and how they think their spiritual and religious commitments help them honor the choices and agency of others. The theory of generative devotion also highlights the problems caused by what we call destructive devotion (Dollahite et al. 2019b). Accordingly, research focused on how to nurture positive religious/spiritual development in youth while avoiding destructive expressions is also critical.
Conclusions
Religious and spiritual involvement tends to have a range of benefits for adolescents. Yet selfish and destructive devotion can be harmful to youth, as well as to children and adults. Therefore, scholarly investigation of the various ways that youth develop religiously and spiritually in ways that are positive for themselves and others will continue to be an important endeavor to better understand how to facilitate the wellbeing of future generations. | 2019-10-03T09:12:34.586Z | 2019-09-25T00:00:00.000 | {
"year": 2019,
"sha1": "acb8c7fadb05ca13b4ef951fb27fe4d7d4d13871",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-1444/10/10/548/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "277d02c556a56fecf54c025ece99f7b8d2e48530",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
29886552 | pes2o/s2orc | v3-fos-license | The Acute Uraemic Emergency
The causes of and differences between acute and chronic renal failure are analysed, as are the clinical methods enabling their diagnosis.
Starting with this issue, the Journal will have a systematic CME section. It is our response to the readership survey (JRCPL 1996;30:246-251). The first topic is renal disease, which we will continue in the March issue. It draws heavily on the topics and speakers of the CME day at the College on 30 September 1996, which was also organised by Dr Winearls. We acknowledge with thanks an educational grant from Janssen-Cilag Ltd.

Figure 2. Ultrasound of the kidney of a patient with a 6-year history of bladder outflow symptoms whose renal impairment had been attributed to diuretics used for treatment of cor pulmonale associated with emphysema. He presented as an acute uraemic emergency with severe acidosis and hyperkalaemia. Following bladder catheterisation, serum creatinine stabilised at around 250 µmol/l. Severe parenchymal thinning is shown, together with marked hydronephrosis.

Hydronephrosis is a very sensitive sign of obstructive nephropathy; parenchymal loss suggests that this is long-standing (Fig 2).
Previous measurements of serum creatinine should be obtained from hospital notes, including those from other hospitals, and from laboratory records.

Figure 3. Serum creatinine, plotted on an inverse reciprocal scale, in the patient whose ultrasound scan is shown in Figure 2. Extrapolation of the points obtained by 1994 could have shown that the patient was destined to develop end stage renal failure by 1996.

There is no place for fluid challenges in an already fluid-overloaded patient with oliguria; the frequent result is pulmonary oedema. Conversely, severe hypovolaemia is often undertreated with fluid replacement. Serum potassium must be measured daily or more often, and hyperkalaemia treated (with dextrose and insulin, salbutamol, correction of acidosis, and oral or rectal resonium resin) and the myocardium protected with intravenous calcium (unless the patient is on digoxin); refractory hyperkalaemia is an indication for dialysis.
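The reciprocal-creatinine extrapolation described for Figure 3 lends itself to a short numeric illustration. The sketch below (Python) is illustrative only: the dates, creatinine values, and end-stage threshold are hypothetical assumptions, not the patient's actual data.

# Minimal sketch: fitting a line to 1/creatinine over time and extrapolating
# to an assumed end-stage threshold. All values are hypothetical.
import numpy as np

years = np.array([1991.0, 1992.0, 1993.0, 1994.0])   # observation times
creatinine = np.array([150.0, 190.0, 250.0, 360.0])  # serum creatinine, umol/l
reciprocal = 1.0 / creatinine                        # 1/Cr declines roughly linearly

slope, intercept = np.polyfit(years, reciprocal, 1)  # least-squares line

esrf_creatinine = 1000.0                             # assumed end-stage threshold, umol/l
year_esrf = (1.0 / esrf_creatinine - intercept) / slope
print(f"Projected year of end-stage renal failure: {year_esrf:.1f}")

Any such extrapolation is only as reliable as the approximate linearity of the reciprocal plot over the period observed.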
Further renal insults must be avoided if possible: in particular nonsteroidal anti-inflammatory drugs should be eschewed and aminoglycosides used only when there is no good alternative and with careful monitoring of blood levels. The outcome is often determined by whether or not secondary infection occurs.
Most importantly, advice from a renal unit should always be sought in case of doubt.
Dopamine
Dopamine increases renal blood flow in normal subjects, and acts as a
Outcome
The outcome of the acute uraemic emergency clearly depends on the cause, and on whether renal failure is found to be acute or chronic. In acute renal failure, recovery of renal function is expected in 90% of uncomplicated cases, 40-50% in cases with combined renal and respiratory failure, and 5-10% in those with multiple organ failure. The prognosis for recovery of renal function is poorer in the elderly, in whom the entity of acute irreversible renal failure is increasingly recognised [11].

Figure 5. Palpable purpura in a 73-year-old man who presented as an acute uraemic emergency having been admitted 5 days earlier to a rehabilitation ward 'off legs'. The rash was present on admission, as were haematuria and proteinuria on dipstick urinalysis. Serum creatinine was 159 µmol/l on admission, rising to 743 µmol/l prior to transfer. Renal biopsy the morning after transfer showed vasculitis affecting arterioles and venules. Renal function improved, and the rash faded, following treatment with methylprednisolone and cyclophosphamide. | 2018-04-03T01:29:57.893Z | 1997-01-01T00:00:00.000 | {
"year": 1997,
"sha1": "ba80c9dffd0657b1ab35cb632c257ebf42e3983d",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9e72176992f6403c09770ad698ebbd3e3dc9bce4",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
3236814 | pes2o/s2orc | v3-fos-license | Early Ultrasound Assessment of Renal Transplantation as the Valuable Biomarker of Long Lasting Graft Survival: A Cross-Sectional Study
Background: To date, there has been little agreement on the use of ultrasonographic parameters in predicting the long-term outcome after transplantation. This study evaluates whether ultrasonography of the graft performed in the early stage after transplantation is a valuable predictor of long-term outcome. Objectives: The aim of this study was to evaluate the association of ultrasonographic parameters (resistive index [RI], pulsatility index [PI], end diastolic velocity [EDV], graft length, and graft parenchymal volume) measured within the first week after transplantation with 6-month graft function. Patients and Methods: A cross-sectional study was performed on 91 living-donor renal transplant recipients (46 males and 45 females) between April 2011 and February 2013. All patients underwent ultrasonography in the first week after transplantation. Intrarenal Doppler indices including RI, PI, and EDV were measured at the interlobar artery level, and the graft length and parenchymal volume were defined with gray-scale ultrasonography. Graft function was estimated at 6 months by the glomerular filtration rate (GFR). Unpaired t-tests and multivariate linear and logistic regression analyses were used to estimate the relationship between ultrasonographic parameters and GFR. Results: Fourteen patients (15.4%) had impaired graft function after 6 months (GFR less than 60 ml/min/1.73 m2). Multivariate linear regression analysis showed a significant correlation between GFR at 6 months and RI, PI, and EDV, with P values of 0.026, 0.016, and 0.015, respectively. Logistic regression analysis showed that GFR<60 ml/min/1.73 m2 at 6 months was significantly associated with RI>0.7 (odds ratio=2.20, P value=0.004), PI>1.3 (odds ratio=2.74, P value<0.001), and EDV<9 cm/sec (odds ratio=1.83, P value=0.03). Conclusions: In this study, kidney transplant recipients with a lower RI and PI and a higher EDV at 1 week showed better graft function at 6 months after transplantation.
Background
Ultrasound is a noninvasive and relatively inexpensive diagnostic tool providing information about renal location, contour and size. Doppler ultrasonography shows kidney morphology and hemodynamics (1). It is widely used to evaluate the graft complications such as obstruction, perirenal collection or vascular complications such as rejection or renal arterial/venous thrombosis (2,3).
Resistive index (RI) has been shown to be the best ultrasonographic parameter for determining renal dysfunction (4,5). It is used as a marker of microcirculation injury and a sequela of interstitial edema of any etiology (6,7). Other parameters such as the pulsatility index (PI) and graft dimensions are also used in this circumstance. It was once believed that ultrasonography was useful only for the discrimination of acute rejection episodes or other complications, but in recent years the literature has shown that RI and some other ultrasonographic parameters can be used to predict long-term graft function. These parameters consist of RI, PI, end diastolic velocity (EDV), and graft length. An RI with a normal value immediately after transplantation is a good predictor of future graft function. RI not only reflects resistance to arterial blood flow in the renal arteries, but is also influenced by proximal factors such as systemic blood pressure (8).
Intrarenal diastolic blood flow has no association with systolic components; therefore, the end diastolic velocity (EDV) obtained from the Doppler wave tracing illustrates pathologic changes in the kidney graft more reliably than RI (8,9).
PI is another Doppler parameter; it places more emphasis on the Doppler wave pattern and has significant value in predicting graft function. On the other hand, some gray-scale parameters, such as graft size, are often used to predict long-term graft function (10).
Objectives
The aim of this study was to evaluate the association of ultrasonographic parameters (RI, PI, EDV, graft length and graft parenchymal volume) measured within the first week after transplantation with 6 months graft function.
Patients and Methods
Between April 2011 and February 2013, a total of 100 patients who underwent renal transplantation in Afzalipour university hospital were enrolled, and a single investigator performed all ultrasonographic examinations with a MEDISON V10 device (ACCUVIX V10, MEDISON Co. LTD, Korea) using a 3.5-5 MHz curved probe (C3-7IM), with patients in the supine position. We consulted the local ethics committee regarding this study; no formal ethical committee approval was required.
Three patients died and six patients were excluded from the study owing to factors that influence the Doppler parameters: three patients due to hydronephrosis, two due to arterial stenosis, and one due to a perirenal collection (1,2). The creatinine level was measured daily until it reached a stable level for each patient. All patients were assessed by ultrasonography at approximately the first week after transplantation, when the creatinine level was normal (below 1.5 mg/dl), and the kidney graft size and parenchymal volume (renal volume minus renal sinus volume) were determined (Figure 1).
RI and PI were calculated by the system software at the interlobar artery level (Figure 2) according to the equations:

RI = (Vmax - Vmin) / Vmax
PI = (Vmax - Vmin) / Vmean

where Vmax is the maximum systolic velocity, Vmin is the minimum diastolic velocity, and Vmean is the time-averaged mean velocity (11). EDV was calculated by the system software. Renal function was evaluated by measurement of serum creatinine (Cr) and estimation of the glomerular filtration rate (GFR); age was measured in years, body weight in kg, creatinine in mg/dl, GFR in ml/min/1.73 m2, and EDV in cm/sec.
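For illustration, the Doppler indices follow directly from the velocity definitions above. The sketch below (Python) uses hypothetical velocity values, not measurements from this study; EDV is taken here as Vmin, a simplification.

# Minimal sketch of the Doppler indices defined above; velocities are hypothetical.
def resistive_index(v_max: float, v_min: float) -> float:
    """RI = (Vmax - Vmin) / Vmax."""
    return (v_max - v_min) / v_max

def pulsatility_index(v_max: float, v_min: float, v_mean: float) -> float:
    """PI = (Vmax - Vmin) / Vmean."""
    return (v_max - v_min) / v_mean

v_max, v_min, v_mean = 40.0, 10.0, 22.0  # cm/sec at the interlobar artery (hypothetical)
print(f"RI  = {resistive_index(v_max, v_min):.2f}")            # 0.75
print(f"PI  = {pulsatility_index(v_max, v_min, v_mean):.2f}")  # 1.36
print(f"EDV = {v_min:.1f} cm/sec")                             # end diastolic velocity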
Patients were classified as displaying decreased graft function when GFR was <60 ml/min/1.73 m2 and normal graft function when GFR was ≥60 ml/min/1.73 m2. Continuous variables were expressed as mean value ± standard deviation. The differences between patient groups were assessed with the unpaired t-test. The degree of correlation between ultrasonographic parameters and GFR was estimated with multivariate linear regression models. Logistic regression analysis was used to estimate the potential association between ultrasound parameters and impaired graft function at 6 months.
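As a sketch of the logistic regression step, the code below fits such a model on simulated data (not the study's records), using the dichotomized predictors reported in this study; numpy and statsmodels are assumed to be available.

# Minimal sketch of the logistic regression step on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 91                                    # cohort size, as in this study
ri_high = rng.integers(0, 2, n)           # 1 if RI > 0.7
pi_high = rng.integers(0, 2, n)           # 1 if PI > 1.3
edv_low = rng.integers(0, 2, n)           # 1 if EDV < 9 cm/sec
logit_p = -2.0 + 0.8 * ri_high + 1.0 * pi_high + 0.6 * edv_low
impaired = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))  # GFR < 60 at 6 months

X = sm.add_constant(np.column_stack([ri_high, pi_high, edv_low]))
fit = sm.Logit(impaired, X).fit(disp=0)
print(np.exp(fit.params[1:]))             # odds ratios for RI>0.7, PI>1.3, EDV<9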
Results
Of the 91 patients with a mean age of 36.9±10.7 years (range, 14-69 years), 46 (50.6%) were male and 45 (49.4%) were female. They were followed up for 6 months after transplantation. Fourteen patients (15.4%) had impaired graft function after 6 months (GFR less than 60 ml/min/1.73 m2). The medians of RI and PI at the first week after grafting were 0.71 and 1.2, respectively. Mean RIs were 0.68±0.07 and 0.79±0.07 in patients with normal graft function and graft dysfunction at 6 months after transplantation, respectively (Table 1). The mean PIs in patients with stable graft function and graft dysfunction at 6 months were 1.17±0.25 and 1.7±0.54, respectively, and under the same conditions the mean EDVs were 9.46±3.6 and 6.6±2.9 cm/sec, respectively (Table 1). This means that patients with stable graft function at 6 months had a lower RI and PI and a higher EDV. The independent t-test showed significant differences between the mean RI, mean PI, and mean EDV of patients with normal and impaired graft function at 6 months (P value<0.001, P value<0.001, and P value=0.002 for RI, PI, and EDV, respectively). The groups demonstrated no difference in either graft length (P value=0.801) or parenchymal volume (P value=0.617) (Table 1). Multivariate linear regression analysis showed a significant correlation between GFR at 6 months and RI (P value=0.026), PI (P value=0.016), and EDV (P value=0.015) during the first week post transplantation. No association was found between graft length or renal parenchymal volume and future graft function (P values=0.668 and 0.56, respectively). Logistic regression analysis demonstrated significantly greater odds of decreased graft function at 6 months post transplantation among patients with RI>0.7 (odds ratio=2.20), PI>1.3 (odds ratio=2.74), and EDV<9 cm/sec (odds ratio=2.1) (Table 2).
Discussion
Ultrasonographic parameters are widely used not only to evaluate present graft function but also as predictive factors of the long-term outcome of the renal transplant. These parameters have been shown to correlate with short-term renal transplant function determined by serum creatinine or GFR. However, their relationship with long-term function is more controversial, with conflicting results in the literature (9).
The results of this study indicate that RI, PI, and EDV in the early phase after transplantation are significantly associated with renal dysfunction 6 months after transplantation. Intrarenal RI is influenced not only by arterial resistance but also by various extrarenal factors such as the systemic blood pressure. Therefore, the RI as a sole element in distinguishing various causes of graft dysfunction has limited value. On the other hand, an increase in RI can be induced by any intrarenal condition, such as acute renal failure or urinary tract obstruction, that induces a reduction in diastolic renal perfusion (10,11). PI is altered by both physiologic and pathologic conditions. The status of cardiac function, the systemic circulation, and flow resistance are important factors that influence PI. Unlike the RI, the PI is calculated from the whole wave shape over one cardiac cycle, and therefore the PI is a better indicator of graft function than the RI (12). Furthermore, intrarenal diastolic flow reflects high vascular resistance within the renal allograft circulation more specifically than RI (8). EDV is measured simply, and if it is measured in parallel with the RI, it provides more specific information about renal resistance to flow. Low diastolic flow is an indicator of poor prognosis for graft survival. Previous studies have shown a significant correlation between long-term graft function and RI, PI, and EDV.
Radermacher et al. (5) showed that RI is the best predictor of graft failure. McArthur et al. (10) demonstrated that assessment of PI and RI in the early post-transplantation period is significantly associated with long-term transplant outcome, including 1-year GFR and transplant survival. Buturovic-Ponikvar et al. (9) demonstrated that RI, EDV, and graft length have predictive value in estimating long-term graft function. Barba et al. (13) found that RI measured as early as 24 hr after surgery can predict long-term graft survival. Adibi et al. (14) showed that early determination of RI and PI could predict long-term graft function in kidney transplant recipients. Kramann et al. (15) published a paper in which they described that only RIs evaluated between 12 and 18 months after transplantation can predict long-term graft function. On the other hand, some researchers did not support this correlation. Loock et al. (16) and Garcia-Covarrubias et al. (17) have shown no correlation between future graft function and numerous parameters of kidney ultrasonography. Like most of the previous studies, we found that RI, PI, and EDV measured within the first week after grafting correlate significantly with graft function at 6 months of follow-up. Interestingly, transplant recipients with a higher RI, lower EDV, and especially a higher PI at early stages showed impaired graft function at 6 months after transplantation. This means that RI>0.7, PI>1.3, and EDV<9 cm/sec could be considered indicators not only of vascular complications but also of poor graft outcome. Buturovic-Ponikvar et al. (9) observed that graft size before transplantation is significantly associated with creatinine clearance at 12 months. This experiment did not detect any correlation between renal parenchymal volume or graft length and future graft function. It seems possible that the results of Buturovic-Ponikvar et al. were due to measuring kidney size by weighing the graft before transplantation, where it showed a significant correlation with long-term graft function, whereas we assessed graft size after transplantation.
In recent years, the management of renal transplant recipients has progressively improved. Moreover, it is necessary to use available short-term tools to predict long-term graft function; applying such factors enables better management of transplants. In spite of the controversies regarding ultrasound assessment in kidney transplants, it is considered a prognostic marker for long-term graft survival. This study focused on evaluating whether ultrasonography performed in the early period after transplantation would be a valuable predictor of long-term outcomes.
Our findings demonstrate that, of the different ultrasonographic parameters measured within the first week after renal transplantation, only the early evaluation of RI, PI, and EDV was effective in the estimation of long-term graft function. | 2016-05-12T22:15:10.714Z | 2014-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "71cbaf0ae1610673592e3f3b04ef86afbd99ed7b",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc3955852?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "383cdfe318981565ca83334f9d12178a1f5f7108",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14566591 | pes2o/s2orc | v3-fos-license | Assessment of Normal Sagittal Alignment of the Spine and Pelvis in Children and Adolescents
Aim. We aimed to determine spinopelvic balance in 8–19-year-old individuals in order to assess pelvic and spinal parameters in the sagittal view. Methods. Ninety-eight healthy students aged 8–19 years, who lived in the central parts of Tehran, were assessed. Demographic data, history of present and past diseases, height (cm), and weight (kg) were collected. Each subject was examined by an orthopedic surgeon and spinal radiographs in the lateral view were obtained. Eight spinopelvic parameters were measured by 2 orthopedic spine surgeons. Results. Ninety-eight subjects, among which 48 were girls (49%) and 50 boys (51%), with a mean age of 13.6 ± 2.9 years (range: 8–19) were evaluated. Mean height and weight of the children were 153.6 ± 15.6 cm and 49.9 ± 13.1 kg, respectively. Mean TK, LL, TT, LT, and PI of the subjects were 37.1 ± 9.9°, 39.6 ± 12.4°, 7.08 ± 4.9°, 12.0 ± 5.9°, and 45.37 ± 10.7°, respectively. Conclusion. Preoperative planning for spinal fusion surgeries via applying PI seems reasonable. Predicating "abnormal" of lordosis and kyphosis values alone, without considering overall sagittal balance, is incorrect. The means of SS and TK in our population are slightly less than those in Caucasians.
Introduction
Various parameters have been introduced to describe the sagittal alignment of the spine and pelvis. Sagittal spinal and spinopelvic parameters differ between adults and children, but these parameters correlate with each other to maintain global balance in both groups. There is no proper description of sagittal spinopelvic balance parameters, their characteristics, and their relationships in children. A correct concept of the normal spinopelvic balance of children would effectively help spinal surgeons in assessing spinal deformities and in proper treatment planning. The human standing posture is the result of balance between the spine and pelvis [1]. Thoracic kyphosis (TK) and lumbar lordosis (LL) are also in balance with each other in the normal standing posture, so that a minimal amount of energy is used for maintaining posture [2]. Global sagittal balance must account for the position of the head in relation to the spine and pelvis [3]. The sagittal profile of the spine is usually characterized as being kyphotic between T1 and T12 and lordotic between L1 and L5, but this is not necessarily the case. The differences between normal and pathologic curvatures are less clear in the sagittal plane than in the coronal plane [4][5][6]. Some studies investigated the amount of normal spinal sagittal curves [7][8][9], while others evaluated alignment, morphology, and pelvic parameters in children [6,[10][11][12][13]. Several studies showed that pelvic sagittal morphology affects standing balance in adults, especially when LL changes [1,14,15]. It has also been proven that pelvic incidence (PI) after adolescence remains relatively constant [10,14]. TK is one of the main sagittal spinal parameters and shows different values in different studies [16], partly due to unclear visualization of the T1-T4 vertebrae in lateral spinal radiography [17] and mainly due to various methods of TK measurement; T1-T12 [18], T2-T12 [7], T4-T12 [19], and even T5-T12 [8] have been used to calculate the normal range of TK. There is no consensus on pelvic sagittal geometry in relation to the spine in normal children. In addition, abnormal patterns that develop with aging correlate with sagittal curve patterns in childhood [20]. Most papers published in this field have studied white people [19], and to the best of our knowledge there are only a few studies on Asians [19,[21][22][23] and none in Iran. Thus we aimed to determine spinopelvic balance in 8-19-year-old Iranians.
Methods
Subjects of our study were 98 healthy students (50 boys and 48 girls) aged 8-19 years, who lived in one of the central parts of Tehran. The study was approved by the ethical committee of our university. The goals and design of the study, as well as the X-ray exposure, were fully explained, and those children and parents who accepted the principles of the study were recruited. Demographic data, history of present and past diseases, height (cm), and weight (kg) of all children were recorded. Each subject was examined by an orthopedic surgeon (3rd author). Children with more than 1 cm difference in leg length, history of trauma, present or past pelvic or spinal pain, disorder or abnormality, deformity proven via Adam's test, or signs of hip disorder were not included. In total, 106 subjects met these inclusion criteria. A long cassette (30 cm × 90 cm) was chosen; children were asked to place their right side close to the cassette in a relaxed standing position, with their shoulders flexed 90 degrees and elbows fully flexed so that their fingers touched their ipsilateral shoulder. The X-ray source was placed at a distance of 120 cm from the cassette. If the femoral head or 7th cervical vertebra was not clearly seen in the radiograph (eight subjects), the subject was excluded. Eight spinopelvic parameters were measured on each radiograph of the 98 subjects, separately by 2 orthopedic spine surgeons. Neither was aware of the other surgeon's measurements. The recorded values of each surgeon for each radiograph were compared, and in case of any inconsistency, the values were recalculated by a 3rd orthopedic spine surgeon (4th author). Assessed landmarks were the superior end plates of T1, L1, and S1, the center of the C7 body, the anterosuperior points of the T1 and L1 bodies, the anteroinferior points of the T12 and L5 bodies, the center of the sacral plate, and the center of the femoral heads. If two femoral heads were seen, the midpoint of the connecting line was selected. As shown in Figures 1, 2, and 3, the parameters measured were thoracic kyphosis (T1-T12), lumbar lordosis (L1-L5), thoracic tilt (TT), lumbar tilt (LT), pelvic tilt (PT), pelvic incidence, sacral slope (SS), and sagittal vertical axis offset (SVA). Pelvic, lumbar, and thoracic tilts were assumed positive if directed forwards and negative if directed backwards. Thoracic kyphosis (TK) is the angle between lines drawn from the T1 superior end plate and the T12 inferior end plate. Lumbar lordosis (LL) is the angle between lines drawn from the L1 superior end plate and the L5 inferior end plate. Sagittal vertical axis offset is the distance between the posterosuperior point of the sacral plate and the plumb line drawn from C7. Thoracic tilt (TT) is the angle between the vertical line and the line joining the anterosuperior point of the T1 body and the anteroinferior point of the T12 body (as shown in Figure 2); lumbar tilt (LT) is defined analogously from the anterosuperior point of the L1 body and the anteroinferior point of the L5 body. SS is defined as the angle between the horizontal line and the superior end plate of the sacrum. PI is defined as the angle subtended by the line drawn from the center of the femoral head to the midpoint of the sacral end plate and a line perpendicular to the center of the sacral end plate. PT is defined as the angle between the vertical line and the line joining the middle of the sacral end plate and the hip axis (as shown in Figure 3).
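For illustration, each of these angles can be computed from the digitized coordinates of two landmark lines. The sketch below (Python) computes a TK-style angle from hypothetical endplate coordinates, not measurements from this study.

# Minimal sketch: the angle between two endplate lines, each given by two
# digitized landmark points. Coordinates are hypothetical.
import numpy as np

def angle_between(p1, p2, q1, q2):
    """Angle in degrees between line p1-p2 and line q1-q2."""
    u = np.subtract(p2, p1)
    v = np.subtract(q2, q1)
    cos_a = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

t1_sup = [(0.0, 0.0), (30.0, 8.0)]     # T1 superior end plate (hypothetical)
t12_inf = [(0.0, 0.0), (30.0, -14.0)]  # T12 inferior end plate (hypothetical)
print(f"TK = {angle_between(*t1_sup, *t12_inf):.1f} deg")   # about 40 deg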
Data were reported as mean ± SD and ranges; Pearson's test was used to determine the relations between parameters. Linear correlation was performed to determine the relation between PI and LL. P values less than 0.05 were considered significant. Data analysis was done with SPSS v.20.
The correlation matrix between dependent and independent parameters using Pearson's correlation, together with the related P values, is shown in Table 2.
Thoracic kyphosis was positively related to lumbar lordosis, which means that lumbar lordosis increases as thoracic kyphosis increases. LT had a positive linear relation with TK. Besides, PI was significantly related to LL; this relationship was positive. However, PI showed a significant inverse relation with LT and was not related to TT.
Discussion
Normal ranges of sagittal spinal parameters are indispensable for pre- and intraoperative planning of spinal fusion surgeries [6], to minimize the energy consumed in maintaining balance [24] and to decrease the probability of junctional kyphosis [25]. This becomes especially important when fusion extends to lower segments of the spine [26]. We launched this study on the basis that ethnicity may influence the normal ranges of these parameters. According to Table 1, it is clear that some of these parameters, such as TK and LL, have wide ranges, whereas the tilts and spinopelvic parameters have more limited ranges. So it can be concluded that parameters with a narrow spectrum may be a better tool for predicating normal or abnormal standing posture. Although PI has a wide normal range, it is a constant amount for each person [27]. Mean PI in this study was 45.37 ± 10.7, which is in accordance with the Descamps et al. [28] study, in which the mean age of participants was close to that in our study (13.5 versus 12.6 years old). However, Mac-Thiong et al. [5], who investigated children with a mean age of 12.0 years, reported a normal mean PI of 48.4, which is 3 degrees more than in our population. The normal ranges of LL and TK in our study were 2-67 and 6-73 degrees, respectively. As mentioned before, normal sagittal spinal parameters have been less described in Asian populations in comparison to western populations. Korean children have less LL, SS, and PI than Caucasian children, as Lee et al. [19] reported. Takemitsu et al. [23] evaluated 13- to 16-year-old Japanese boys and girls and reported a mean TK of 41°, which is not compatible with values obtained in Caucasians [29][30][31]. LL, SS, and TK in our study are in accordance with the results of Lee et al. and Takemitsu et al. Table 3 shows the mean LL, TK, and PI in this study and some previous ones. These data demonstrate that in the current study population the mentioned parameters are lower than in Caucasians. These 8 parameters can be categorized into 3 groups, as Berthonnaud et al. [24] and Mac-Thiong et al. [6] have shown: (a) morphologic parameters, including PI; (b) segmental shape parameters, such as LL and TK; and (c) orientation parameters, such as tilt, SS, and SB. Considering the importance of global balance, it seems that using PI or group (b) parameters to determine spinal abnormalities is not suitable enough: first, PI is exclusive to each person and does not change with changes in position or with deformities; second, group (b) parameters have a wide normal range [29,32,33], and it is difficult to determine exactly the normal range of shape parameters. On the other hand, group (c) parameters, which have limited normal values, are closely related to global balance; hence the latter parameters are better for determining spinal abnormalities than the former ones. In other words, as Stagnara et al. [34] suggested, predicating the term "abnormal" of an amount of lordosis or kyphosis observed in any segment of the spine that is not within the aforementioned ranges seems to be false, since there are various values of kyphosis and lordosis in the normal population that ultimately reach proper balance. So it is obvious that segmental elements are less to be counted upon than the overall balance. Beyond the individual sagittal spinal parameters, their relationships with each other are another matter of importance [27]. Pelvic orientation is clearly related to spinal sagittal posture [6]; once lordosis increases, SS is augmented.
Figure 4: Statistically significant correlations between spinopelvic parameters introduced by Berthonnaud et al. [24] and modified by Mac-Thiong et al. [6].
PI is also an important morphologic parameter in this study. It is the summation of 2 position-dependent parameters: SS and PT. In the standing position, pelvic morphology, which is indicated by PI, is the main determinant of spatial orientation [27]. PI = SS + PT, so if PI increases, SS, PT, or both increase as well. Berthonnaud et al. [24] published an algorithm in 2005 which is of great interest (Figure 4). Mac-Thiong et al. [6] found that PI and LL have the most evident clinical relationship, which should be considered in the preoperative planning of spinal surgical operations. We also found a strong positive relationship between PI and LL (r = 0.56, P value < 0.001). Figure 4 confirms this linear relation as well. Other researchers have also emphasized the determinant role of PI in sagittal curves' shapes [12,13,24,32,35,36]. PI plays its role via a significant correlation with SS (r = 0.62, P value < 0.001), as similarly shown in the algorithm, and a tight relationship with LL.
There are some differences between the relations in Figure 4 and the relations obtained from this study. According to Table 2, some of the relations are applicable to the results of the algorithm: the relations between PI and SS, LL and SS (r = 0.57, P value < 0.001), and LL and TK (r = 0.34, P value = 0.001), unlike TT, which was not significantly related to LT and LL. In addition, LT was positively related to TK (r = 0.47, P value < 0.001) and negatively related to SS (r = -0.32, P value = 0.001). In linear correlation, the following equation was obtained: LL = 0.5555 × PI + 10.38.
This equation is relatively similar to Mac-Thiong's equation LL = 0.5919 × PI + 29.461 [27], particularly the PI coefficient. Thus we suggest using PI in the preoperative planning of patients with spinal deformity instead of applying a certain normal value of lordosis or kyphosis. Estimating the expected LL by calculating PI before the operation seems reasonable, especially when taking into account that PI has a linear relation with LL. It should be kept in mind that standard sampling and a large sample size are prerequisites for estimating normal values in any population; so sampling is one of the limitations of this study. In this study we evaluated the sagittal spinal parameters below C7, whereas cervical lordosis, which could influence the global balance of the spine [6], was not studied. The authors are investigating other sagittal and spinopelvic parameters in a larger population, including cervical lordosis, and the results will be published soon.
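As a worked sketch of this preoperative use, the code below applies the regression reported above to estimate the expected LL from a measured PI; the input values are hypothetical, and the geometric identity PI = SS + PT is included as a consistency check.

# Minimal sketch: expected lumbar lordosis from pelvic incidence,
# using the regression fitted in this study. Inputs are hypothetical.
def expected_ll(pi_deg: float) -> float:
    """Fitted line from this study: LL = 0.5555 * PI + 10.38 (degrees)."""
    return 0.5555 * pi_deg + 10.38

pi_measured = 45.0                       # degrees, close to the cohort mean
print(f"Expected LL: {expected_ll(pi_measured):.1f} deg")   # about 35.4 deg

ss, pt = 33.0, 12.0                      # hypothetical sacral slope and pelvic tilt
assert abs((ss + pt) - pi_measured) < 1e-9   # PI = SS + PT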
Conclusion
Preoperative planning for spinal fusion surgeries using PI seems reasonable. Predicating "abnormal" of lordosis and kyphosis values alone, without considering global sagittal balance, is incorrect. The means of SS and TK in our population were slightly lower than those in Caucasians. | 2018-04-03T05:03:33.346Z | 2013-12-09T00:00:00.000 | {
"year": 2013,
"sha1": "48e9fdbbf86d92a5ff60bfb8bba6351821758c4b",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2013/842624.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8cf15aa72aaa4131200fc48711353a0c2eda296f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237302548 | pes2o/s2orc | v3-fos-license | Effect of participating in physical fitness assessment on healthcare costs in Korean adults
This study assessed the effects of participation in physical fitness assessment on healthcare costs by analyzing healthcare costs in those who did and did not participate in the physical fitness assessment for the National Fitness Award project conducted by the Korean government. The National Health Insurance Service database was used to compare healthcare costs in the years before and after participation in the National Fitness Award project for 318 individuals who participated (participation group) and 627 individuals who did not participate (non-participation group). Healthcare costs in the years before and after participation differed between the two groups; this difference was adjusted for in further analysis. The results revealed that the change in inpatient visit days was -0.23±8.16 days in the participation group and 1.53±8.16 days in the non-participation group, being longer in the non-participation group. The change in total healthcare costs was 28.56±272.88 ten-thousand KRW in the participation group and 53.36±275.33 ten-thousand KRW in the non-participation group, showing an annual difference of approximately 240,000 KRW in healthcare costs between the two groups. These findings suggest that participating in physical fitness assessments can have positive effects on participation in physical activities, thereby reducing healthcare costs.
Regular physical activity contributes to the prevention of noncommunicable diseases (NCDs) such as heart disease, type 2 diabetes, and various cancers, and to improvement of quality of life, including relief from depression and anxiety. In fact, the lack of physical activity is the fourth leading cause of death in the world after hypertension, smoking, and hyperglycemia (World Health Organization, 2009). Thus, developed countries are seeking to improve physical fitness to improve the health of the public and reduce healthcare costs (Piercy et al., 2018; Janssen, 2012).
In a study of the relationship between physical activity and healthcare costs, those who engaged in the recommended amount of physical activity were shown to reduce healthcare costs by approximately 1.7 million won ($1,437) per year compared to those who did not engage in the recommended amount (Carlson et al., 2015). Furthermore, those elderly people who practiced walking spent approximately 125,303 won less per year on healthcare costs than those who did not (Go, 2015).
In addition to physical activity, physical fitness is also a strong predictor of disease prevalence and mortality (Blair et al., 2001; Kodama et al., 2009; Myers et al., 2004; Ross et al., 2016), and it is an important individual factor required to perform daily activities. Assessment of physical fitness, which is the basis of all daily life activities, must be conducted in addition to medical examinations to maintain and improve health (Wilder et al., 2006). Assessment of physical fitness can provide objective information for primary care providers to suggest guidelines for the physical activity of patients. The concept of physical fitness generally includes cardiovascular endurance, flexibility, muscle strength, and body composition (US Department of Health and Human Services, 2018). Guidelines based on objective results may be useful to promote health and physical fitness (Purath et al., 2009). As such, physical fitness is an essential factor for health management and affects healthcare costs (Bachmann et al., 2015). Bachmann et al. (2015) reported that high cardiorespiratory fitness in middle-aged individuals is closely related to low healthcare costs, suggesting that cardiorespiratory fitness can help reduce healthcare costs in an aging society.
Physical activities are related to an individual's physical fitness and affect health and healthcare costs caused by diseases (Bachmann et al., 2015;Okunrintemi et al. 2019;Ding et al., 2016). Thus, the Korean government is encouraging the National Fitness Award project to promote public health by promoting sports activities. The National Fitness Award project grants different levels based on physical fitness through relative evaluation, following physical fitness assessment. It provides optimal guidelines on exercises that are suitable for a person's physical fitness level including information on individual fitness compared to the standard fitness required for disease prevention and independent living. Similar systems are implemented in the US and Europe; however, those systems mainly focus on children and adolescents. The National Fitness Award project helps to evaluate the physical fitness of various groups including elementary, middle, and high school students, adults, and the elderly. It provides exercise guidelines and is acknowledged worldwide for its excellence in promoting physical activities.
Previous studies in Korea assessed the relationship of healthcare costs with cardiorespiratory fitness (Bachmann et al., 2015) and the amount of physical activity (Carlson et al., 2015; Go, 2015). However, there is a lack of studies that analyze differences in medical expenses and medical service usage after physical fitness assessment and the provision of customized exercise guidelines. This study evaluates the effects of participation in physical fitness assessment on healthcare costs through an analysis of healthcare costs in those who did and did not participate in the physical fitness assessment (the National Fitness Award project).
Study design
This study compared and analyzed changes in healthcare costs between the year before and the year after participation in the non-participating (control) and participating groups, after adjusting for changes in medical use trends such as changes in the number of participants, in order to understand the effect of participation in the physical fitness evaluation. The study included those who participated in the physical fitness assessment of the National Fitness Award project (and agreed to the use of their personal information) and those who did not participate in the physical fitness assessment, to assess differences in healthcare costs. The participants were matched in a ratio of 1:2 (participating group : non-participating group) using the personal information of the participating group, to compensate for bias that may occur due to factors other than participation in the physical fitness assessment. The variables used for matching and selection of participants in the non-participating group are shown in Table 1.
The National Health Insurance Service database from 2011 to 2016 was used. To assess healthcare utilization in the two groups, the number of inpatient visits, number of outpatient visits, number of prescriptions, inpatient visit days, outpatient treatment days, and prescription days were assessed. Additionally, the total healthcare costs were compared between the two groups.
Participants
A total of 318 participants who underwent physical fitness assessment for the National Fitness Award project and agreed to the use of their personal information were included in the participation group, and 627 individuals who were matched to the participation group on the previous year's characteristics (sex, age, income, etc.) were randomly selected and included in the non-participation group. The distribution of the participation and non-participation groups is shown in Table 2. This study was approved by the Institutional Review Board (IRB) of the Korean Institute of Sports Science.
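For illustration, the 1:2 matching step could be implemented as exact matching on baseline strata. The sketch below (Python) uses hypothetical data and matching variables (sex, age group, income level), not the actual National Health Insurance Service records.

# Minimal sketch of 1:2 exact matching on hypothetical baseline strata.
import pandas as pd

part = pd.DataFrame({"pid": [1, 2], "sex": ["M", "F"],
                     "age10": [30, 40], "income": [3, 5]})
pool = pd.DataFrame({"pid": list(range(100, 112)),
                     "sex": ["M", "F"] * 6,
                     "age10": [30, 40] * 6,
                     "income": [3, 5] * 6})

controls = []
for _, row in part.iterrows():
    cand = pool[(pool.sex == row.sex) & (pool.age10 == row.age10) &
                (pool.income == row.income)]
    picked = cand.sample(n=min(2, len(cand)), random_state=0)  # 1:2 ratio
    controls.append(picked)
    pool = pool.drop(picked.index)        # sample without replacement
controls = pd.concat(controls)
print(len(controls))                      # 4 matched controls here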
Statistical analysis
All results obtained in this study were analyzed using the SAS 9.4 program (SAS Institute, Cary, NC, USA), and the detailed statistical analysis methods are as follows. Frequency analysis and descriptive statistics were conducted to compare the number of cases and demographic characteristics per year of participation between the participation and non-participation groups. The Wilcoxon signed rank test was conducted to compare healthcare utilization and healthcare costs in the year before and the year after participation within each of the two groups. Analysis of covariance (ANCOVA) was conducted using the value of the year before participation as a covariate for comparison between the two groups. A p-value of less than 0.05 was considered statistically significant.
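As a sketch of this two-step analysis, the code below runs within-group Wilcoxon signed rank tests and a baseline-adjusted ANCOVA on simulated data, not the actual claims records; numpy, pandas, scipy, and statsmodels are assumed to be available.

# Minimal sketch of the analysis pipeline on simulated data.
import numpy as np
import pandas as pd
from scipy.stats import wilcoxon
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_part, n_ctrl = 318, 627
df = pd.DataFrame({
    "group": ["part"] * n_part + ["ctrl"] * n_ctrl,
    "cost_before": rng.gamma(2.0, 80.0, n_part + n_ctrl),  # ten-thousand KRW
})
# Simulate a smaller cost increase for participants.
bump = np.where(df["group"] == "part", 25.0, 50.0)
df["cost_after"] = df["cost_before"] + bump + rng.normal(0.0, 40.0, len(df))

# Within-group before/after comparison (Wilcoxon signed rank test).
for g, sub in df.groupby("group"):
    stat, p = wilcoxon(sub["cost_before"], sub["cost_after"])
    print(g, f"p = {p:.4f}")

# Between-group comparison adjusted for the baseline year (ANCOVA).
fit = smf.ols("cost_after ~ C(group) + cost_before", data=df).fit()
print(fit.params)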
Results
Comparison of the healthcare utilization and healthcare costs before and after participation in physical fitness assessment

Table 3 shows the results of the Wilcoxon signed rank test that compared healthcare utilization and costs in the year before and the year after participation for the participation and non-participation groups.
In the participation group, the number of prescriptions changed significantly from 16.7 ± 15.89 in the year before participation to 16.53 ± 14.02 in the year after participation (p=0.007), and the prescription days increased from 270.31 ± 292.85 days in the year before participation to 295.1 ± 303.69 days in the year after participation (p=0.023). In addition, the total healthcare costs changed from 173.93 ± 240.20 ten-thousand KRW in the year before participation to 162.60 ± 204.20 ten-thousand KRW in the year after participation.
Comparison of the healthcare utilization and healthcare costs between groups
Covariate analysis (ANCOVA) was conducted after adjusting for the value of the year before participation to compare differences in healthcare utilization and costs between the participation and non-participation groups. The results are summarized in Table 4. The change in visit days (inpatient) was -0.23 ± 8.16 and 1.53 ± 15.13 days in the participation and non-participation groups, respectively, a significant difference between the two groups (p=0.024). The change in healthcare costs was 28.56 ± 272.88 ten-thousand KRW in the participation group and 53.36 ± 275.33 ten-thousand KRW in the non-participation group, also significantly different between the two groups (p=0.038). These results suggest that total visit days and total healthcare costs increased significantly less in the participation group than in the non-participation group.
Discussion
Participation in physical activity is a key factor in the prevention of disease and in health promotion. As national medical expenses have increased with the aging of the population, the social role of physical activity is increasingly emphasized. Thus, this study objectively assessed the effects of the National Fitness Award project on healthcare utilization and costs in those who did and did not participate in physical fitness assessment.
Comparison of healthcare utilization and costs showed that the number of prescriptions, prescription days, total healthcare costs, and outpatient and prescription costs increased in both groups. This finding may be attributed to total healthcare cost coverage, which changes each year according to changes in health and medical policies. To correct for such bias, the difference-in-differences analysis method was used to compare the two groups. As shown in Figure 1, the change in visit days (inpatient) was greater in the non-participation group, at 1.53±15.13 days, than in the participation group, at -0.23±8.16 days. The change in the total healthcare cost was 28.56±272.88 and 53.36±275.33 ten-thousand KRW in the participation and non-participation groups, respectively, showing an annual difference of approximately 240,000 KRW in healthcare costs between the two groups. Participants in both groups were relatively healthy in the year of participation in physical fitness assessment, and in the year following participation, there was a significant difference only in the visit days, which reflect relatively significant health events. This is thought to have caused the differences in total healthcare costs. These findings suggest that physical activity promotion projects such as physical fitness assessment and the provision of customized exercise guidelines can affect individual health management behaviours, thereby reducing medical expenses. This finding is significant as it was obtained from quantitative analysis based on real-life medical expenses of participants rather than from societal value creation effects estimated using a social value approach. This study compared the healthcare costs of those who participated in physical fitness assessment (National Fitness Award project) and those who did not participate and had similar characteristics; thus, our study could not be directly compared to previous studies. However, our finding is consistent with a previous study that reported that participation in physical activities was related to a reduction in healthcare costs (Janssen, 2012). Janssen (2012) compared adults in Canada who did and did not participate in physical activities and provided an estimate of healthcare costs. Direct, indirect, and total healthcare costs in those who did not participate in physical activities were $2.4 billion, $4.3 billion, and $6.8 billion, respectively, which accounted for 3.8%, 3.6%, and 3.7% of total healthcare costs. Carlson et al. (2015) compared the healthcare costs of 51,165 Americans by linking the amount of physical activity and the cost of medical use. The participants were divided into three groups: those who engaged in the recommended amount of physical activity; those who did not meet the recommended amount; and those who did not engage in physical activity at all. In that study, the difference in annual medical cost between those who did and did not participate at all, and between those who engaged in and those who did not meet the recommended amount, was $1,437 (approximately 1.7 million KRW) and $713 (approximately 850,000 KRW), respectively. Engaging in the recommended amount of physical activity was shown to reduce healthcare costs, which is supported by our findings.
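The difference-in-differences correction described above can be written compactly as a regression with a group-by-period interaction, whose coefficient is the DiD estimate. The sketch below is a rough analogue on simulated panel data: the group sizes echo the study, but the costs and the effect size are invented.

```python
# Minimal difference-in-differences sketch on simulated panel data; the group
# sizes echo the study, but costs and the treatment effect are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
treated = rng.choice([1, 0], 945, p=[318 / 945, 627 / 945])  # 1 = participation
rows = []
for g in treated:
    base = rng.normal(190, 60)                        # year-before cost
    follow = base + 50 - 24 * g + rng.normal(0, 30)   # smaller rise if participating
    rows.append((g, 0, base))
    rows.append((g, 1, follow))
panel = pd.DataFrame(rows, columns=["treated", "post", "cost"])

# The DiD estimator is the coefficient on the treated:post interaction
did = smf.ols("cost ~ treated * post", data=panel).fit()
print(did.params["treated:post"])  # should land near the simulated -24
```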
Physical activity reduces healthcare costs not only in healthy people but also in those with chronic diseases. Bae et al. (2011) reported that regular exercise can save 90,000 KRW in healthcare costs per hypertension patient in the public sector, which is equivalent to 480 billion KRW nationwide in Korea. Okunrintemi et al. (2019) analyzed trends related to physical activity, sociodemographic factors, and healthcare costs in US women with cardiovascular disease (CVD) using data from the Medical Expenditure Panel Survey from 2006-2015. Women with CVD who did not engage in physical activity showed higher healthcare costs compared to those who engaged in the recommended amount of physical activity, suggesting that efforts must be made to increase the amount of physical activity. As such, physical activity is effective in reducing medical expenses for chronic diseases such as hypertension and CVD.
In addition to physical activity, physical fitness also affects healthcare costs (Bachmann et al., 2015). The study by Bachmann et al. analyzed the relationship between cardiorespiratory fitness and healthcare costs in 19,751 Americans at the age of 49 who underwent a cardiorespiratory fitness test and were covered by Medicare insurance from 1999 to 2009. They found that each additional metabolic equivalent (MET) of fitness was associated with 6.8% and 6.7% lower healthcare costs in men and women, respectively (Bachmann et al., 2015). This suggests that high cardiorespiratory fitness in middle-aged people is closely related to low healthcare costs and may help to reduce healthcare costs in an aging society.
In a previous study of the elderly, Go (2015) analyzed the reduction in healthcare costs according to the participation of the elderly in exercise. Data from 54,186 elderly people over the age of 65 were analyzed using the health insurance cohort DB from 2001-2010. The results indicated that walking once a week for 30 minutes reduced annual healthcare costs by 125,303 KRW. In addition, walking more than three days a week reduced medical expenses more than walking one to two days a week. Son et al. (2015) evaluated the effects of frailty on healthcare costs in the elderly in Korea. Frailty was assessed by measuring weight loss, fatigue, grip strength, gait speed, and a low amount of physical activity. The physical activity and physical fitness (grip strength) of the elderly were major factors in frailty, and the average monthly medical cost for healthy, pre-frail, and frail elderly people was 67,100, 73,800, and 88,900 KRW, respectively, suggesting that frailty increased the average monthly deductible medical expense. These findings suggest that, in general, physical activity, participation in exercise, and physical fitness affect healthcare costs. Our findings support these studies on the differences in total medical cost according to participation in physical fitness assessment.
In this study, healthcare costs were calculated as health insurance claim costs; expenses not covered by health insurance were excluded. Thus, the true reduction in healthcare costs from participating in physical fitness assessment may be even larger. The National Fitness Award project can track changes in medical care usage before and after the project through panel research or the establishment of a cohort, and the effects of other policy changes can be evaluated simultaneously. Moreover, by linking the results with health insurance claim data (a secondary data source), the study can be expanded for use in policy while reducing the possibility of loss to follow-up.
This study is the first to show that physical fitness assessment and the provision of customized exercise guidelines, in addition to continuous physical activity or fitness level, may reduce healthcare costs. Participation in physical fitness assessment is a necessary step toward regular physical activity and increased physical fitness, and can also motivate continuous participation in physical activities. Assessment of physical fitness and customized exercise guidelines are the first steps required for behavioural change in those who do not participate in physical activities. Our findings suggest that experiential personal physical fitness assessment and customized exercise guidance, as distinct from educational materials on the importance and recommended amount of physical activity, can reduce medical expenses for the public.
Conclusions
To ascertain an objective relationship between participation in physical fitness assessment and healthcare costs, the National Health Insurance Service database from 2011-2016 was used to evaluate healthcare utilization and costs in the year before and after participation in two groups: those who did and did not participate in the National Fitness Award project. The visit days were longer in the non-participation group than in the participation group, and the total healthcare costs were also approximately 240,000 KRW higher in the non-participating group. Future studies should investigate the characteristics (income quantile and economic level) of sports welfare project participants and assess the relationship with a reduction in medical expenses as data for determining resource allocation priorities.
"year": 2021,
"sha1": "676a3868cf1807aa4a8de032a38267c29aaf093a",
"oa_license": null,
"oa_url": "https://ijass.sports.re.kr/upload/pdf/IJASS_2021_v33n1_98.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "792620be6e7c034f6a5d28a64c42a73960bb86e0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Exploring the Causal Effect of Constipation on Parkinson’s Disease Through Mediation Analysis of Microbial Data
Background and Aims Parkinson’s disease (PD) is a worldwide neurodegenerative disease with an increasing global burden, while constipation is an important risk factor for PD. The gastrointestinal tract had been proposed as the origin of PD in Braak’s gut–brain axis hypothesis, and there is increasing evidence indicating that intestinal microbial alteration has a role in the pathogenesis of PD. In this study, we aim to investigate the role of intestinal microbial alteration in the mechanism of constipation-related PD. Methods We adapted our data from Hill‐Burns et al., in which 324 participants were enrolled in the study. The 16S rRNA gene sequence data were processed, aligned, and categorized using DADA2. Mediation analysis was used to test and quantify the extent to which the intestinal microbial alteration explains the causal effect of constipation on PD incidence. Results We found 18 bacterial genera and 7 species significantly different between groups of constipated and non-constipated subjects. Among these bacteria, nine genera and four species had a significant mediation effect between constipation and PD. All of them were short-chain fatty acid (SCFA)-producing bacteria that were substantially related to PD. Results from the mediation analysis showed that up to 76.56% of the effect of constipation on PD was mediated through intestinal microbial alteration. Conclusion Our findings support that gut dysbiosis plays a critical role in the pathogenesis of constipation-related PD, mostly through the decrease of SCFA-producing bacteria, indicating that probiotics with SCFA-producing bacteria may be promising in the prevention and treatment of constipation-related PD. Limitations 1) Several potential confounders that should be adjusted for were not provided in the original dataset. 2) Our study was conducted based on the assumption of constipation being the etiology of PD; however, constipation and PD may mutually affect each other. 3) Further studies are necessary to explain the remaining 23.44% of the effect of constipation on PD.
INTRODUCTION
Parkinson's disease (PD) is a neurodegenerative disease manifested as both motor (such as tremor, bradykinesia, rigidity, and postural instability) and non-motor (including constipation, rapid eye movement sleep behavior disorder, and depression) symptoms (Rai et al., 2021). The worldwide prevalence was 0.3% in the general population aged 40 years and older based on a meta-analysis of 47 studies (Pringsheim et al., 2014). An increasing trend of age-adjusted mortality in PD has been found (Rong et al., 2021). Risk factors for PD include age, sex, excess body weight, family history of PD, constipation, and so on (Stirpe et al., 2016). Constipation can be observed in PD patients as early as 20 years before the onset of motor symptoms (Savica et al., 2009). A recent retrospective cohort study based on the Taiwan National Health Insurance Research Database, consisting of 551,324 participants free of PD, found that the adjusted hazard ratio for developing PD was 3.28 (95% CI 2.14 to 5.03), 3.83 (95% CI 2.51 to 5.84), and 4.22 (95% CI 2.95 to 6.05) for individuals in different constipation severity categories (Lin et al., 2014). Another meta-analysis, with a combined sample size of 741,593 participants, found that those with constipation had a pooled odds ratio of 2.27 (95% CI 2.09 to 2.46) for developing subsequent PD when compared with those without constipation. In addition, constipation occurring more than 10 years prior to PD had a pooled odds ratio of 2.13 (95% CI 1.78 to 2.56; I² = 0.0%) (Adams-Carr et al., 2016).
Although evidence has shown a strong correlation between constipation and PD, the detailed mechanism of constipation-related PD is still unclear. One of the possible pathogenic mechanisms could be through the alteration of the gut microbiome. Gut microbiota is a complex ecological community composed of trillions of microbes. It is able to influence both normal physiology and disease susceptibilities through bacterial metabolic activities and host interactions (Lozupone et al., 2012). At least 17 studies have reported dysbiosis of intestinal microbiota in PD patients (Keshavarzian et al., 2020). A 2-year follow-up study of 36 PD patients found that the counts of Bifidobacterium, Bacteroides fragilis, and Clostridium leptum were associated with PD severity (Minato et al., 2017). Several mechanisms by which intestinal dysbiosis could cause PD have been delineated in a recent review article, such as increased permeability of the intestinal barrier and the blood-brain barrier, increased inflammation and oxidative stress, changes in dopamine production, and molecular mimicry (Huang et al., 2021). Interestingly, intestinal dysbiosis can develop in patients with chronic constipation (Ohkusa et al., 2019; Zhang et al., 2021), and this type of constipation-related intestinal dysbiosis could be cured by bisacodyl treatment, in which patients' bowels were emptied by taking laxatives (Khalif et al., 2005). A mouse study has also shown that constipation was able to induce dysbiosis of the gut microbiota, which further exacerbated experimental autoimmune encephalomyelitis (Lin et al., 2021). Although evidence of the association between constipation, gut dysbiosis, and PD has emerged, no studies have conducted an integral analysis to disentangle the role of intestinal microbial alteration in the mechanism between constipation and PD.
In this study, we hypothesized that constipation can cause PD via inducing intestinal dysbiosis. Mediation analysis was conducted to test and quantify the extent to which the intestinal microbial alteration explains the causal effect of constipation on PD incidence.
Participant Recruitment and Data Collection
We adapted our data from the study of Hill-Burns et al. (Hill-Burns et al., 2017), in which 330 participants (185 men, 145 women, mean age 69.2) were enrolled from the NeuroGenetics Research Consortium during March 2014 to January 2015. The methods and the clinical and genetic characteristics of the NeuroGenetics Research Consortium dataset were described in detail in Hamza et al. (2010). Among the 330 participants, 199 (133 men, 66 women, mean age 68.4) were diagnosed with PD by the modified UK Brain Bank criteria. The remaining 131 controls (52 men, 79 women, mean age 70.4) self-reported being free of neurodegenerative disease. Constipation symptoms were assessed by the Gut Microbiome Questionnaire, and six participants were excluded from this study due to missing information about constipation status. The remaining 324 were included in our final data analysis. Details of the fecal sample collection process, DNA extraction and sequencing, and metadata collection can be found in Hill-Burns et al. (2017).
Data Availability and Ethical Statement
Sequences analyzed in this study are accessible at the European Nucleotide Archive (ENA) under the accession number ERP016332. All data are open access and de-identified. No ethical approval is required.
Processing of 16S rRNA Sequence Data
The 16S rRNA gene is highly conserved in bacteria and is therefore well suited as a target gene for DNA sequencing for bacterial identification. The sequence reads were processed with Trimmomatic v0.39 (Bolger et al., 2014) to remove adaptors. The outputs were then processed, aligned, and categorized using DADA2 1.16 (Callahan et al., 2016). In brief, sequence reads were first filtered using DADA2's recommended parameters. Filtered reads were then de-replicated and de-noised using DADA2 default parameters. After building the amplicon sequence variant (ASV) table and removing chimeras, taxonomy was assigned using SILVA v132 natively implemented in DADA2. We used the addSpecies function in DADA2 to add species-level annotation with SILVA as reference. Sequence counts were normalized to relative abundances (calculated by dividing the number of sequences assigned to a unique ASV by the total sequence count in the sample). Bacteria present in more than 10% of samples were used in later analyses. Regarding functional enrichment analysis, we used Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt2) version 2.4.1 (Douglas et al., 2020) to infer metagenome composition in the samples, following the recommended pipeline: ASVs were normalized by copy number (to account for differences in the number of copies of 16S rRNA between taxa), functions were predicted using Kyoto Encyclopedia of Genes and Genomes (KEGG) (Kanehisa and Goto, 2000) orthologs, and predicted pathways were grouped by KEGG hierarchical level 3.
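DADA2 and PICRUSt2 are R/bioinformatics tools; the small Python sketch below only mirrors the post-processing arithmetic described above (relative-abundance normalization and the 10% prevalence filter) on a made-up ASV count table, not the actual pipeline.

```python
# Sketch of relative-abundance normalization and prevalence filtering on a
# hypothetical ASV count table (rows = samples, columns = ASVs); sizes invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
counts = pd.DataFrame(
    rng.poisson(0.3, size=(324, 200)),               # 324 samples, 200 ASVs
    columns=[f"ASV{i}" for i in range(200)],
)

# Relative abundance: each ASV count divided by its sample's total count
rel = counts.div(counts.sum(axis=1), axis=0)

# Keep only taxa observed (non-zero) in more than 10% of samples
prevalence = (counts > 0).mean(axis=0)
rel_filtered = rel.loc[:, prevalence > 0.10]
print(rel_filtered.shape)
```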
Statistical Analyses
We compared several demographic characteristics (including age, sex, race, BMI, residence location, alcohol consumption amount, smoking status, diet habits, and other neurological problems) between the constipated and non-constipated groups using the Wilcoxon test for continuous variables and the chi-square test for categorical variables. Analysis of the microbial difference between the two groups was performed using the Wilcoxon test. We also compared the overall taxonomic diversity between the groups by calculating alpha and beta diversity values that incorporate both species richness and evenness. Regarding alpha diversity, we estimated the observed richness (i.e., number of ASVs) and the Chao1, Shannon, and Simpson indices from the ASV table (Chao, 1984; Magurran, 1988; Rosenzweig, 1995) using phyloseq 1.32.0 (McMurdie and Holmes, 2013). p-values for alpha diversity were calculated with ANOVA using stats 4.0.5. Regarding beta diversity, we estimated the dissimilarities (distances) between the two groups using the following metrics to ensure that the choice of metric did not affect the results: unweighted unique fraction metric (UniFrac), weighted UniFrac (Lozupone et al., 2011), and Canberra distance (Lance and Williams, 1967). Beta diversity indices for weighted and unweighted UniFrac were calculated with phyloseq 1.32.0 (McMurdie and Holmes, 2013). Canberra distance was calculated with vegan 2.5.7. p-values for beta diversity were calculated with ADONIS using vegan 2.5.7.
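As one concrete illustration of the diversity metrics named above, the sketch below computes observed richness, Shannon and Simpson alpha diversity, and Canberra beta diversity with numpy/scipy. The paper used the R packages phyloseq and vegan; UniFrac needs a phylogenetic tree and is omitted, and all counts here are simulated.

```python
# Hedged numpy/scipy sketch of the named metrics (the paper used phyloseq and
# vegan in R). Simpson here is the Gini-Simpson form, 1 - sum(p^2); UniFrac
# requires a phylogenetic tree and is not shown.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(7)
counts = rng.poisson(4, size=(10, 50)).astype(float)   # 10 samples x 50 taxa

def shannon(x):
    p = x[x > 0] / x.sum()
    return -np.sum(p * np.log(p))

def simpson(x):
    p = x / x.sum()
    return 1.0 - np.sum(p ** 2)

observed = (counts > 0).sum(axis=1)                    # ASV richness per sample
alpha_shannon = np.apply_along_axis(shannon, 1, counts)
alpha_simpson = np.apply_along_axis(simpson, 1, counts)

# Beta diversity: pairwise Canberra distances between samples
canberra = squareform(pdist(counts, metric="canberra"))
print(observed[:3], alpha_shannon[:3].round(2), canberra.shape)
```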
The mechanism by which constipation causes PD is assumed to be mediated by microbiome alterations. Mediation analysis was conducted to quantify the extent to which the intestinal microbial alteration explained the causal effect of constipation on PD. Constipation status is the exposure variable, PD status is the outcome of interest, and intestinal microbial alteration is the mediator. Sex, age, and the amount of fruits and vegetables taken were adjusted for in the mediation analysis. The details of the mediation analysis are shown in the Appendix. All statistical analyses were performed with R version 3.6.0.
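A simplified analogue of this mediation analysis, using the product-of-coefficients approach with a bootstrap interval, is sketched below in Python. It assumes a binary exposure and outcome and a single continuous mediator on simulated data; the authors' exact models (detailed in their Appendix) are not reproduced.

```python
# Simplified product-of-coefficients mediation sketch: binary exposure
# (constipation) and outcome (PD), one continuous mediator (abundance of an
# SCFA-producing taxon), with covariate adjustment and a bootstrap CI.
# Everything here is simulated; it is not the authors' exact model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 324
d = pd.DataFrame({"constip": rng.binomial(1, 0.1, n),
                  "age": rng.normal(69, 8, n)})
d["scfa"] = 1.0 - 0.5 * d["constip"] + rng.normal(0, 0.3, n)
lin = -1.0 + 1.2 * d["constip"] - 1.5 * d["scfa"]
d["pd_dx"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

def effects(df):
    # a: exposure -> mediator; b: mediator -> outcome; direct: exposure -> outcome
    a = smf.ols("scfa ~ constip + age", data=df).fit().params["constip"]
    out = smf.logit("pd_dx ~ constip + scfa + age", data=df).fit(disp=0)
    return a * out.params["scfa"], out.params["constip"]   # indirect, direct

ab, direct = effects(d)
print("proportion mediated ~", round(ab / (ab + direct), 3))

boots = [effects(d.sample(n, replace=True))[0] for _ in range(200)]
print("bootstrap 95% CI for a*b:", np.percentile(boots, [2.5, 97.5]).round(3))
```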
RESULTS
We compared the general characteristics between the constipation (N = 31) and non-constipation (N = 293) groups, and the results are summarized in Table 1. There were no significant differences in age, sex, race, height, weight, BMI, geographic area (latitude, longitude, location), alcohol and coffee consumption amount, smoking, diet habits (eating fruits or vegetables daily, eating grains daily, eating meat daily, eating nuts daily, eating yogurt daily), or the presence of neurological problems other than PD. The presence of PD was significantly higher in the constipated group (93.5% vs. 57% in the non-constipated group; p-value < 0.001).
The p-values for the comparison of microbial differences in relative abundance between the constipated group and the non-constipated group are shown in Figure 1, at the genus (Figure 1A) and species (Figure 1B) levels. Eighteen genera and seven species were significantly different between the groups. The results for overall taxonomic alpha and beta diversity between the groups are shown in Figure 2. Regarding alpha diversity, none of the four metrics (observed richness, Chao1 index, Shannon index, Simpson index) were significantly different between the groups (p Observed = 0.6258; p Chao1 = 0.5654; p Shannon = 0.5335; p Simpson = 0.7536) (Figure 2A). In contrast, all three metrics of beta diversity (unweighted UniFrac, weighted UniFrac, Canberra distance) showed significant changes in the community structure between the two groups (p Unweighted-unifrac = 0.0001; p Weighted-unifrac = 0.0003; p Canberra = 0.0001) (Figure 2B).
DISCUSSION
Since Braak first proposed the gut-brain axis hypothesis in 2003, the GI system has been hypothesized as the origin of Parkinson's disease. Through a series of elegant neuropathological studies of postmortem PD patients, Braak and his colleagues proposed a hypothesis in which toxins or pathogens enter the host through the GI tract, causing inflammation and aggregation of alpha-synuclein protein in the enteric nervous system; this aggregated alpha-synuclein protein moves up to the central nervous system via the vagus nerve, resulting in the degeneration of dopaminergic neurons in the substantia nigra (Hawkes et al., 2007; Keshavarzian et al., 2020). This hypothesis also explains why PD patients develop gastrointestinal symptoms, such as constipation, prior to the onset of the cardinal motor and central nervous system symptoms. Despite the evidence that constipation-related PD is microbially associated, no studies have provided direct evidence of a causal relationship. To the best of our knowledge, this is the first study to examine the mechanism of constipation causing PD mediated by microbiota using mediation analysis. We also found that 18 genera, 7 species and beta diversity were significantly different between the constipated and non-constipated groups, indicating that constipation was associated with intestinal dysbiosis, corresponding to the findings of previous studies (Khalif et al., 2005; Ohkusa et al., 2019; Zhang et al., 2021). It was estimated that 76.56% of the effect of constipation-related PD was mediated by microbiota at the genus level, providing evidence that strengthens the causal effect of constipation on PD.
Our results indicate that probiotics prescription could potentially prevent around 76.56% of the incidence of constipation-related PD. Bacteria significantly related to the mediation mechanism in this study had all been found to be strongly related to PD in previous studies (Hill-Burns et al., 2017; Li et al., 2017; Petrov et al., 2017; Lin et al., 2018; Aho et al., 2019; Li et al., 2019; Romano et al., 2021). They were also a subset of the constipation-related bacteria shown in Figure 1.
Based on the results of a correlation network analysis in a previous study, PD-associated bacterial genera can be mapped to three polymicrobial clusters: opportunistic pathogens, short-chain fatty acid (SCFA)-producing bacteria, and carbohydrate-metabolizing probiotics (Wallen et al., 2020). The bacteria found in our study with a significant mediation effect of constipation on PD all fall into the SCFA-producing category (Anaerostipes, Fusicatenibacter, Coprococcus, Dorea, Blautia, Faecalibacterium, Ruminococcaceae_UCG_013, Lachnospiraceae_ND3007_group, and Lachnospiraceae_UCG_004). SCFAs, including acetate, propionate, and butyrate, are the major products of microbial fermentative activity in the gut. A low level of SCFAs, especially butyrate, had been associated with increased intestinal permeability, resulting in a leaky gut (Dalile et al., 2019). SCFAs also enhance the integrity of the blood-brain barrier through regulating the maturation of microglia by enabling the microglial expression of SCFA-responsive genes such as histone deacetylase (Dalile et al., 2019). Reduced levels of SCFA-producing bacteria in PD patients had been confirmed in many studies and also in other inflammatory diseases such as IBD, alcohol-associated pathology, and metabolic syndrome (Koh et al., 2016). Our study demonstrated that constipation can cause PD through reducing SCFA-producing bacteria, which further increases intestinal permeability, leads to endotoxin or exotoxin penetration, and induces subsequent pathological changes in PD.

FIGURE 1 | The overall difference of gut microbiota between the constipated group and the non-constipated group. A negligible number, 2×10⁻⁶, was added to the abundance of every amplicon sequence variant (ASV) to avoid the magnitude of the fold change being infinitely large or small; this number was generated as one hundredth of the minimal relative abundance. Each point represents an ASV, with its fold change in relative abundance (log2 of constipated group/non-constipated group) on the x-axis and the statistical significance (−log10 of the p-value) on the y-axis. The dashed red line shows where p = 0.05, with points above the line having p < 0.05 and points below having p > 0.05. Significant ASVs are colored by genus (A) and species (B). Points outside the solid lines are ASVs with a mean abundance of 0 in either the non-constipation group (left) or the constipation group (right).
Akkermansia, Lactobacillus, and Bifidobacterium are anti-inflammatory, carbohydrate-metabolizing probiotics that have been found to be increased in PD patients (Hill-Burns et al., 2017; Petrov et al., 2017; Aho et al., 2019; Barichella et al., 2019; Romano et al., 2021). However, we did not find a significant mediation effect for these bacteria, which may indicate that the increase in carbohydrate-metabolizing probiotics in PD patients does not have a causal role in the pathogenesis of PD but is more likely a compensatory change to overcome intestinal dysbiosis.
Alpha-lipoic acid (ALA) is a naturally occurring enzyme cofactor with antioxidant properties and has known neuroprotective effects on PD (Spalding and Prigge, 2010). An animal study showed that ALA can decrease intracellular levels of reactive oxygen species, promote the survival of dopaminergic neurons, and improve motor deficits of a PD animal model (Tai et al., 2020). Our study found a significant mediation effect of lipoic acid metabolism, indicating that lipoic acid metabolism may have a causal role in the pathogenesis of PD.
Three limitations are worth noting in this study. First, causal effects can be unbiasedly estimated by regression coefficients only when all potential baseline confounders are adjusted for. However, several potential confounders, such as the frequency of regular physical activity (Mika et al., 2015; Monda et al., 2017) and medicinal plant intake (e.g., Mucuna pruriens), which may relieve constipation and thereby ameliorate intestinal dysbiosis, were not provided in the original dataset. Second, constipation, intestinal microbial alteration, and PD may mutually affect each other. Our study was conducted under the assumption that constipation is the etiology of PD, but PD could also reversely accelerate the severity of constipation. A similar bidirectional causal relation applies to PD and microbial alteration, in which alpha-synuclein protein aggregated in the central nervous system could move down to the intestinal system and further induce microbial alteration; this microbial alteration could also lead to constipation. The bidirectional causality stated above is termed a "time-varying" issue in the literature (Lin et al., 2017; VanderWeele and Tchetgen Tchetgen, 2017; Hernán and Robins, 2018). A sophisticated model should be adopted if the microbiome and constipation status are measured repeatedly in longitudinal follow-up studies. For those types of studies, PD medications such as entacapone, which potentially lead to constipation (Fu et al., 2022), should be considered candidate mediators or time-varying confounders. A comparison of microbial alteration between the constipation and non-constipation groups among PD patients would also be necessary. Corresponding animal studies providing probiotics to validate the aforementioned mechanisms are required as well. Finally, our results explain 76.56% of the mechanism of constipation-related PD, while the remaining 23.44% is still unclear. Exotoxin aggregation could possibly be involved in the unknown mechanism. Further studies are necessary to confirm the remaining factors leading to PD by constipation.
CONCLUSION
Our findings support that gut dysbiosis plays a critical role in the pathogenesis of constipation-related PD, mostly through the decreasing of SCFA-producing bacteria, indicating that probiotics with SCFA-producing bacteria may be promising in the prevention and treatment of constipation-related PD.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
AUTHOR CONTRIBUTIONS
S-CF came up with the original idea. S-CF, Y-CH, and P-HW set up and performed the bioinformatics procedures. C-HL, Y-CH, and P-HW conducted the data analysis. L-CS wrote the first version of the manuscript. S-CF, S-HL, and HW contributed to the paper. All authors approved the final version of this article.
"year": 2022,
"sha1": "2288c312d16cb399359a625d3f68b61629611644",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "2288c312d16cb399359a625d3f68b61629611644",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A survey on the impacts of brand extension strategy on consumers' attitude for new products development
Article history: Received March 25, 2012; Received in revised format 25 September 2012; Accepted 7 October 2012; Available online October 1
Introduction
One of the most important methods for expanding businesses is to introduce a new product under a well-known brand. When people have good memories of a good brand and its older products, they may carry their mental impression over to the newly introduced product. In such a case, a new product can penetrate the market very easily and people may accept it along with older ones (Aaker, 1990). There is much evidence to suggest that marketing planning must be accomplished carefully, since the expenses of such planning are significant (Aaker, 1991). A good marketing plan may result in an immediate increase in sales figures, while an unsuitable one could have undesirable consequences (Aaker & Keller, 1990; Andrews & Kim, 2007; Völckner & Sattler, 2007). Martinez et al. (2009) surveyed the impact of market extension on new product development.
During the past few years, there have been tremendous efforts to study the effect of introducing new products on people's mental reactions. Market expansion can save marketing expenditures and increase sales significantly. Barone (2005) investigated the interactive impacts of mood and involvement on evaluations of product extensions that differ in their similarity to the core brand. The results from an initial experiment demonstrated that, under conditions of high involvement, participants' mood impacted their evaluations of extensions that were moderately similar to the core brand, but did not impact evaluations of either very similar or dissimilar extensions. Barone also reported that under low-involvement circumstances the impact of positive mood was independent of core brand-extension similarity. Barone (2005) also showed that mood's impact on extension evaluations may further depend on the measurement procedures used to elicit product appraisals. The study focused on important contingencies associated with mood's impact on extension evaluations and, as such, provided some insights into consumers' appraisals of brand extensions. According to Brodie et al. (2009), there is a direct impact of all aspects of the brand on customers' perceptions of value, and brand image, company image and employee trust have a mediated impact on customer value through customers' perceptions of service quality. A service brand may not have a direct impact on customer loyalty; rather, its impact is mediated through customer value. Fransen et al. (2008) investigated whether brands can automatically activate mortality-related thoughts and, in turn, influence consumer behavior. They reported that explicit exposure to an insurance brand could increase the accessibility of death-related thoughts, which, in turn, increases personal spending intentions. They also demonstrated that insurance brand exposure positively influences charity donations. Milberg and Sinn (2008) investigated the effect of competitor brand familiarity on the quality perceptions of global brands in Chile when the brand extends into new product developments. The results showed that there was a negative influence on the quality perceptions of brand extensions when an extension competed with well-known and well-liked competitor brands. However, brand extension quality beliefs appeared to produce negative feedback effects on parent brand quality beliefs only for narrowly extended parent brands, not for broadly extended ones. Olavarrieta et al.
(2008) investigated the importance of target market influences on the evaluation of both brand extension strategies. Their findings supported the idea that derived brand names leverage parent brand evaluations and protect the parent brand from extension failures. Salinas and Pina Pérez (2009) analyzed how brand-extension evaluation could impact the current brand image and proposed a theoretical model formed by five main factors associated with brand associations, extension congruency, and extension attitude. The model estimation included structural equation analysis, and the results verified that extension attitude affects brand image, whereas initial brand associations and perceived fit between the new product and the remaining products were able to strengthen consumer attitude. The study also examined the impact of consumer innovativeness as a moderating factor, suggesting that the characteristics of consumer personality could play an essential role. Ruyter and Wetzels (2000) investigated the impact of corporate image in extending service brands to new and traditional markets in the telecommunications sector. Based on corporate image, service brand extensions were primarily related to innovation-related attributes, such as order of entry. Increasingly, firms were extending their services to markets beyond those in which they had traditionally been active.
The proposed model
The proposed model of this paper considers the following two hypotheses:

1. There is a relationship in quality perception between an existing brand and a hypothetical new product.

2. There is a relationship between consumers' perception of an existing brand's products and their perception of the effects on a hypothetical new product.
The proposed study of this paper considers four different brands, which produce various products. Table 1 shows the characteristics of these products. The questionnaire of this survey consists of four sections: personal characteristics, different perceptions of various brands, the proposed hypotheses in Likert scale, and the effect of purchasing products on the brand. In order to validate the content of the survey, we used feedback from seven experts. Cronbach's alpha was more than 0.65 for all brands; for each selected brand, Cronbach's alpha was 0.75, 0.68, 0.79 and 0.75 for Siv, Cheshmak, Tak and Sanich, respectively. These figures are at or close to the commonly cited minimum acceptable level of 0.7.
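For reference, Cronbach's alpha can be computed directly from an item-response matrix. The sketch below uses simulated Likert responses (196 respondents, 8 hypothetical items), so the printed value will not match the paper's figures.

```python
# Minimal Cronbach's alpha on a hypothetical respondents x items matrix of
# Likert scores; the item count and data are invented for illustration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(5)
latent = rng.normal(3, 1, size=(196, 1))               # shared attitude per person
responses = np.clip(np.rint(latent + rng.normal(0, 0.7, (196, 8))), 1, 5)
print(round(cronbach_alpha(responses), 2))
```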
The results
The statistical population was drawn from people whose average income was well above average and who clearly care about brands. The proposed study selected 196 people, who chose among 12 various products. The dependent variable is consumer perception of new product development. The independent variables include whether customers consider the newly offered product a replacement or a supplement. The surveyed people were also asked whether they think the new technological characteristics can incorporate the older ones'. Finally, participants were asked about their perception of the product.
The first question of this survey concerns the factors that shape consumers' perception of brands. Our findings indicate that quality and functional benefits are the primary mental associations of respondents. Table 2 shows the details of our survey results. Note that since surveyed people specified more than one mental factor, the number of responses is greater than the number of people who took part in the survey. Next, we repeated the same survey for facial tissue, a product of another brand, Cheshmak. Table 3 shows the details of our findings; again, since surveyed people specified more than one mental factor, the number of responses exceeds the number of participants. As we can observe from the results of Table 3, softness is the number one mental factor, followed by beautiful boxes, and these two are functional items. Next, we repeated the same survey for macaroni, a food product of another brand, Tak, and the results are given in Table 4. In summary, the average mental characterizations that can be transferred to a new product for the four brands of Siv, Tak, Cheshmak and Sanich are 61.8%, 80.6%, 48.4% and 25.9%, respectively.
The third question of the survey investigates whether consumers' perception of the quality of a particular brand has any influence on consumers' mental perception of a new product. To answer this question, we first consider the first hypothesis and examine the Pearson correlation coefficient for quality perception between old and new products. The Pearson correlation coefficient is 0.257 and the p-value is less than 0.01, which means the coefficient is statistically significant and the first hypothesis is confirmed. We also measured consumers' insight in terms of the substitutability and complementarity of both types of products, and Table 6 shows the details of our findings. Based on the results of Table 6, the average quality perception for this brand for the three products of dish soap, baby shampoo and facial tissue is 4.06, 3.89 and 3.35, respectively. In other words, the participants had a high perception of dish soap, a good perception of baby shampoo, and some perception of facial tissue.
Table 7 shows the details of our findings for another brand, Cheshmak. As we can observe from the results, the three proposed new products of towel, chinaware and booklet maintain averages of 3.92, 2.76 and 3.47, respectively. Again, towel is the number one priority, followed by booklet, with chinaware in the last position. The results of Table 8 show that soup powder, cookie and tomato paste have means of 3.82, 3.79 and 3.83 for quality perception, and the level of mental perception is almost the same for all these products. Finally, Table 9 summarizes the results of our survey for the last brand in our survey.
According to the results of Table 9, customers had the highest quality perception of carbonated soft drinks, followed by canned fruits and biscuit. The last question of this survey considers the relationship between the new and old product and consumers' perception of the effects on new products. To test the last hypothesis, we used Pearson correlation coefficients among the different components. As we can observe from the results of Table 10, the correlation coefficient between technology transfer capability and perception of the new product is 0.454, which is much higher than for the other variables, and the p-value is significant at the one percent significance level. The other observation is that there is a positive and meaningful relationship between the potential for product substitution and perception of the new product, calculated as 0.227. Therefore, we can fit a regression model as follows:

A = 0.676 + 0.35Q + 0.086S + 0.029C + 0.34T,

where the dependent variable is mental perception of the newly proposed product (A) and the independent variables are quality perception of the product (Q), capability of transferring existing mental perception to the new product (T), ability to substitute (S) and complementary capabilities (C). All coefficients are valid at the five percent significance level, and we can conclude that a one-unit increase in quality perception increases mental perception of the newly proposed product by 0.35.
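A minimal sketch of this kind of fit is shown below: it simulates survey scores from the reported equation plus noise and re-estimates the coefficients with ordinary least squares (statsmodels), alongside a Pearson correlation. The data are simulated, so the printed estimates will only hover near the reported values.

```python
# Sketch of the reported model's form: simulate scores from the fitted
# equation plus noise, then re-estimate with OLS; printed coefficients will
# only hover near the generating values, not reproduce the paper's table.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 196
d = pd.DataFrame({c: rng.uniform(1, 5, n) for c in ["Q", "S", "C", "T"]})
d["A"] = (0.676 + 0.35 * d["Q"] + 0.086 * d["S"]
          + 0.029 * d["C"] + 0.34 * d["T"] + rng.normal(0, 0.4, n))

r, p = pearsonr(d["Q"], d["A"])          # analogue of the reported 0.257 test
print(f"Pearson r={r:.3f}, p={p:.4f}")

fit = smf.ols("A ~ Q + S + C + T", data=d).fit()
print(fit.params.round(3))
```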
Conclusion
In this paper, we have investigated how consumers' mental associations react when a new product is introduced under a well-known brand. The proposed study selected 196 people, who chose among 12 various products. The surveyed people were also asked whether they think the new technological characteristics could incorporate the older ones'. Finally, participants were asked about their perception of the product. The correlation coefficient between technology transfer capability and perception of the new product was calculated as 0.454, which was much higher than for the other variables, and the p-value was significant at the one percent significance level. The other observation was that there was a positive and meaningful relationship between the potential for product substitution and perception of the new product, calculated as 0.227. The survey also performed a regression analysis to find the relationship between mental perception of the newly proposed product, as the dependent variable, and quality perception of the product, capability of transferring existing mental perception to the new product, ability to substitute, and complementary capabilities, as independent variables. The results showed that a one-unit increase in quality perception could increase mental perception of the newly proposed product by 0.35.
Table 2
Most important factors influencing respondents' mental associations for the first brand, Siv

Table 3
Most important factors influencing respondents' mental associations for the second brand, Cheshmak

Table 4
Most important factors influencing respondents' mental associations for the third brand, Tak

Table 5
Most important factors influencing respondents' mental associations for the fourth brand, Sanich
"year": 2012,
"sha1": "69a23975c6d6f410c202cd2accb657789087fbb0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5267/j.msl.2012.10.017",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "69a23975c6d6f410c202cd2accb657789087fbb0",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Arsenic trioxide is required in the treatment of newly diagnosed acute promyelocytic leukemia. Analysis of a randomized trial (APL 2006) by the French Belgian Swiss APL group
In standard-risk acute promyelocytic leukemia, recent results have shown that all-trans retinoic acid plus arsenic trioxide combinations are at least as effective as classical all-trans retinoic acid plus anthracycline-based chemotherapy while being less myelosuppressive. However, the role of frontline arsenic trioxide is less clear in higher-risk acute promyelocytic leukemia, and access to arsenic remains limited for front-line treatment of standard-risk acute promyelocytic leukemia in many countries. In this randomized trial, we compared arsenic, all-trans retinoic acid and the "classical" cytarabine for consolidation treatment (after all-trans retinoic acid and chemotherapy induction treatment) in standard-risk acute promyelocytic leukemia, and evaluated the addition of arsenic during consolidation in higher-risk disease. Patients with newly diagnosed acute promyelocytic leukemia with a white blood cell count <10x10⁹/L, after an induction treatment consisting of all-trans retinoic acid plus idarubicin and cytarabine, received consolidation chemotherapy with idarubicin and cytarabine, arsenic or all-trans retinoic acid. Patients with a white blood cell count >10x10⁹/L received consolidation chemotherapy with or without arsenic. Overall, 795 patients with acute promyelocytic leukemia were enrolled in this trial. Among those with standard-risk acute promyelocytic leukemia (n=581), the 5-year event-free survival rates from randomization were 88.7%, 95.7% and 85.4% in the cytarabine, arsenic and all-trans retinoic acid consolidation groups, respectively (P=0.0067), and the 5-year cumulative incidences of relapse were 5.5%, 0% and 8.2%, respectively (P=0.001). Among those with higher-risk acute promyelocytic leukemia (n=214), the 5-year event-free survival rates were 85.5% and 92.1% (P=0.38) in the chemotherapy and chemotherapy plus arsenic groups, respectively, and the corresponding 5-year cumulative incidences of relapse were 4.6% and 3.5% (P=0.99). Given the prolonged myelosuppression that occurred in the chemotherapy plus arsenic arm, a protocol amendment excluded cytarabine during consolidation cycles in the chemotherapy plus arsenic group, resulting in no increase in relapse. Our results therefore advocate systematic introduction of arsenic in the first-line treatment of acute promyelocytic leukemia, but probably not concomitantly with intensive chemotherapy, a situation in which we found myelosuppression to be significant. (ClinicalTrials.gov Identifier: NCT00378365)
Introduction

Acute promyelocytic leukemia (APL) is a specific subtype of acute myeloid leukemia (AML) characterized by its morphology, the presence of t(15;17), and marked sensitivity to the differentiating effect of all-trans retinoic acid (ATRA) and the pro-apoptotic effect of arsenic trioxide (ATO). 1 The combination of ATRA and anthracycline-based chemotherapy has been the mainstay of the treatment of newly diagnosed APL over the last two decades. 2-4 Published results have shown that cytarabine (cytosine arabinoside, AraC) could be omitted from chemotherapy in standard-risk APL [i.e., with a baseline white blood cell count (WBC) <10x10⁹/L] but appeared to be useful in high-risk APL (with a WBC >10x10⁹/L), possibly at high doses, to reduce the incidence of relapse. 5
A beneficial role for prolonged maintenance treatment with continuous low-dose chemotherapy (6-mercaptopurine and methotrexate) and intermittent ATRA was also suggested, especially in high-risk APL, following in particular randomized results from our group, 5,6 and from a recent meta-analysis 7 of several trials. With regard to anthracyclines, at least one study suggested that idarubicin gave better results than daunorubicin, 8 while non-randomized studies suggested a potential benefit of adding ATRA during consolidation cycles, at least if AraC was omitted. 2,4 Recently, however, ATO has been demonstrated to have pronounced efficacy in newly diagnosed APL. In particular, it was shown in two large randomized trials that the combination of ATO and ATRA without chemotherapy was at least equal and, with longer term follow-up, even superior to ATRA plus chemotherapy combinations in standard-risk APL. 9,10,11 In high-risk APL, ATO plus ATRA combinations, with very limited added chemotherapy, also appear very promising, 10,12 and are currently being compared with the conventional ATRA chemotherapy approach in randomized trials.
When the APL 2006 trial was launched, ATO was mainly considered as an adjunct to ATRA chemotherapy combinations in the first-line treatment of APL, aimed at reducing the relapse rate (especially in high-risk APL) and/or diminishing the amount of chemotherapy administered (especially in standard-risk APL).
Based on the results of the APL 2006 trial, reported here, we evaluated the role of ATO in the treatment of standardand high-risk APL, in addition to the "classical" ATRA plus chemotherapy backbone regimens.
Patients
Between 2006 and 2013, patients from French, Belgian and Swiss centers with documented (by cytogenetics and/or molecular biology), newly diagnosed APL who were aged 70 years or less were eligible for inclusion in the APL 2006 trial, after giving informed consent. The trial was approved by local ethical committees (ClinicalTrials.gov Identifier: NCT00378365). Eligibility criteria in this trial were a morphological diagnosis of APL based on French-American-British criteria and no contraindication to intensive chemotherapy. No minimal performance status was required and patients with therapy-related APL could be enrolled.
Induction treatment consisted of ATRA 45 mg/m²/day until complete remission with idarubicin 12 mg/m²/day for 3 days and AraC 200 mg/m²/day for 7 days starting on day 3.
Patients with a baseline WBC <10x10⁹/L who achieved a complete remission were randomized for consolidation between three groups given treatment containing AraC, ATO or ATRA. The AraC group (standard group) received a first consolidation course with idarubicin 12 mg/m²/day for 3 days and AraC 200 mg/m²/day for 7 days, a second consolidation course with idarubicin 9 mg/m²/day for 3 days and AraC 1 g/m²/12 h for 4 days, and maintenance therapy for 2 years with intermittent ATRA 15 days/3 months and continuous treatment with 6-mercaptopurine (90 mg/m²/day orally) and methotrexate (15 mg/m²/week orally).

The ATO and ATRA groups received the same treatment as the AraC group, but AraC was replaced by, respectively, ATO 0.15 mg/kg/day on days 1 to 25 and ATRA 45 mg/m²/day on days 1 to 15 for both consolidation courses. The rationale for the ATRA consolidation treatment was based on results of a Spanish PETHEMA group trial, suggesting that AraC could be omitted from chemotherapy consolidation cycles in standard-risk APL, and that there could be a benefit from adding ATRA to consolidation cycles. The use of prolonged maintenance treatment was based on our previous results in a randomized phase III trial supporting the interest of this approach in reducing relapses after a conventional ATRA chemotherapy regimen.

Patients with a baseline WBC >10x10⁹/L were randomized to consolidation with either chemotherapy or chemotherapy combined with ATO. The chemotherapy group received a first consolidation course with idarubicin 12 mg/m²/day for 3 days and AraC 200 mg/m²/day for 7 days, a second consolidation course with idarubicin 9 mg/m²/day for 3 days and AraC 1 g/m²/12 h for 4 days, and 2-year maintenance therapy with intermittent ATRA and continuous 6-mercaptopurine plus methotrexate. The chemotherapy plus ATO group received the same treatment except that ATO 0.15 mg/kg/day was added from day 1 to day 25 during both consolidation courses. After a first interim analysis in September 2010 on data from 81 patients, AraC was deleted from the consolidation cycles of the chemotherapy plus ATO group.
Treatment of coagulopathy during the induction phase was based on platelet support to maintain the platelet count at a level greater than 50x10⁹/L until the disappearance of the coagulopathy. The use of heparin, tranexamic acid, fresh-frozen plasma, and fibrinogen transfusions was optional, according to each center's policy.
Prophylaxis and treatment of ATRA syndrome consisted of dexamethasone 10 mg/12 h given intravenously for at least 3 days if the WBC was above 10x10⁹/L (before or during treatment with ATRA) or at the earliest sign of the ATRA syndrome (dyspnea, lung infiltrates, pleural effusion, unexplained renal failure). In the absence of rapid improvement of symptoms (within 24 h), ATRA was transiently stopped until clinical control was obtained.
Statistical methods
The primary endpoint was event-free survival from the time of achieving complete remission. Relapse, survival, side effects of the treatment and duration of hospitalization were secondary endpoints.
Analyses were performed on a modified intent-to-treat principle, excluding only diagnostic errors and withdrawals of consent. Censored endpoints were estimated by the nonparametric Kaplan-Meier method 13 and then compared between randomized groups by the log-rank test. In estimating relapses, we took into account competing risks, i.e., deaths in first complete remission, using cumulative incidence curves and then compared results using the Gray test, whereas a cause-specific Cox model was used to estimate cause-specific hazard ratios. 14 The type I error was fixed at the 5% level. All tests were two-tailed. Statistical analyses were performed using SAS 9.1 (SAS Inc, Cary, NC, USA) and R software packages.
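To make the described pipeline concrete, below is a minimal sketch of this type of survival analysis. It is an illustration only: the trial's analyses were run in SAS 9.1 and R, whereas this sketch assumes Python with the lifelines package, and the durations and event indicators are entirely hypothetical.

```python
# Hedged illustration of Kaplan-Meier estimation and a two-tailed log-rank test,
# mirroring the methods described above; all data here are simulated, not trial data.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
t_arac = rng.exponential(60, 100)   # hypothetical event-free times (months), AraC arm
e_arac = rng.integers(0, 2, 100)    # 1 = event observed, 0 = censored
t_ato = rng.exponential(80, 100)    # hypothetical times, ATO arm
e_ato = rng.integers(0, 2, 100)

kmf = KaplanMeierFitter()           # nonparametric Kaplan-Meier estimate
kmf.fit(t_arac, event_observed=e_arac, label="AraC")
print(kmf.survival_function_.tail())

# log-rank comparison between two randomized groups, type I error at the 5% level
res = logrank_test(t_arac, t_ato, event_observed_A=e_arac, event_observed_B=e_ato)
print(f"log-rank p-value: {res.p_value:.3f}")
```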
Here we present the results based on all patients included in the trial and data collected before June 2017.
Results
Eight hundred and seven patients were included in the trial. The diagnosis of APL could be confirmed in 795 of the patients, who had t(15;17) and/or a PML-RARA rearrangement. The remaining 12 patients were excluded as diagnostic errors. The further analyses only dealt with the 795 patients with a confirmed diagnosis of APL who gave their consent to participation in the study, comprising 581 patients with standard-risk APL and 214 with high-risk APL.
Standard-risk acute promyelocytic leukemia
Of the 581 patients with standard-risk APL, 570 (98.1%) achieved a complete remission; the others died early. Forty-three patients were not randomized for consolidation treatment, including the 11 patients who did not achieve a complete remission, 15 due to adverse events, 12 due to the patients' decision and five for other reasons (Figure 1).
Five hundred and thirty-eight patients were randomized to the different consolidation treatments (178, 180 and 180 in the AraC, ATO and ATRA arms, respectively). Pre-treatment characteristics were well balanced between the three consolidation groups (Table 1).
The median times to an absolute neutrophil count >1×10⁹/L after the first consolidation course were 23.5, 22.8 and 18 days in the AraC, ATO and ATRA groups, respectively (P<0.0001). Similarly, the times to an absolute neutrophil count >1×10⁹/L after the second consolidation course were 23.3, 18.2 and 13.8 days (P<0.0001). The median durations of hospitalization after the first and the second consolidation courses were 31.5, 32.2, and 19.5 days (P<0.0001) and 28.2, 29.9, and 16.5 days in the AraC, ATO and ATRA groups, respectively (P<0.0001).
High-risk acute promyelocytic leukemia
Of the 214 patients with high-risk APL, 95.7% achieved a complete remission; the others died early (from bleeding or other causes) and two (0.9%) had resistant leukemia. Seventeen patients were not randomized to consolidation treatment, including the nine patients who did not achieve a complete remission, three due to adverse events and five consequent to the patients' decision. One hundred and ninety-seven patients were randomized to consolidation therapy, 99 in the chemotherapy group and 98 in the chemotherapy plus ATO group. Pre-treatment characteristics were well balanced between the two groups (Table 2). With a median follow-up of 52 months, eight patients (4 in the chemotherapy group versus 4 in the chemotherapy plus ATO group) had relapsed, leading to 5-year cumulative incidence rates of 4.6% (95% confidence interval [95% CI]: 1.5-10.6) and 3.5% (95% CI: 0.9-9.2) (P=0.99), and 13 patients had died in complete remission, including nine in the chemotherapy arm and four in the chemotherapy plus ATO arm (P=0.98). One patient, randomized to the chemotherapy plus ATO arm, developed AML/MDS. The 5-year overall survival rates were 90% and 93% in the chemotherapy and chemotherapy plus ATO groups, respectively (P=0.62), while the corresponding 5-year event-free survival rates were 85.5% and 92.1% (P=0.38) (Figure 3). Excluding AraC (after the protocol amendment) from the consolidation cycles in the chemotherapy plus ATO group did not increase the 5-year cumulative incidence of relapse (4.6% in the chemotherapy arm, 5.3% in the chemotherapy plus ATO with AraC arm and 2.7% in the chemotherapy plus ATO without AraC arm, P=0.61). On the other hand, excluding AraC from consolidation cycles in the chemotherapy plus ATO arm significantly reduced myelosuppression: the median times to an absolute neutrophil count >1×10⁹/L after the second consolidation course were 22, 25 and 18 days in, respectively, the chemotherapy arm, the chemotherapy plus ATO with AraC arm, and the chemotherapy plus ATO without AraC arm (P<0.001), while the median times to a platelet count >50×10⁹/L were 24, 26 and 18 days (P<0.001). Similarly, the median durations of hospitalization after the first and the second consolidation courses were 29 days, 34 days, and 33 days (P<0.0001) and 28 days, 32 days and 31 days (P=0.0005), respectively.
Discussion
The main results of this study are that, in standard-risk APL, addition of ATO to a "classical" ATRA chemotherapy regimen further reduces the incidence of relapse and that, in high-risk APL, AraC (including high-dose AraC) can be replaced by ATO without increasing the relapse risk and with more limited myelosuppression, thus potentially reducing the risk of death in complete remission.
A first finding was the very high complete remission rate obtained in the APL 2006 trial, both in standard-risk and high-risk APL (98.1% and 95.7%, respectively), even though patients could be included up to the age of 70. Recent reports have suggested that, even in the ATRA era, early death rates could be as high as 15% to 20% in "real-life" APL patients. [15][16][17][18][19] On the other hand, we previously published that, during the 2006 to 2011 period, 75% of the patients in the 17 largest French centers participating in the APL 2006 trial could be included in the trial, while 25% could not, mainly based on age, major comorbidities or direct admission to an intensive care unit. 15 The overall complete remission rate was 91.4% and the overall rate of early death was 8.6%. All studies suggest that, if APL is suspected and before the diagnosis is confirmed, the immediate institution of ATRA treatment can reduce the risk of early death. Intensive platelet support during induction treatment can probably also contribute to reducing the risk of early death, particularly in patients with high-risk APL.
In the APL 2006 trial, in standard-risk APL patients aged less than 70 years, our aim was to show that by substituting ATO or ATRA for AraC during consolidation cycles, we would not increase the relapse rate, but would reduce myelosuppression, thereby potentially reducing the incidence of deaths in complete remission, which was 5% in our previous experience with AraC-containing consolidation cycles (at a conventional dose for the first consolidation cycle, and an intermediate dose for the second). The ATRA and chemotherapy regimen chosen appeared to be an "optimal" regimen, using in particular high cumulative doses of anthracyclines, idarubicin rather than daunorubicin (as the latter may lead to more relapses 8 ), AraC during consolidation and prolonged maintenance treatment with 6-mercaptopurine, methotrexate and intermittent ATRA, which may also contribute to reducing the relapse rate. 7 This reference treatment proved effective, as the incidence of relapse after 5 years was only 5.5%.
Substituting ATRA for AraC did not significantly increase the relapse rate (8.2% at 5 years, compared to 5.5% in the AraC group), but the replacement significantly reduced the time to recovery from neutropenia after the first and second consolidation cycles, and the duration of hospitalization during those two consolidation cycles. It did not reduce the incidence of deaths in complete remission, but among the six, six and five deaths in complete remission occurring in the three consolidation groups, only two in each arm were due to myelosuppression (the remainder being due to intercurrent disease or secondary AML/MDS).
However, the main result in this standard-risk APL group was that no relapses were seen in the ATO arm, and that this relapse rate was significantly lower than in the AraC and ATRA consolidation arms. These results suggest that adding ATO to an already highly effective ATRA chemotherapy regimen may further improve the regimen's anti-leukemic effect, and that ATO may not be dispensable in the treatment of standard-risk APL. On the other hand, substituting ATO for AraC did not reduce the duration of neutropenia after consolidation cycles, and neutropenia was longer with ATO and idarubicin than with ATRA and idarubicin consolidation cycles. This finding suggests that ATO, a non-myelosuppressive drug when used alone or combined with ATRA, may worsen myelosuppression when used concomitantly with chemotherapy. The duration of hospitalization was, however, shorter after ATO and idarubicin than after AraC and idarubicin consolidation cycles. The incidence of deaths in complete remission was not reduced in the ATO group, but only two deaths in complete remission were attributable to myelosuppression in the three consolidation arms. Finally, the incidence of secondary AML/MDS was similar in the three treatment arms, and similar to that reported in APL patients treated with ATRA chemotherapy regimens, i.e., between 1% and 2%. [20][21][22] By contrast, in the follow-up of the two main clinical trials that used ATRA-ATO regimens without chemotherapy in newly diagnosed APL, no case of secondary AML/MDS has been reported so far (Lo Coco and Russell, personal communications).
Thus, in standard-risk APL, and in spite of very high complete remission and very low relapse rates obtained with ATRA chemotherapy combinations, our results confirm that the rates can be further improved by using ATO during the consolidation regimen. ATO in this situation did indeed reduce the relapse risk in standard-risk APL, confirming results of two recent, large studies. 10,11 Long-term results of one of them, the Italian German study, show in particular that an ATRA-ATO regimen is not just equivalent but superior to ATRA chemotherapy regimens in terms of relapse rate and overall survival. Thus, ATRA-ATO (chemotherapy free) regimens are becoming reference treatments for standard-risk APL.
With regard to high-risk APL, only limited studies of ATO-ATRA regimens without chemotherapy have been published, and in those studies patients often also received myelosuppressive drugs, mainly gemtuzumab ozogamicin. 10,12 In the British study, this approach was found to give results equivalent to those of an ATRA chemotherapy regimen, but the overall number of patients included in the randomized study was only 56. 10 A US intergroup study showed that addition of ATO to a classical ATRA chemotherapy regimen significantly reduced the relapse rate. The ATRA chemotherapy regimen was, however, based on daunorubicin instead of idarubicin (with a total scheduled dose of 500 mg/m²), which may have contributed to higher relapse rates.
In the present study, among the patients with high-risk APL there was a very high complete remission rate (97.4%) and, contrary to the US intergroup study, a very low relapse rate (2.5%) was seen in the chemotherapy consolidation arm (without ATO), confirming our previous results. 23 The fact that substituting ATO for AraC was not associated with an increased incidence of relapse (5.3% versus 2.7%), but with a reduced incidence of deaths in complete remission (from 7.8% to 0%) was, therefore, an important finding. This substitution also led to less myelosuppression and shorter hospitalization for consolidation cycles.
By contrast, the chemotherapy plus ATO consolidation therapy, combining AraC and ATO, used during the first part of the trial, did not further reduce the relapse rate (which was, it should be noted, already very low in the conventional AraC arm) but was associated with increased myelosuppression and a 5% rate of deaths in complete remission. This finding supports the fact that ATO worsens myelosuppression when used concomitantly with chemotherapy, as in the standard-risk group.
Our results therefore support the addition of ATO during consolidation cycles in high-risk APL, at least in order to reduce the amount of chemotherapy administered and, therefore, the rate of deaths in complete remission (as in our study), but also the relapse rate (according to other studies, including the US intergroup study). While ATO-ATRA regimens without chemotherapy can now probably be substituted for ATRA chemotherapy regimens in standard-risk APL, ongoing clinical trials will show to what extent chemotherapy can also be reduced or even avoided in high-risk APL.
Funding
This study was supported by the programme Hospitalier de Recherche Clinique and the Association pour la Recherche sur le Cancer (ARC). | 2018-08-06T12:56:23.171Z | 2018-07-19T00:00:00.000 | {
"year": 2018,
"sha1": "03699ce88c8cd4e8d7d435e75a672e2149636925",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3324/haematol.2018.198614",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03699ce88c8cd4e8d7d435e75a672e2149636925",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
201250101 | pes2o/s2orc | v3-fos-license | Gaussian Neighborhood-prime Labeling of Graphs Containing Hamiltonian Cycle
In this paper, we examine Gaussian neighborhood-prime labeling of generalized Petersen graphs and graphs which contain a Hamiltonian cycle.
Introduction
The extension of prime labeling on the natural numbers to the set of Gaussian integers is known as Gaussian prime labeling. The concept of Gaussian prime labeling with the help of the spiral ordering of the Gaussian integers was first introduced by Hunter Lehmann and Andrew Park. In that work, Lehmann and Park gave a milestone result: any tree with ≤72 vertices is a Gaussian prime tree under the spiral order. Steven Klee, Hunter Lehmann and Andrew Park (Klee, Lehmann and Park 2016) also proved that the path graph, star graph, n-centipede tree, (n, m, k) double star tree, and (n, 3) firecracker tree are Gaussian prime graphs. The well-known Entringer conjecture (Robertson and Small 2009) states that any tree admits a prime labeling, but this conjecture has not yet been proven for all trees. S. K. Patel and N. P. Shrimali (Patel and Shrimali 2015) introduced one of the variations of prime labeling, namely neighborhood-prime labeling of a graph. They proved that the following graphs are neighborhood-prime: paths, complete graphs, wheels, helms, closed helms, flowers, and certain unions of cycles. (Patel 2017) proved that generalized Petersen graphs are neighborhood-prime graphs in certain cases. Malori Cloys and N. Bradley Fox (Cloys and Fox 2018) covered a large class of trees which have neighborhood-prime labelings, such as caterpillars, spiders, firecrackers and any tree that contains no degree-two vertices.
In addition, Malori Cloys and N. Bradley Fox put forth the conjecture that all trees are neighborhood-prime. A similar conjecture was made by Entringer for prime labelings. John Asplund, N. Bradley Fox and Arran Hamm (Asplund and Fox 2018) established the result that any graph containing a Hamiltonian cycle is a neighborhood-prime graph. With the help of Hamiltonicity, they proved that the generalized Petersen graph GP(n, k) is a neighborhood-prime graph for all n and k. A detailed list of neighborhood-prime graphs is also available in the dynamic survey of graph labeling written by (Gallian 2016).
Gaussian neighborhood-prime labeling was first introduced by (Rajesh Kumar and Mathew Varkey 2018) with respect to the spiral order. Rajesh Kumar et al. initiated their work by showing that graphs such as the path, the star, the (p, n, m) double star tree with n ≤ m, the comb P_n ⊙ K_1, spiders, the (n, 2) centipede tree, and cycles C_n with n ≢ 2 (mod 4) are Gaussian neighborhood-prime graphs under the spiral ordering of Gaussian integers. In this paper, we investigate Gaussian neighborhood-prime labeling of graphs containing a Hamiltonian cycle. We will discuss further results depending upon Hamiltonicity which guarantee that a graph is a Gaussian neighborhood-prime graph under the spiral order. Further, we will prove that generalized Petersen graphs are Gaussian neighborhood-prime graphs under the spiral order. We begin with some definitions and the background of Gaussian integers before introducing the main results. We will use the spiral ordering of Gaussian integers and its properties given by (Steven Klee et al. 2016).
Background of Gaussian Integer and Spiral Ordering
The complex numbers of the form γ = p + iq, where p, q ∈ ℤ, are known as Gaussian integers; we denote the set of Gaussian integers by ℤ[i]. (Steven Klee et al. 2016) introduced the spiral ordering of the Gaussian integers, defining g_{n+1} recursively starting with g_1 = 1, where g_n denotes the n-th Gaussian integer under this ordering.
In notation, we write the first n Gaussian integers as [g_n].
(Steven Klee et al. 2016) had already established some useful properties of Gaussian integers under the above ordering, such as:
• Any two consecutive Gaussian integers in the spiral ordering are relatively prime.
• Any two consecutive odd Gaussian integers are relatively prime.
• γ and γ + μ(1 + i)^k are relatively prime, if γ is an odd Gaussian integer, μ is a unit, and k is a positive integer.
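As a concrete illustration of these relative-primality properties, the following sketch (not part of the original paper) implements the Euclidean algorithm on Gaussian integers, represented as Python complex numbers with integer real and imaginary parts; g_gcd, is_unit and relatively_prime are helper names introduced here.

```python
def g_gcd(a: complex, b: complex) -> complex:
    """Euclidean gcd on Gaussian integers; quotients are rounded to the
    nearest Gaussian integer, which guarantees a strictly decreasing norm."""
    while b != 0:
        q = a / b
        q = complex(round(q.real), round(q.imag))
        a, b = b, a - q * b
    return a

def is_unit(g: complex) -> bool:
    return round(abs(g) ** 2) == 1  # norm 1 <=> g is one of 1, -1, i, -i

def relatively_prime(a: complex, b: complex) -> bool:
    return is_unit(g_gcd(a, b))

# 3+2i and 4+2i share no non-unit factor, while 1+i divides 2 (2 = (1+i)(1-i))
assert relatively_prime(3 + 2j, 4 + 2j)
assert not relatively_prime(2 + 0j, 1 + 1j)
```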
Definition 2.1: (Gross and Yellen 1999) The set of all vertices in G which are adjacent to u is called the neighborhood of the vertex u. In notation, we write N(u).
Definition 2.2: A bijection g from V(G) to the first |V(G)| Gaussian integers in the spiral ordering is called a Gaussian neighborhood-prime labeling if the Gaussian integers in the set {g(u) : u ∈ N(w)} are relatively prime for every vertex w ∈ V(G) with degree greater than one. A graph which admits a Gaussian neighborhood-prime labeling is known as a Gaussian neighborhood-prime graph.
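Building on the gcd helper above, a labeling can be checked against this definition mechanically. The sketch below is illustrative only; the adjacency structure and labels are hypothetical and are chosen to show the mechanics rather than to follow the spiral ordering.

```python
from functools import reduce

def is_gaussian_neighborhood_prime(adj: dict, labeling: dict) -> bool:
    """adj maps each vertex to its set of neighbors; labeling maps each
    vertex to its Gaussian-integer label (a Python complex number)."""
    for w, nbrs in adj.items():
        if len(nbrs) <= 1:
            continue  # the condition only applies to vertices of degree > 1
        g = reduce(g_gcd, (labeling[u] for u in nbrs))
        if not is_unit(g):
            return False
    return True

# Path v1-v2-v3: only v2 has degree > 1, and gcd(1, 2+i) is a unit
adj = {1: {2}, 2: {1, 3}, 3: {2}}
labeling = {1: 1 + 0j, 2: 1 + 1j, 3: 2 + 1j}
print(is_gaussian_neighborhood_prime(adj, labeling))  # True
```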
Definition 2.3: (Cloys and Fox 2018) The size of the largest cycle in a graph G is called the circumference of G.
In this paper, we consider only graphs which are undirected, finite and simple. For the notation and terminology of graph theory, we refer to (Gross and Yellen 1999). Throughout this paper, a Gaussian neighborhood-prime graph is understood to mean a Gaussian neighborhood-prime graph with respect to the spiral ordering.
Main Results
Theorem 3.1 If H is a Hamiltonian graph having n vertices with n ≢ 2 (mod 4), then H is a Gaussian neighborhood-prime graph.
Proof: Firstly, we note that if H is a Gaussian neighborhood-prime graph, then the new graph formed by adding an edge in H between two vertices of degree at least two is again a Gaussian neighborhood-prime graph.
In order to prove that H has a Gaussian neighborhood-prime labeling, it is enough to show that the Hamiltonian cycle of H has a Gaussian neighborhood-prime labeling. Let C = (v_1, v_2, ..., v_n) be the Hamiltonian cycle in the graph H. Now, we reformulate the labeling of cycles used by (Rajesh Kumar and Mathew Varkey 2018). Note that for each vertex v_i (i ≠ 1) of C there exist two neighbors of v_i whose labels are consecutive Gaussian integers. The neighbors of v_1 are v_2 and v_n, having labels g_1 and g_n respectively. Hence, the Hamiltonian cycle C has a Gaussian neighborhood-prime labeling if n is not congruent to 2 modulo 4, which completes the proof.
Theorem 3.2 Let H be a Gaussian neighborhood-prime graph with n vertices, where n is not congruent to 2 modulo 4, having Hamiltonian cycle C = (u_1, u_2, ..., u_n). If a graph H' is obtained from H using k additional vertices {w_1, w_2, ..., w_k} in such a way that each w_j is adjacent to u_{m_i} and u_{m_i+2}, where the subscripts of V(C) are calculated modulo n, then H' is a Gaussian neighborhood-prime graph.
Proof: It suffices to check that, for each vertex, the labels in its neighborhood are relatively prime. Each w_j is adjacent to u_{m_i} and u_{m_i+2}, which are either labeled by consecutive Gaussian integers or one of which carries the label g_1. Thus, h is a Gaussian neighborhood-prime labeling. Consequently, H' is a Gaussian neighborhood-prime graph.
Theorem 3.3 If H is a Hamiltonian graph having an odd cycle, then H has a Gaussian neighborhood-prime labeling.
Proof: Let H be a Hamiltonian graph having n vertices which contains an odd cycle. We assign the labels to the vertices u_1, u_3, u_5, ..., u_{n−1} accordingly.
Theorem 3.4 If G is a connected graph with n vertices such that n ≢ 3 (mod 4) and G has circumference n − 1, then G has a Gaussian neighborhood-prime labeling.
Proof: Let C = (w_1, w_2, ..., w_{n−1}) be the cycle and let w be the vertex in the graph G which does not lie on the cycle C. We have the following cases for the vertex w:
Case-I: If deg(w) = 1, then we define the labeling as before (in which n is replaced by n − 1) and set h(w) = g_n. The reader can easily verify that h is a Gaussian neighborhood-prime labeling.
Case-II: If deg(w) > 1, then without loss of generality we assume that w is adjacent to w_k (1 ≤ k ≤ n − 1) on the cycle C. Define a bijection h with h(w) = g_n, where the subscript i + (k − 2) is calculated modulo n − 1. From Equation (3), one can see that the other vertices of the cycle C have two neighbors whose labels are consecutive Gaussian integers.
In both cases, we have a cycle of length n − 1, which is a Gaussian neighborhood-prime graph if n − 1 ≢ 2 (mod 4). Thus, h admits a Gaussian neighborhood-prime labeling if n ≢ 3 (mod 4).
(Cloys and Fox 2018) proved that GP(n, n/2) is a neighborhood-prime graph. We use the idea of the neighborhood-prime labeling of GP(n, n/2). For each 0 ≤ i < n − 1, the vertex u_i has neighbors v_i and u_{i+1}, which are labeled by consecutive Gaussian integers. The vertex u_{n−1} has the neighbor u_0, whose label is g_1. Thus, each neighborhood contains relatively prime Gaussian integers, and therefore h is a Gaussian neighborhood-prime labeling. (Patel 2017) proved that GP(n, k) is a neighborhood-prime graph if n and k are relatively prime; we will use the approach of that labeling in the following lemma.
Let w be any vertex of V(GP(n, k)); we claim that the Gaussian integers in the set {h(x): x ∈ N(w)} are relatively prime.
Case-1: w = v_j, 1 ≤ j ≤ n.
The neighbors of the exterior vertices v_j (j ≠ 2, n) are v_{j−1} and v_{j+1}, having labels h(v_{j−1}) and h(v_{j+1}), which are consecutive odd Gaussian integers. The vertices v_2 and v_n have the common neighbor v_1, whose label is g_1.
Case-2: w = u_j, 1 ≤ j ≤ n.
The neighbors of the internal vertices u_j are v_j, u_{j+k} and u_{j+(k+1)}, where the subscripts of the vertices u_j are reduced modulo n. From Equations (4), (5), (6) and (7), observe that for each u_j (j ≠ 1), either h(v_j) and h(u_{j+k}) are consecutive Gaussian integers or h(v_j) and h(u_{j+k+1}) are consecutive Gaussian integers. Finally, one of the neighbors of the vertex u_1 is v_1, with label g_1.
Thus, the set {h(x): x ∈ N(w)} consists of relatively prime Gaussian integers in each of the above cases, which implies that h is a Gaussian neighborhood-prime labeling.
Figure 1. Gaussian neighborhood-prime labelings of GP(11, 5) and GP(17, 8).
Theorem 3.7 For each n and k, the generalized Petersen graph GP(n, k) has a Gaussian neighborhood-prime labeling.
Proof: (Alspach 1983) proved that, for each n and k, the graph GP(n, k) is Hamiltonian except in the following two cases: • n ≡ 0 (mod 4) and n ≥ 8 with k = n/2; • n ≡ 5 (mod 6) with k = 2.
The proof follows from Theorem 3.1 and Theorem 3.3 together with previous two lemmas. | 2019-08-23T12:12:40.413Z | 2019-03-06T00:00:00.000 | {
"year": 2019,
"sha1": "92e66744ee74f17050f108116dbe465845f699a5",
"oa_license": "CCBY",
"oa_url": "https://mjis.chitkara.edu.in/index.php/mjis/article/download/190/125",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "048b9b4c12c95a19640b1961f3e67a3732bf5111",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
182917416 | pes2o/s2orc | v3-fos-license | Are MCDM methods useful? A critical review of Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP)
Abstract Although Multi Criteria Decision Making (MCDM) methods have been applied in numerous case studies, many companies still avoid employing these methods in making their decisions and prefer to decide intuitively. There are studies claiming that MCDM methods provide better rankings for companies than intuitive approaches. This study argues that this claim may have low validity from a company’s perspective. For this purpose, it focuses on one of the MCDM methods referred to as the Analytic Hierarchy Process (AHP) and shows that AHP is very likely to provide a ranking of options that would not be acceptable by a rational person. The main reason that many companies do not rely on current MCDM methods can be due to the fact that managers intuitively notice ranking errors. Future studies should end the promotion of outdated approaches, pay closer attention to the deficiencies of the current MCDM processes, and develop more useful methods.
Subjects: Operations Research; Optimization; Decision Analysis
Keywords: multiple criteria decision analysis; AHP; ANP; MCDM methods
ABOUT THE AUTHORS
Mehdi Rajabi Asadabadi is a researcher at UNSW, Canberra, in Business Intelligence. He was awarded a scholarship for his PhD, was also selected as a top student during his PhD, and received an award, namely the Study Canberra Award supported by the Australian Capital Territory government. He published in journals such as the European Journal of Operational Research (A*). His video on the application of artificial intelligence in large scale projects was selected for the International Video Competition of IJCAI (A* conference) and is available on YouTube. One of his recent papers, titled The Stratified Multi Criteria Decision Making (SMCDM) method, published in Knowledge Based Systems (2018), proposes a novel insight to MCDM methods.
Morteza Saberi is currently a Lecturer (Assistant Professor) at the School of information, system and modelling with University of Technology Sydney. He has an outstanding research record and significant capabilities in the area of business intelligence.
Prof Elizabeth Chang works on logistics at UNSW. She was listed fifth in the world for researchers in Business Intelligence.
PUBLIC INTEREST STATEMENT
Many studies claim that multi criteria decision making (MCDM) methods provide better rankings for companies than intuitive approaches. This study argues that this claim may have low validity from a company's perspective and the main reason that many companies do not rely on current MCDM methods can be due to the fact that managers intuitively notice ranking errors. Future studies should attempt to develop more useful MCDM methods instead of promoting outdated methods to companies.
The deficiencies highlighted in this paper refer mainly to pairwise comparisons and the associated scale, namely Saaty's scale. In both AHP and ANP, since a similar process of pairwise comparisons and the same scale are used, the AHP method, which is the less complex of the two, has been selected to expose the deficiencies. When performing pairwise comparisons in AHP, if the number of criteria goes beyond three, a consistency concern arises (Maiolo & Pantusa, 2018; Piengang, Beauregard, & Kenné, 2019; Sarmiento & Vargas-Berrones, 2018). This is because humans are not capable of keeping their pairwise judgments consistent when the number of elements increases (Miller, 1956). To address this issue, Saaty (Saaty, 1977) introduced a Consistency Ratio (CR) that represents how inconsistently a decision maker assigns the scores using the scale in making pairwise comparisons. When the number of elements to be compared increases, the ratio often falls beyond the threshold (0.1). This questions the credibility of the comparisons, so such comparisons are returned to the decision maker to improve. The evaluator then has to adjust the numbers to improve the consistency, and a new ratio is measured. Usually this process is repeated until the ratio becomes acceptable. In many applications of AHP/ANP, evaluators will start managing the numbers in order to decrease the ratio and satisfy the process while gradually paying less and less attention to what they really prefer (Asadabadi, 2017). Doing this may dramatically change the results.
Although there have been numerous studies encouraging organisations to apply such methods, many companies still avoid applying them for solving their multiple criteria decision making problems (Bernroider & Schmöllerl, 2013). For example, Ishizaka and Siraj (2018) used three MCDM methods to compare a number of coffee shops and claimed that MCDM methods provide better rankings than intuitive approaches. In the concluding section of their paper, they mention that many companies still do not use methods such as AHP when making multi-criteria decisions although they are familiar with the methods. They emphasize that the results coming from methods such as AHP are reliable insofar as the CR can even be relaxed to a higher number (CR>0.1). This paper criticizes their claim and exposes inefficiencies with AHP. It is shown that, even given the current consistency ratio (0.1), in many cases AHP fails to provide a rational ranking. From a reasonable person's viewpoint, it can be seen that the general form of MCDM methods, with a number of straightforward steps, provides a reasonable ranking, while a well reputed method, such as AHP, fails to do so. Given this, companies cannot rely on methods with such deficiencies. Therefore, we believe future studies should attempt to develop more useful MCDM methods instead of promoting outdated methods to companies.
The remainder of this paper is organised as follows. First AHP and ANP are briefly reviewed. Then, a simple multi criteria decision making problem is used to expose the deficiencies of AHP in practice. Next, the general form of MCDM methods is set out and applied to the same problem. To a reasonable person, the results of the simple MCDM method seem much more reliable.
Analytic Hierarchy Process (AHP)
In this section, first a review of the method and the scale is submitted then the inconsistency issue when performing pairwise comparisons is discussed.
The method and the scale
AHP is an MCDM method, developed by Saaty (Saaty, 1977, 1986, 1990), that utilises pairwise comparisons in order to do the ranking. This approach is associated with a consistency ratio (Ahmadi, Petrudi, & Wang, 2017; Asadabadi, 2014). Assuming that there are a number of criteria and alternatives, the weights of the criteria are first computed through pairwise comparisons (Table 3) using Saaty's scale (Table 1). Then, all of the alternatives are pairwisely compared with respect to each criterion and set out in separate tables using the scale (see Tables 4-6). The sum of each row is computed, normalised, then placed in the last column and labelled local weights. The column is used to build a new table with the criteria set out along the top row and the alternatives building the left-hand column (Table 7). The value in each cell of each column is multiplied by the weight of the criterion associated with the column, and the sum of each row is computed. The computed numbers are set out in the last column of the final table, which represents the level of attention that should be paid to the alternatives, or global weights. The final ranking is based on the global weights and is introduced to the decision maker (user).
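The following numpy sketch illustrates the procedure just described: normalized row sums as local weights, followed by aggregation into global weights. It is an illustration rather than a reference implementation, and the comparison matrices shown are hypothetical Saaty-scale entries.

```python
import numpy as np

def local_weights(M: np.ndarray) -> np.ndarray:
    s = M.sum(axis=1)        # sum of each row of the comparison matrix
    return s / s.sum()       # normalised -> the "local weights" column

def ahp_global_weights(criteria_M, alternative_Ms):
    w = local_weights(np.asarray(criteria_M, float))          # criteria weights
    L = np.column_stack([local_weights(np.asarray(A, float))
                         for A in alternative_Ms])            # one column per criterion
    return L @ w                                              # global weights per alternative

crit = [[1, 3], [1/3, 1]]                            # 2 hypothetical criteria
alts = [[[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]],     # 3 alternatives w.r.t. criterion 1
        [[1, 1/5, 1], [5, 1, 5], [1, 1/5, 1]]]       # the same w.r.t. criterion 2
print(ahp_global_weights(crit, alts))                # rank alternatives by these values
```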
The weights, such as W_ij, presented in the cells of the tables (Tables 3-6) are based on how important the i-th element is in comparison to the j-th element, using Saaty's scale. If W_ij is greater than one, the i-th element is more important than the j-th, and vice versa. Saaty's 9-point scale is presented in Table 1.
This scale assigns 9 to the extremely important elements and the number decreases as the level of importance decreases.
The inconsistency issue
In a pairwise comparison matrix, if the conditions W_ij = 1/W_ji and W_ij = W_ik · W_kj hold, the judgments are perfect and the comparison matrix is considered consistent; if not, the consistency test should be performed to find out whether the inconsistency falls above or below the threshold. The consistency level of the comparison matrix can be computed from its maximum eigenvalue, namely λ_max. Since λ_max equals the order of the matrix (n) only for a perfectly consistent matrix, its difference from n is used to compute the Consistency Index, CI = (λ_max − n)/(n − 1); the closer λ_max is to n, the more consistent the judgments (for more detail see Saaty, 1986).
The consistency index is then divided by the average consistency index of a certain number of matrices filled in a random way (RI), giving the Consistency Ratio CR = CI/RI. If the result is smaller than 0.1 (the threshold), the comparisons are confirmed (perfect comparisons lead to CR = 0).
Values for RI are as presented in Table 2.
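A small sketch of this consistency check is given below. The RI values used are the commonly cited ones for matrix orders up to 9 (Table 2 may differ slightly), and the example matrix is perfectly consistent, so CR evaluates to 0.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}   # commonly cited random-index values

def consistency_ratio(M: np.ndarray) -> float:
    n = M.shape[0]
    if n <= 2:
        return 0.0  # matrices of order 1-2 are always consistent
    lam_max = float(np.max(np.linalg.eigvals(M).real))  # maximum eigenvalue
    ci = (lam_max - n) / (n - 1)                        # Consistency Index
    return ci / RI[n]                                   # CR = CI / RI

M = np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])   # perfectly consistent matrix
print(consistency_ratio(M))  # ~0.0: perfect comparisons lead to CR = 0
```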
As mentioned in the introduction, when the order of a matrix increases to more than three, the inconsistency issue arises, and it increases as the number of criteria and alternatives grows. If CR is more than 0.1, the user is blamed for providing inconsistent comparisons and the tables are returned to them for improvement.
Analytic Network Process (ANP)
AHP is unable to consider the interrelations among the elements, and hence ANP was developed (Saaty & Takizawa, 1986; Saaty, 1996). ANP is a version of AHP which, through additional steps, considers the internal relations between the elements (Tavana, Yazdani, & Di Caprio, 2017). This MCDM method follows a process which is similar to AHP but, additionally, the elements of the same cluster are compared among themselves regardless of the hierarchy. For example, criteria are compared with each other pairwisely with respect to each of them in separate tables using Saaty's scale. Such comparisons with respect to an internal element do not make sense and are very confusing except in limited cases (Asadabadi, 2016); moreover, because the number of tables (matrices) considerably increases, the inconsistency issue becomes a more serious concern than it is in AHP. A brief review of the general structure of ANP is submitted below (for more detail see Saaty, 1996, 1999).
As shown in Figure 1, ANP does not require a hierarchy, but rather a network of elements. In this network the elements are considered as the nodes of a number of clusters, and each element may both dominate and be dominated in pairwise comparisons (Saaty, 1996, 1999). Assume that each cluster k, k = 1, ..., m, includes n_k elements: e_{k1}, ..., e_{kn_k}. These elements can be used to build a supermatrix, (3), and be pairwisely compared using Saaty's scale.
After all the elements are compared in the supermatrix, it is raised to an arbitrarily large limiting power to obtain the cumulative effects of the elements on each other (Cao & Song, 2016; Partovi, 2001). The use of the supermatrix ensures that all the possible relations between the elements are considered.
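The limiting step can be illustrated with the simplified sketch below. It assumes the supermatrix can be made column-stochastic by normalization and ignores cluster weighting and the special handling needed for reducible or cyclic supermatrices, so it only approximates the full ANP computation.

```python
import numpy as np

def limit_supermatrix(W: np.ndarray, power: int = 512) -> np.ndarray:
    Wn = W / W.sum(axis=0, keepdims=True)     # column-stochastic (weighted) supermatrix
    return np.linalg.matrix_power(Wn, power)  # large power -> cumulative (limit) priorities

W = np.array([[0.0, 0.5, 0.4],                # hypothetical 3-element supermatrix
              [0.6, 0.0, 0.6],
              [0.4, 0.5, 0.0]])
print(limit_supermatrix(W)[:, 0])             # long-run priorities of the elements
```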
Adding the interdependencies to AHP makes consistent pairwise comparison even more confusing and difficult. Even if the existence of such interdependencies can be justified, it is hard for the decision maker to consider these relations while doing pairwise comparisons. Since the issues that have been raised in this paper exist in both AHP and ANP, AHP is sufficient to highlight the deficiencies. There are also many extensions of these methods, (Baidya, Dey, Ghosh, & Petridis, 2018) among others, which, despite the shortcomings, have frequently been applied. In the next section, the unreliability of AHP is exposed through its inability to address a supplier selection multi criteria problem.
Deficiencies of AHP addressing a supplier selection problem
The deficiencies of AHP are exposed here by working through an intuitive example. Doris Pars is a bathroom equipment and accessories wholesaler and manufacturer in Iran. The company has three potential suppliers for their PVC pipes which are located in Tehran, Saveh, and Wenling and are labelled supplier A, B, and C, in tables presented below. The company has to sign a contract for one of its raw materials. Assume that three main criteria must be considered: quality, price, and delivery. AHP is used and pairwise comparisons of the criteria are made using Saaty's scale. The criteria comparisons are presented in Table 3.
The alternatives are compared with each other for each of the criteria; see Tables 4-6.
It can be observed in the above tables that the performance of the alternatives (A, B, and C): (1) with respect to "quality", presented in Table 4, is the same.
(2) with respect to "price", supplier A is just slightly preferred in comparison with supplier B (and moderately, when compared with supplier C).
(3) with respect to "delivery", supplier A is strongly disagreeable to the company.
Given the above information, the reasonable expectation of the company's managers would be that suppliers B and C are chosen as the first and second options, respectively. In contrast, AHP selects supplier A (see Table 7), which is the least satisfactory option because of its substantial weakness in delivery.
Therefore, when AHP is used, the company should sign a contract with supplier A. This ranking does not make sense as it fails to address two concerns: (A) Even though supplier A has a slightly better price, it is strongly disagreeable in terms of delivery. Hence, supplier A might be the better option in a certain aspect, but is the worst option in other aspects.
(B) The decision maker has already shown his strong negative views regarding supplier A in terms of delivery. This can be inferred from the numbers that the decision maker assigned to that option (1/9 and 1/8) in comparison with the other two options (note: based on the scale, he cannot assign less than 1/9).
Therefore, although the company does not have much disagreement with the price of either supplier B or C, AHP indicates that they should go through extreme delivery difficulties and sign a contract with supplier A. As a result, the decision maker will probably not rely on AHP in the future decisions of the company.
The issues raised here are with respect to an example in which the matrices are highly consistent and the order of the matrices does not go above three. Inconsistency in judgements and reworking to improve the judgements, as suggested in AHP, may produce even more questionable results when the orders of the matrices increase.
Further, the scale is very limiting and enforces inconsistency. In the current example, to differentiate the delivery of supplier B from that of supplier C, which are only slightly different, the company assigns 9 and 8 when comparing them with supplier A. Now, when comparing suppliers B and C, there is no choice for the decision maker but to assign 1, because this small difference cannot be expressed using the scale. Another example is where option A is preferable to B, and B is strongly better than C (score 7). When A is compared with C, the highest available score is 9, which creates inconsistency. This is an example of how AHP creates inconsistency, and the remedy that it suggests is not helpful. Blaming the decision maker for the inconsistency in the judgements does not help the decision maker, who is being asked to make too many confusing pairwise comparisons.
In view of the two highlighted concerns (A and B) that AHP fails to address, application of the explained general form of MCDM methods is proposed instead of AHP. The first concern is addressed using the MCDM method, but for the second concern there is currently no MCDM method that can take into account the strength of the user's impressions when assigning relative weights. This can be considered as a starting point for developing innovative MCDM methods.
Applying the general form of MCDM methods
This section explains a general form of MCDM methods, which does not require many pairwise comparisons and results in a rational ranking. We see how an MCDM method should be structured. The method is explained in steps set out below.
(1) Assign weights to the criteria based on their relative importance.
For example, assuming three criteria were listed from the most to least important, namely "quality", "price", and "delivery", the delivery would receive a weight equal to 1. Then, if a criterion is 1.3 times more important than the least important criterion, then the criterion is simply assigned a weight equal to 1.3.
(2) Normalize the weights of the criteria.
For example, given the weights of 1, 1.2, and 1.8 for "delivery", "price", and "quality", their normalised weights are computed by dividing the weights by the sum (1 + 1.2 + 1.8).
(3) Obtain the normalised weights of the alternatives with respect to each criterion.
Consider the alternatives for each criterion separately and follow steps similar to the above (steps 1 and 2). With respect to each criterion:
• Assign weights to the alternatives based on their relative importance.
For example, considering the example in section 4, with respect to "price", supplier B is the least favourable option, so it receives score 1. If supplier C is 1.15 times (slightly) better than supplier B, it simply receives the weight of 1.15, and then the weights are normalised.
(4) Transfer the normalised weights of the alternatives to a matrix in which the criteria are set out on the top of the columns and the rows represent the alternatives.
(5) Multiply the weights of the criteria by the values in their columns.
(6) Sum each row of the table and rank the alternatives from the highest to the lowest weight.
Now, the above steps are applied to the previous example.
The alternatives of the previous example are ranked using the explained method. Applying steps 1 and 2, Table 8 is computed.
Applying step 3, three tables such as Table 9 are built.
Applying step 4, Table 10 is built.
Finally, applying steps 5 and 6, the weights of alternatives are computed and then ranked as shown in Table 11.
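The whole pipeline (steps 1-6) can be reproduced in a few lines, as in the sketch below. The criteria weights come from the step-2 example; the per-criterion scores for the three suppliers are illustrative stand-ins for Tables 8-10 (the delivery column follows the '1, 3.9, 4.7' values quoted in the sensitivity remark further below), and the computation yields the same ranking, with supplier C first and supplier A last.

```python
import numpy as np

def simple_mcdm(criteria_weights, alt_scores):
    w = np.asarray(criteria_weights, float)
    w = w / w.sum()                         # step 2: normalise criteria weights
    A = np.asarray(alt_scores, float)       # rows: alternatives, columns: criteria
    A = A / A.sum(axis=0, keepdims=True)    # step 3: normalise per criterion
    return A @ w                            # steps 5-6: weighted sums used to rank

w = [1.8, 1.2, 1.0]                 # quality, price, delivery (step-1 example)
scores = [[1.0, 1.20, 1.0],         # supplier A (illustrative relative scores)
          [1.0, 1.00, 3.9],         # supplier B
          [1.0, 1.15, 4.7]]         # supplier C
g = simple_mcdm(w, scores)
print(sorted(zip("ABC", g), key=lambda t: -t[1]))  # C and B ranked ahead of A
```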
Comparing the results of AHP and the general MCDM method
In contrast to AHP, when the general MCDM method is used, supplier A is recognised as the worst option, as was expected. To avoid extreme delivery problems, either supplier B or C should be selected, but since the price of supplier C is slightly better than that of B, the method selects supplier C. This seems much closer to what a reasonable person would expect. While in this example the numbers of criteria and alternatives are few, when the number of elements, alternatives and criteria increases, it becomes even clearer how the simple MCDM method is less confusing and more efficient than working with AHP.
There are few studies criticizing AHP. In 1990, Dyer (1990) presented a paper questioning the efficiency of AHP. Besides the issues highlighted in that paper, there are often shortcomings in employing Saaty's scale and also with regard to the inconsistency issue arising from pairwise comparisons using AHP. With regard to the scale, assuming that there are three criteria, A, B and C, (1) if A is very much more important than B, and B is much more important than C, there is no number in the scale that can define the relations between A and C.
(2) if A is slightly more important than both B and C, using the AHP matrix (table) the calculated importance weight of A becomes twice the weights of B and C, which is far different from being slightly more important. Note that the above tables are not too sensitive with regard to the selected scores. For instance, changing the numbers in the last column of Table 10 from '1, 3.9, and 4.7' to '1, 4.7, and 4.7', or to '1, 2, and 3', does not change the ranking. In the next section, the ranking of AHP in Table 7 is compared with the above table, Table 11.
In other words, with regard to Table 5 above, consider that the decision maker wants to express that the price of supplier A is just slightly better than the price of supplier B or supplier C. The smallest number to be assigned is 2. Based on what the decision maker assumes, the weights should be similar to 0.36, 0.31 and 0.32. However, using AHP, supplier A obtains 50 percent of the weight.
Additionally, in Table 12, imagine that if quality gets 5 in comparison with delivery and then 2 in comparison with price, it is obvious that price should get 2.5 in comparison with delivery. But, if the scale is followed, the decision maker has to choose between assigning either 2 or 3, and these limiting choices create inconsistency. Such a scale leaves no choice for the decision maker but to create inconsistency, and the decision maker then has to be concerned with the inconsistency ratio. If the consistency level using AHP becomes equal to zero, the values of a single row are enough to make the entire table. For example, in Table 12, each row includes the same information as the two other rows. Quality is two times more important than price and price is two times more important than delivery, which means the weights should be 0.571, 0.286, and 0.143.
To explain differently: The limiting scale of AHP makes it difficult to have perfect comparisons without inconsistency. The aim in AHP is to have comparisons that are highly consistent. A perfect result in AHP is expected when there is no inconsistency. If there is no inconsistency, AHP becomes a simple ranking method. Therefore, AHP always falls behind a simple ranking method.
In view of performing pairwise comparisons in AHP, when the number of alternatives/criteria increases, the pairwise comparisons become confusing and a high level of inconsistency is expected. Therefore, the comparisons might be returned again and again to the decision maker to improve. The problems with AHP become more serious when the decision maker starts manipulating the value of pairwise comparisons in order to get rid of inconsistency instead of performing a fair comparison between the elements. The consistency issue, pairwise comparisons, and Saaty's scale differentiate AHP from a simple ranking method. This study shows that there are issues in the fundamentals of AHP, namely pairwise comparison and the scale.
As mentioned earlier, ANP is the generalised version of AHP. This study does not criticise ANP directly but, since ANP also relies on pairwise comparisons and Saaty's scale, it suffers from the same issues. In fact, similar deficiencies exist in all methods which follow similar principles, and/or integrate AHP/ANP with other tools and methods. Despite the shortcomings, there are many examples of their applications (Altuzarra, Moreno-Jiménez, & Salvador, 2010; Saaty, 2013). This paper is critical of some recent studies that emphasize the applicability of MCDM methods such as AHP (Bernroider & Schmöllerl, 2013; Ishizaka & Siraj, 2018). The fact is that many companies still avoid using MCDM methods despite having knowledge about them (Bernroider & Schmöllerl, 2013). The reason for companies' lack of interest in employing such methods is the deficiencies of the methods. The deficiencies are sometimes so obvious that the decision maker can easily see them. Future studies should gradually end the promotion of outdated methods and instead begin developing innovative MCDM methods. Such studies can use the understanding of the deficiencies of the current methods, such as those highlighted in section 4, to develop new MCDM methods.
Conclusion
Although MCDM methods such as AHP can address problems in specific situations very well, in practice many companies avoid using them. This paper reveals that the reason is the deficiencies of such methods. In this study, the deficiencies of AHP and its associated scale have been exposed using a simple example. In the discussed example, we realised that even the general form of MCDM methods may occasionally outperform AHP. Considering the fact that the general form of MCDM does not require many pairwise comparisons, and it does not concern the decision maker with computing consistency levels, the question that arises is: where shall we use AHP, and how can we be assured that, after all the extra effort that we put in, AHP leads us to more reliable results than a simple MCDM method? Answering this question may require further studies and the drawbacks may ultimately be resolved later. However, since there are times, such as in the discussed example, when it is observable to decision makers that AHP fails to provide a good ranking of a number of options, companies prefer not to apply the method. It is thought that by acknowledging the drawbacks of the current MCDM methods, the development of new MCDM methods will be encouraged.
Funding
"year": 2019,
"sha1": "79c95b70fbff1faab4171d42e8bd292108848101",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/23311916.2019.1623153",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "79c95b70fbff1faab4171d42e8bd292108848101",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
49393353 | pes2o/s2orc | v3-fos-license | Heuristics-based Query Reordering for Federated Queries in SPARQL 1.1 and SPARQL-LD
The federated query extension of SPARQL 1.1 allows executing queries distributed over different SPARQL endpoints. SPARQL-LD is a recent extension of SPARQL 1.1 which enables directly querying any HTTP web source containing RDF data, like web pages embedded with RDFa, JSON-LD or Microformats, without requiring the declaration of named graphs. This makes it possible to query a large number of data sources (including SPARQL endpoints, online resources, or even Web APIs returning RDF data) through a single concise query. However, non-optimal formulation of SPARQL 1.1 and SPARQL-LD queries can lead to a large number of calls to remote resources, which in turn can lead to extremely high query execution times. In this paper, we address this problem and propose a set of query reordering methods which make use of heuristics to reorder a set of SERVICE graph patterns based on their restrictiveness, without requiring the gathering and use of statistics from the remote sources. Such a query optimization approach is widely applicable since it can be exploited on top of existing SPARQL 1.1 and SPARQL-LD implementations. Evaluation results show that query reordering can highly decrease the query-execution time, while a method that considers the number and type of unbound variables and joins achieves the optimal query plan in 88% of the cases.
Introduction
A constantly increasing number of data providers publish their data on the Web following the Linked Data principles and adopting standard RDF formats. According to the Web Data Commons project [16], 38% of the HTML pages in the Common Crawl 3 of October 2016 contain structured data in the form of RDFa, JSON-LD, Microdata, or Microformats 4. This data comes from millions of different pay-level domains, meaning that the majority of Linked Data is nowadays available through a large number of different data sources. The question is: how can we efficiently query this large, distributed, and constantly increasing body of knowledge?
SPARQL [2] is the de facto query language for retrieving and manipulating RDF data. The SPARQL 1.1 Federated Query recommendation of W3C allows executing queries distributed over different SPARQL endpoints [17]. SPARQL-LD [7,8] is an extension (generalization) of SPARQL 1.1 Federated Query which extends the applicability of the service operator to enable querying any HTTP web source containing RDF data, like online RDF files (RDF/XML, Turtle, N3) or web pages embedded with RDFa, JSON-LD, or Microformats. Another important characteristic of SPARQL-LD is that it does not require the named graphs to have been declared, thus one can even fetch and query a dataset returned by a portion of the query, i.e., whose URI is derived at query execution time. Thereby, by writing a single concise query, one can query hundreds or thousands of data sources, including SPARQL endpoints, online resources, or even Web APIs returning RDF data [8].
However, non-optimal query writing in both SPARQL 1.1 and SPARQL-LD can lead to a very large number of service calls to remote resources, which in turn can lead to an extremely high query execution time. Thus, there arises the need for an effective query optimization method that can find a near-optimal query execution plan. In addition, given the dynamic nature of Linked Data and the capability offered by SPARQL-LD to query any remote HTTP resource containing RDF data, we need a widely-applicable method that does not require the use of statistics or metadata from the remote sources and that can operate on top of existing SPARQL 1.1 and SPARQL-LD implementations.
To this end, in this paper we propose and evaluate a set of query reordering methods for SPARQL 1.1 and SPARQL-LD. We focus on fully heuristics-based methods that reorder a query's service graph patterns based on their restrictiveness (selectivity), without requiring the gathering and use of statistics from the remote sources. The objective is to decrease the number of intermediate results and thus the number of calls to remote resources. We also propose the use of a greedy algorithm for computing a near-optimal query execution plan for cases with a large number of service patterns.
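To make the idea concrete, below is an illustrative sketch (not the paper's actual optimizer) of greedy reordering driven by a simple restrictiveness heuristic: at each step, pick the SERVICE pattern with the fewest still-unbound variables. The ServicePattern class and the example patterns are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServicePattern:
    name: str
    variables: frozenset  # variables appearing in the SERVICE graph pattern

def unbound_count(p: ServicePattern, bound: set) -> int:
    return len(p.variables - bound)  # fewer unbound variables => more restrictive

def greedy_order(patterns):
    bound, order, remaining = set(), [], list(patterns)
    while remaining:
        best = min(remaining, key=lambda p: unbound_count(p, bound))
        order.append(best)
        remaining.remove(best)
        bound |= best.variables  # these variables are bound for later patterns
    return order

p1 = ServicePattern("coauthors", frozenset({"?coauthor"}))
p2 = ServicePattern("dblp", frozenset({"?coauthor", "?conf", "?pub"}))
print([p.name for p in greedy_order([p2, p1])])  # ['coauthors', 'dblp']
```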
In a nutshell, in this paper we make the following contributions:
- We propose a set of heuristics-based query reordering methods for SPARQL 1.1 and SPARQL-LD, which can also exploit a greedy algorithm for choosing a near-optimal query execution plan. The query optimizer is publicly available as open source. 5
- We report the results of an experimental evaluation which show that a method that considers the number and type of unbound variables and the number and type of joins achieves the optimal query plan in 88% of the examined queries, while the greedy algorithm has an accuracy of 94% in finding the reordering with the lowest cost.
The rest of this paper is organized as follows: Section 2 presents the required background and related works. Section 3 describes the proposed query reordering methods. Section 4 reports experimental results. Finally, Section 5 concludes the paper and discusses interesting directions for future work.
SPARQL-LD
The service operator of SPARQL 1.1 (service a P) is defined as a graph pattern P evaluated in the SPARQL endpoint specified by the URI a, while (service ?X P) is defined by assigning to the variable ?X all the URIs (of endpoints) coming from partial results, i.e., those that get bound after executing an initial query fragment [5]. The idea behind SPARQL-LD is to enable the evaluation of a graph pattern P not exclusively against a SPARQL endpoint a, but generally against an RDF graph G_r specified by a Web resource r. Thus, a URI given to the service operator can now also be the dereferenceable URI of a resource, the Web page of an entity (e.g., of a person), an ontology (OWL), Turtle or N3 file, or even the URL of a service that dynamically creates and returns RDF data. In case the URI is not the address of a SPARQL endpoint, the RDF data that may exist in the resource are fetched in real time and queried for the graph pattern P. Currently, SPARQL-LD supports a variety of standard formats, including RDF/XML, N-Triples, N3/Turtle, RDFa, JSON-LD, Microdata, and Microformats [7,8].
SPARQL-LD is a generalization of SPARQL 1.1 in the sense that every query that can be answered by SPARQL 1.1 can also be answered by SPARQL-LD. Specifically, if the URI given to the service operator corresponds to a SPARQL endpoint, then it works exactly as the original SPARQL 1.1 (the remote endpoint evaluates the query and returns the result). Otherwise, instead of returning an error (and no bindings), it tries to fetch and query the triples that may exist in the given resource. SPARQL-LD has been implemented using Apache Jena [1], an open source Java framework for building Semantic Web applications. The implementation is available as open source 6.
Listing 1 shows a query that can be answered by SPARQL-LD. The query returns all co-authors of Pavlos Fafalios together with the number of their publications and the number of distinct conferences in which they have a publication. The query first accesses the RDFa-embedded web page of Pavlos Fafalios to collect his co-authors, then queries a SPARQL endpoint over DBLP to retrieve the conferences, and finally accesses the URI of each co-author to gather their publications. Notice that the co-author URIs derive at query-execution time. In the same query, one could further integrate data from any other web resource, or from a web API which can return results in a standard RDF format.
The query in Listing 1 is answered within a few seconds. However, if we change the order of the first two service patterns, its execution time increases dramatically to many minutes. To cope with this problem, in this paper we propose methods to reorder the query's service patterns and thus improve the query execution time in case of non-optimal query formulation.
SPARQL Endpoint Federation
The idea of query federation is to provide integrated access to distributed sources on the Web. DARQ [18] and SemWIQ [12] are two of the first systems to support SPARQL query federation to multiple SPARQL endpoints. They provide access to distributed RDF data sources using a mediator service that transparently distributes the execution of queries to multiple endpoints. Given the need to address query federation, in 2013 the SPARQL W3C working group proposed a query federation extension for SPARQL 1.1 [17]. Buil-Aranda et al. [5] describe the syntax of this extension, formalize its semantics, and implement a static optimization for queries that contain the OPTIONAL operator, the most costly operator in SPARQL.
There is also a plethora of query federation engines to support efficient SPARQL query processing to multiple endpoints. The work in [20] provides a comprehensive analysis, comparison, and evaluation of a large number of SPARQL endpoint federation systems.
The ANAPSID system [3] adapts query execution schedulers to data availability and run-time conditions. It stores information about the available endpoints and the ontologies used to describe the data in order to decompose queries into sub-queries that can be executed by the selected endpoints, while adaptive physical operators are executed to produce answers as soon as responses from the available remote sources are received. The query optimizer component of ANAPSID exploits statistics about the distribution of values in the different datasets in order to identify the best combination of sub-queries.
The work in [15] proposes a heuristic-based approach for endpoint federation. Basic graph patterns are decomposed into sub-queries that can be executed by the available endpoints, while the endpoints are described in terms of the list of predicates they contain. Similar to ANAPSID, sub-queries are combined in a bushy tree execution plan, while the SPARQL 1.1 federation extension is used to specify the URL of the endpoint where the sub-query will be executed.
SPLENDID [10] is another endpoint federation system which relies on statistical data obtained from VoID descriptions [4]. For triple patterns with bound variables not covered in the VoID statistics, SPLENDID sends ASK queries to all the pre-selected data sources and removes those which fail the test. Bind and hash joins are used to integrate the results of the sub-queries, while a dynamic programming strategy is exploited to optimize the join order of SPARQL basic graph patterns.
ADERIS [13] is a query processing system for efficiently joining data from multiple distributed endpoints. ADERIS decomposes federated SPARQL queries into multiple source queries and integrates the results through an adaptive join reordering method for which a cost model is defined.
The FedX framework [21] provides join processing and grouping techniques to minimize the number of requests to remote endpoints. Source selection is performed without the need of preprocessed metadata. It relies on SPARQL ASK queries and a cache which stores the most recent ASK requests. The input query is forwarded to all of the data sources and those sources which pass the SPARQL ASK test are selected. FedX uses a rule-based join optimizer which considers the number of bound variables. One of the methods we examine in this paper (UVC) is also based on the same heuristic.
Regarding more recent works, SemaGrow [6] is a federated SPARQL querying system that uses metadata about the federated data sources to optimize query execution. The system strikes a balance: its query optimizer introduces little overhead and has appropriate fallbacks in the absence of metadata, while still producing optimal plans in many situations. It also exploits non-blocking and asynchronous stream processing to achieve efficiency and robustness.
Finally, Odyssey [14] is a cost-based query optimization approach for endpoint federation. It defines statistics for representing both entities and links among datasets, and uses the computed statistics to estimate the size of intermediate results. It also exploits dynamic programming to produce an efficient query execution plan with a low number of intermediate results.
Our approach. In this work, we focus on optimizing SPARQL 1.1 and SPARQL-LD queries through plain query reordering. The input is a query containing two or more service patterns, and the output is a near-optimal (in terms of query execution time) reordering of the contained services, i.e., an optimized reordered query. Given the dynamic nature of Linked Data as well as the advanced query capabilities offered by SPARQL-LD (enabling to query any remote HTTP resource containing or returning RDF data), we aim at providing a general query reordering method that does not require statistics or metadata from the remote resources and that, contrary to the aforementioned works, can be directly applied on top of existing SPARQL 1.1 and SPARQL-LD implementations.
Selectivity-based Query Optimization
Another line of research has investigated optimization methods for non-federated SPARQL queries based on selectivity estimation.
The work in [23] defines and analyzes heuristics for selectivity-based basic graph pattern optimization. The heuristics range from simple triple pattern variable counting to more sophisticated selectivity estimation techniques that consider pre-computed triple pattern statistics. Likewise, [24] describes a set of heuristics for deciding which triple patterns of a SPARQL query are more selective and thus to the benefit of the planner to evaluate first. The planner tries to maximize the number of merge joins and reduce intermediate results by choosing triple patterns most likely to have high selectivity. [22] extends these works by considering more SPARQL expressions, in particular the operators FILTER and GRAPH.
In [11] the authors study star and chain patterns with correlated properties and propose two methods for estimating their selectivity based on precomputed statistics. For star query patterns, Bayesian networks are constructed to compactly represent the joint probability distribution over values of correlated properties, while for chain query patterns a chain histogram is built, which achieves a good balance between estimation accuracy and space cost.
Our approach. Similar to [23], [24] and [22], we exploit heuristics for selectivity estimation. However, we focus on reordering a set of service graph patterns in order to optimize the execution of SPARQL 1.1 and SPARQL-LD queries. Some of the heuristics we examine in this paper are based on the results of these previous works.
Query Reordering
We first model query reordering as a cost minimization problem (Section 3.1). Then we describe four heuristics-based methods for computing the cost of a service graph pattern (Section 3.2). We also discuss how we handle some special query cases (Section 3.3). At the end we motivate the need for a greedy algorithm for computing a near-optimal reordering for cases of large number of service graph patterns (Section 3.4).
Problem Modeling
Let Q be a SPARQL query and let S = (s_1, s_2, ..., s_n) be a sequence of n service patterns contained in Q. For a service pattern s_i, let g_i be its nested graph pattern and B_i be the list of bindings of Q before the execution of s_i. Our objective is to compute a reordering S′ of S that minimizes its execution cost. Formally:

S′ = argmin_{S′′ ∈ perm(S)} cost(S′′)

In our case, the execution cost of a sequence of service patterns S′ corresponds to its total execution time. However, the execution time of a service pattern s_i ∈ S′ highly depends on the query patterns that precede s_i, while the bindings produced by s_i affect the execution time of the succeeding service patterns. Considering the above, we can estimate cost(S′) as the weighted sum of the cost of each service pattern s_i ∈ S′ given B_i. Formally:

cost(S′) = Σ_{i=1..n} w_i · cost(s_i | B_i)

where cost(s_i | B_i) expresses the cost of service pattern s_i given B_i (i.e., given the already-bound variables before executing s_i), and w_i is the weight of service pattern s_i, which expresses the degree up to which it influences the execution time of the sequence S′. We define w_i = (n − i + 1)/n. In this case, for a sequence of four service patterns S′ = (s_1, s_2, s_3, s_4), the weights are: w_1 = 1.0 (since s_1 influences the execution time of 3 service patterns), w_2 = 0.75 (s_2 affects 2 service patterns), w_3 = 0.5 (s_3 affects 1 service pattern), and w_4 = 0.25 (s_4 does not affect any other service pattern). Now, the cost of each service pattern s_i can be estimated based on the selectivity/restrictiveness of its graph pattern g_i given B_i. Formally:

cost(s_i | B_i) = unrestrictiveness(g_i | B_i)

A service graph pattern that is very unrestrictive will return a large number of intermediate results (a large number of bindings), which in turn will increase the number of calls to succeeding service patterns, resulting in a higher total execution time. In the query of Listing 1, for example, a large number of bindings of the variables in the first service pattern will result in many calls of the second service. Thus, our objective is to first execute the more restrictive service patterns, which will probably return small result sets.
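To make the cost model concrete, here is a minimal Python sketch (illustrative only; the actual optimizer is implemented in Java) that computes the weighted plan cost from per-pattern cost estimates:

```python
def plan_cost(pattern_costs):
    # pattern_costs[i-1] holds cost(s_i | B_i) for the i-th pattern in S'.
    n = len(pattern_costs)
    return sum(((n - i + 1) / n) * c
               for i, c in enumerate(pattern_costs, start=1))

# For four patterns the weights are 1.0, 0.75, 0.5 and 0.25, as above:
print(plan_cost([3, 1, 2, 5]))  # 3*1.0 + 1*0.75 + 2*0.5 + 5*0.25 = 6.0
```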
As proposed in [23] and [24] (for the case of triple patterns), the restrictiveness of a graph pattern can be determined by the number and type of new (unbound) variables in the graph pattern. The most restrictive graph pattern can be considered the one containing the fewest unbound variables (since fewer bindings are expected). Regarding the type of the unbound variables, subjects can be considered more restrictive than objects, and objects more restrictive than predicates (usually there are more triples matching a predicate than a subject or an object, and more triples matching an object than a subject) [23]. Moreover, the number and type of joins can also affect the restrictiveness of a graph pattern since, for example, an unusual subject-predicate join will probably return fewer bindings. Finally, literals and filter operators usually restrict the number of bindings and thus increase the restrictiveness of a graph pattern. Below, we define formulas for unrestrictiveness that consider the above factors.
Methods for Estimating Unrestrictiveness
We examine four methods for computing the unrestrictiveness cost of a service graph pattern.
I. Variable Count (VC). The first unrestrictiveness measure simply considers the number of graph pattern variables, without considering whether they are bound or not. For a given graph pattern g_i, let V(g_i) be the set of variables of g_i. The unrestrictiveness of g_i can now be defined as:

cost(s_i) = |V(g_i)|

With the above formula, more variables in a graph pattern means a higher unrestrictiveness score. Consider for example the query in Listing 2. The second service pattern contains one variable and is more likely to retrieve a smaller number of results than the first one, which contains three variables. Thus the second service pattern is more restrictive and should be executed first.
II. Unbound Variable Count (UVC).
A service pattern containing many new unbound variables is more likely to retrieve a higher number of results compared to a service pattern with fewer unbound variables. Thereby, we can also consider the set of bindings B_i before the execution of a service pattern s_i. Let first V_u(g_i, B_i) be the set of new (unbound) variables of g_i given B_i. The unrestrictiveness of g_i can now be defined as:

cost(s_i | B_i) = |V_u(g_i, B_i)|

Listing 3 shows an example for this case. After the execution of the first service pattern, we should better run the third one since all its variables are already bound. The second service pattern contains one unbound variable, although its total number of variables is less than that of the third service pattern.
III. Weighted Unbound Variable Count (WUVC). The above formulas do not consider the type of the unbound variables in the graph pattern, i.e., whether they are in the subject, predicate or object position in the triple pattern. For a graph pattern g_i and a set of bindings B_i, let V_u^s(g_i, B_i), V_u^p(g_i, B_i) and V_u^o(g_i, B_i) be the sets of subject, predicate and object unbound variables in g_i, respectively. Let also w_s, w_p and w_o be the weights for subject, predicate and object variables, respectively. The unrestrictiveness of g_i can now be defined as:

cost(s_i | B_i) = w_s·|V_u^s(g_i, B_i)| + w_p·|V_u^p(g_i, B_i)| + w_o·|V_u^o(g_i, B_i)|

According to [23], subjects are in general more restrictive than objects and objects are more restrictive than predicates, i.e., there are usually more triples matching a predicate than an object, and more triples matching an object than a subject. When considering variables, selectivity is the opposite: a subject variable may return more bindings than an object variable, and an object variable more bindings than a predicate variable. Consider for example the query in Listing 4. The subjects having Greece as the birth place (1st service pattern) are expected to be more than the friends of George (2nd service pattern), while the friends of George are expected to be more than the different properties that connect George with Nick (3rd service pattern). Thus, one can define weights so that w_s > w_o > w_p. Based on the distribution of subjects, predicates and objects in a large Linked Data dataset of more than 28 billion triples (gathered from more than 650 thousand sources) [9], we define the following weights: w_s = 1.0, w_o = 0.8, w_p = 0.1. Moreover, if a variable exists in more than one triple pattern position (e.g., both as subject and object), we consider it as being in the more restrictive position.
IV. Joins-aware Weighted Unbound Variable Count (JWUVC). When a graph pattern contains joins, its restrictiveness is usually increased depending on the number and type of joins (star, chain, or unusual join) [11]. For a graph pattern g_i, let J_*(g_i), J_→(g_i), and J_×(g_i) be the number of star, chain, and unusual joins in g_i, respectively. We consider the subject-subject and object-object joins as star joins, the object-subject and subject-object as chain joins, and all the others as unusual joins. Let also j_*, j_→ and j_× be the weights for star, chain, and unusual joins, respectively. Based on the assumption that, in general, unusual joins are much more restrictive than chain joins, and chain joins are more restrictive than star joins [24], one can define weights so that j_× > j_→ > j_*. We define: j_× = 1.0, j_→ = 0.6, j_* = 0.5. The following unrestrictiveness formula considers both the number and the type of joins in the graph pattern g_i, reducing the WUVC score by the weighted join counts:

cost(s_i | B_i) = w_s·|V_u^s(g_i, B_i)| + w_p·|V_u^p(g_i, B_i)| + w_o·|V_u^o(g_i, B_i)| − (j_*·J_*(g_i) + j_→·J_→(g_i) + j_×·J_×(g_i))

Listing 5 shows an example for this case.
The first service pattern contains a star join, the second a chain join, and the third an unusual join. The unusual join will probably return fewer results than the star and chain joins. In Section 4 we evaluate the effectiveness of the above four methods in finding the query reordering that is optimal in terms of query execution time.
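For illustration, the four measures could be computed as in the following Python sketch (the actual optimizer is implemented in Java on top of Apache Jena). The `Pattern` representation (a set-valued `variables` attribute, a `position(v)` helper, and per-type `joins` counts) is hypothetical, while the weights and the way JWUVC combines the join term with the WUVC score follow the formulas above:

```python
W_VAR = {"s": 1.0, "o": 0.8, "p": 0.1}            # subject/object/predicate weights
W_JOIN = {"star": 0.5, "chain": 0.6, "unusual": 1.0}

def cost_vc(pattern):
    # VC: count all variables, bound or not
    return len(pattern.variables)

def cost_uvc(pattern, bound):
    # UVC: count only variables not yet bound
    return len(pattern.variables - bound)

def cost_wuvc(pattern, bound):
    # WUVC: weight each unbound variable by its (most restrictive) position
    return sum(W_VAR[pattern.position(v)] for v in pattern.variables - bound)

def cost_jwuvc(pattern, bound):
    # JWUVC: joins make a pattern more restrictive, so they lower the score
    join_term = sum(W_JOIN[kind] * count for kind, count in pattern.joins.items())
    return cost_wuvc(pattern, bound) - join_term
```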
Handling of Special Cases
Query plans with same cost. In case the lowest unrestrictiveness cost is the same for two or more query reorderings, we consider the number of literals and filter operators contained in the graph patterns. Literals are generally considered more selective than URIs [24], while a filter operator limits the bindings of the filtered variable and thus increases the selectivity of the corresponding graph pattern [23]. Thus, we count the total number of literals and filter operators in each service pattern, and consider it when we get query plans with the same unrestrictiveness cost. If the corresponding service patterns contain the same number of literals and filter operators, then we maintain their original ordering, i.e., we order them based on their order in the input query.
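One simple way to realize this tie-breaking is as a secondary sort key; since a stable sort preserves input order for full ties, patterns that also agree on the number of literals and filters keep their order in the input query (`num_literals` and `num_filters` are hypothetical pattern attributes):

```python
def order_patterns(patterns, cost):
    # Primary key: unrestrictiveness cost (lower first).
    # Secondary key: more literals/filters -> more selective -> earlier.
    return sorted(patterns,
                  key=lambda p: (cost(p), -(p.num_literals + p.num_filters)))
```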
SERVICE within OPTIONAL.
In case a service call is within an optional pattern, then we separately reorder the service patterns that exist before and after it. An optional pattern requires a left outer join and thus changing its order can distort the query result.
Variable in SERVICE clause. If a service clause contains a variable instead of a URI, we should ensure that this variable gets bound before the execution of the service pattern. Thereby, during reordering we ensure that all other services containing this variable in their graph patterns are placed before the service pattern having the variable in its clause.
Projection variables. The set of variables that appear in the SELECT clause of a service pattern are called the projection variables. Since these are part of the answer and affect the size of the bindings, we only consider these variables in all the proposed formulas.
UNION operator, nested patterns, combination of triple and SERVICE patterns. In this work we do not study the case of queries containing the UNION operator or nested patterns. Such queries require the reordering of groups of service patterns, which is not currently supported by our implementation. In addition, our implementation does not yet support queries containing both triple patterns (that query the "local" endpoint) and service patterns. We leave the handling of these cases as part of our future work.
Computing a near-optimal query-execution plan
Computing the unrestrictiveness score for all the different query reorderings may be prohibitive for a large number of service patterns, since the complexity is n! (where n is the query's number of service patterns). This applies to all the proposed optimization methods apart from VC, where not all permutations need to be computed. For example, for queries with 5 service patterns there are 5! (= 120) different permutations; however, for 10 service patterns this number increases to more than 3.6 million permutations, and for 15 to around 1.3 trillion. Table 1 shows the time required for computing the reordering with the lowest cost for different numbers of service patterns using the JWUVC method (the time is almost the same for UVC and WUVC). Our implementation (cf. Footnote 5) is in Java and uses Apache Jena for decomposing the SPARQL query, and we ran the experiments on an ordinary computer with an Intel Core i5 @ 3.2 GHz CPU, 8 GB RAM, running Windows 10 (64 bit). We see that the time is very high for queries with many service patterns. For example, more than 1 hour is required just to find the reordering with the lowest cost for a query with 12 service patterns. This illustrates the need for a cost-effective approach which can find a near-optimal query execution plan without needing to check all the different permutations. We adopt a greedy algorithm that starts with the service pattern with the smallest unrestrictiveness score (local optimal choice) and continues with the next service pattern with the smallest score, considering at each stage the already-bound variables of the previous stages. To find the local optimal choice, we can use any of the proposed unrestrictiveness formulas. Considering the UVC formula, for example, in the query of Listing 6 the greedy algorithm first selects the 2nd service pattern since it contains only 1 variable. In the next stage, it selects the 3rd service pattern, which contains 2 unbound variables, fewer than those of the 1st service pattern.
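A minimal Python sketch of this greedy selection follows; `score(pattern, bound)` stands for any of the unrestrictiveness formulas of Section 3.2, and the set-valued `variables` attribute is a hypothetical accessor for a pattern's (projection) variables:

```python
def greedy_reorder(patterns, score):
    remaining = list(patterns)
    bound = set()   # variables bound by the patterns selected so far
    plan = []
    while remaining:
        # Local optimal choice: the most restrictive remaining pattern.
        best = min(remaining, key=lambda p: score(p, bound))
        plan.append(best)
        remaining.remove(best)
        bound |= set(best.variables)
    return plan
```

With n service patterns, this requires on the order of n² score evaluations instead of examining all n! permutations.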
Evaluation
We evaluated the effectiveness of the proposed query reordering methods using real federated queries from the LargeRDFBench [19] dataset. 7 From the provided 32 SPARQL 1.1 queries, we did not consider 10 queries that make use of the UNION operator (not currently supported by our implementation) and 5 "large data" queries (due to high memory requirements). To consider a larger number of possible query permutations, and since some of the queries contain only 2 service patterns, we removed the OPTIONAL operators, keeping, though, the embedded service pattern(s). 8 For instance, we transformed each such query by unwrapping the embedded service pattern(s) from the OPTIONAL clause. The final evaluation dataset contains 17 queries of varying complexity (each one containing at least two service patterns), while their service patterns require access to a total of 7 remote SPARQL endpoints. Note that there is no benchmark for SPARQL-LD; however, this does not affect the objective of our evaluation since the proposed methods do not distinguish between SPARQL 1.1 and SPARQL-LD queries (a SPARQL endpoint can be considered an HTTP resource containing all the endpoint's triples).
For each query, we found the optimal reordering by computing the execution time of all possible permutations (average of 5 runs). Then, we examined the effectiveness of the proposed optimization methods (VC, UVC, WUVC, and JWUVC, as described in Section 3.2) in finding the optimal query execution plan. Figure 1 shows the results. VC finds the optimal query plan in 8/17 queries (47%), UVC in 10/17 queries (59%), WUVC in 9/17 queries (53%), and JWUVC in 15/17 queries (88%). We notice that the JWUVC method, which considers the number and type of joins, achieves a very good performance. Given the infrastructure used to host the SPARQL endpoints in our experiments, 9 query reordering using JWUVC achieves a very large decrease in the query execution time for many of the queries (for example, from minutes to a few seconds for the queries S4, S10, S12, C7, C10).
JWUVC fails to find the optimal query plan for the queries S13 and C6, which both contain 2 service patterns. The first service pattern of S13 contains 1 star join and the second 2 star joins. As regards C6, its first service pattern contains 1 star join and 1 chain join, and its second 5 star joins. In both queries, although the second service pattern contains more joins than the first service pattern, it returns a larger number of bindings, and this increases the number of calls to the first remote endpoint and thus the overall query execution time. Note that, without exploiting dataset statistics, such cases are very difficult to catch with an unrestrictiveness formula.
As regards the effectiveness of the greedy algorithm which avoids computing the cost of all possible permutations (cf. Section 3.4), it manages to find the reordering with the lowest cost using JWUVC in 16/17 queries (94%). It fails for the query C2; however, the returned reordering is very close to the optimal (the difference is only a few milliseconds).
One of the limitations of such a fully heuristics-based method is that it is practically impossible to always find the optimal query plan. However, this is also the case for methods that pre-compute and exploit metadata and statistics from the remote resources, or which make use of caching. The reason is that the Web of Data is a huge and constantly evolving information space, meaning that we may always need to query a new, unknown resource discovered during query execution. A solution to this problem is the exploitation of VoID [4], in particular the publishing of a rich VoID file alongside each resource. In this case, an optimizer can access (and exploit for query reordering) such VoID descriptions at query execution time, provided that all publishers follow a common pattern for publishing these VoID files. 10
Conclusion
We have proposed and evaluated a set of fully heuristics-based query reordering methods for federated queries in SPARQL 1.1 and SPARQL-LD. The proposed methods reorder a set of service graph patterns based on their selectivity (restrictiveness) and do not require the gathering and use of statistics or metadata from the remote resources. Such an approach is widely applicable and can be exploited on top of existing SPARQL 1.1 and SPARQL-LD implementations.
Since the new query functionality offered by SPARQL-LD (allowing to query any HTTP resource containing RDF data) can lead to queries with a large number of service patterns, which in turn can dramatically increase the time to find the optimal reordering, we proposed the use of a simple greedy algorithm for finding a near-optimal query execution plan without checking all possible query reorderings. The results of an experimental evaluation using an existing benchmark showed that a query reordering method which considers the number and type of unbound variables and the number and type of joins achieves the optimal query plan in 88% of the examined queries, resulting in a large decrease in the overall query execution time (from minutes to a few seconds in many cases). Regarding the greedy algorithm, it has an accuracy of 94% in finding the reordering with the lowest cost.
As part of our future work, we plan to offer a holistic query reordering approach which will cover any type of federated queries. This involves the handling of queries containing UNION and nested graph patterns, as well as queries which combine triple and service patterns. We also plan to offer this query reordering functionality as a web service, allowing for on-the-fly query optimization. | 2018-06-25T13:25:00.783Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "80129e16d72baa2d1d48300979bfb6580f50d299",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f233971d80089b9eeaab3f38e7666a5a6b97ea64",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
247610368 | pes2o/s2orc | v3-fos-license | Shaking Table Test Investigation on Seismic Performance of Joint Model of Immersed Tunnel
Critical for the seismic safety of immersed tunnels is the magnitude of deformation and force developing in the segment joints. To investigate the seismic performance of segment joints in immersed tunnels, this paper presents a series of shaking table array tests that were performed on a microconcrete tunnel model embedded in soil. The tests take into account uniform excitation and the wave passage effect at different apparent wave velocities of longitudinal seismic excitation. The results showed that the wave passage effect had a great impact on the axial force, bending moment and deformation of the joints. The comparison showed that the structural response under nonuniform earthquake excitation is larger than that under uniform excitation. A simplified model was established in ABAQUS for numerical analysis. The soil around the tunnel was simplified as spring-dampers, the tunnel was simplified as beam elements, and the joints were simulated by nonlinear springs. The numerical simulation results were in good agreement with the experimental data. In addition, the model was analyzed by changing the input apparent wave velocity, joint stiffness and joint number. The results showed that the deformation of the joints was smaller under high apparent wave velocity, while flexible joints experienced greater deformation.
Introduction
Many immersed tunnels have been constructed in soft ground at port areas, where the response of both the soft ground and the immersed tunnels is amplified during earthquakes. Since China is located in one of the most active seismic zones in the world, the earthquake resistance of these immersed tunnels must be checked from various points of view, especially the magnitude of the deformations developing in the segment joints. The seismic excitation may affect the overall stability of the immersed tunnel, leading to decompression of the joint gaskets and jeopardizing the water tightness of the tunnel. Presently, no uniform criteria are available for the seismic design of segment joints. The safety of segment joints is especially important and worthy of particular attention [1].
Recently, more and more researchers have begun to investigate immersed tunnels. Anastasopoulos [2] applied the finite element method (FEM) to investigate the seismic response of a very deep immersed tunnel in Greece, under the simultaneous action of longitudinal, transversal, and vertical seismic excitation. The joints between the tunnel segments are modeled realistically with special nonlinear hyperelastic elements. Oorsouw [3] investigated the behavior of segmental immersed tunnels subjected to seismic loading, focusing on the sensitive segment joint. Jun-Hong Ding [4] used the finite-element code LS-DYNA to analyze the large-scale seismic response of an immersed tunnel. The behavior of nonlinear materials such as soil, the definition of nonreflecting boundaries, and soil-tunnel interaction were taken into account. Yang Ding [4] and Taylor [5] established numerical models of the immersed George Massey tunnel for nonlinear dynamic geotechnical analyses, and centrifuge models were used to verify and calibrate the numerical models. However, most of the literature carries out theoretical and numerical simulation analysis of immersed tunnels, and few actual experimental studies have been carried out. Asynchronous ground motion does not significantly affect buildings, but it may have a significant impact on the seismic response of extended structures such as bridges and tunnels. Consideration of spatially varied seismic input motion for such structures is challenging owing to its complexity [6]. Haitao Yu [7,8] carried out multipoint shaking table tests on a long immersed tunnel. Jun Chen [9,10] investigated a utility tunnel under nonuniform earthquake wave excitation. However, the force in the joints was not measured in those tests. The immersed tunnel is a long linear structure, so it is very important to consider nonuniform earthquake excitation in seismic response analysis. The seismic performance of segment joints is particularly important for tunnel safety. Therefore, it is necessary to carry out shaking table tests to investigate the seismic performance of segment joints under wave-passage-effect excitation.
The Zhoutouzui tunnel is a newly built immersed tunnel in Guangzhou. The whole length is 2200 m, including an immersed section (about 340 m) under the Pearl River [11], as shown in Figure 1. The first and last segments have variable cross sections. The geotechnical engineering investigation report on the Zhoutouzui variable cross section immersed tunnel indicates that the tunnel is laid on rock-soil layers with variable physical and mechanical properties, and the lithology in this area is quite complex; in some parts, intensely weathered zones and weakly weathered zones occur in alternating layers, which makes the mechanical analysis of this tunnel under earthquake loading quite difficult.
To investigate the mechanical properties of immersed tunnel joints under nonuniform earthquakes, this paper presents a series of shaking table array tests that were performed on an immersed tunnel model embedded in soil. The wave passage effect was considered. The main results obtained from the tests are summarized and discussed. Further, a FEM of the test immersed tunnel was established. The results obtained from the suggested numerical model are compared with experimental measurements in terms of displacement. The comparisons show that the numerical results match the experimental measurements quite well. For further analysis, different joint stiffnesses, different apparent wave velocities and multiple joints were taken into account.
Shaking Table Array. In this experimental test, the shaking table array system at Beijing University of Technology (BJUT, Beijing, China), which was able to support multisupport inputs, was used for the dynamic experimental test of the immersed tunnel under asynchronous excitations. Each table consisted of a 1 × 1 m countertop, actuators, link rods, and a base, as shown in Figure 2. The performance parameters of the table array are shown in Table 1 [12]. In this experiment, four independent shaking tables were used, and the space between each table was 1 m.
Test Model.
For the test model under dynamic loads, the physical parameters of the structural dynamic characteristics were as follows: (1) structural geometry (l); (2) material properties, including elastic modulus (E), mass density (ρ), Poisson's ratio (µ), stress (σ), and strain (ε); (3) loads, including force (F) and moment (M); and (4) dynamic indicators, including time (t), frequency (ω), damping ratio (ξ), acceleration (a), stiffness (k), and mass (m). Thus, the solution equation for structural dynamics problems can be expressed as [13]

f(E, ρ, µ, σ, ε, F, M, t, ω, ξ, a, k, m) = 0. (1)

First, because the aim of the experiment was to investigate the response of an immersed tunnel under seismic excitation, the influence of gravity on the model was ignored. Second, a strength model with the same materials as the prototype was adopted, i.e., the materials and the strain of the tunnel model were the same as those of the prototype.
Thus, it was assumed that (1) vertical force did not affect the transverse stiffness of the structure, i.e., the stiffness similitude parameter S_k = S_E·S_L; (2) the strain of the tunnel model was the same as that of the prototype, i.e., the strain similitude parameter S_ε = 1; and (3) the damping ratio of the tunnel model was the same as that of the prototype, i.e., the damping ratio similitude parameter S_ξ = 1. According to equation (1), the similitude relations of the tunnel model could be obtained by dimensional analysis.
Considering the limitations of the laboratory space and the actuation capacity of the shake tables, the test immersed tunnel was scaled to 1/60 of the prototype tunnel in geometric dimensions.
The tunnel was made of microconcrete, and galvanized steel wire gauze was used for the reinforcement. The microconcrete mixture ratio of the experiment was cement (42#) : sand : lime : water = 1 : 6.0 : 0.6 : 0.5. The modulus of elasticity was 7410 N/mm². The compressive strength of the concrete cube was 5.679 N/mm². The density of the microconcrete was similar to that of the prototype concrete, so the similitude parameter of density was set as 1, and the elastic modulus similitude parameter was set as 1/4. According to the scale factor, the model of the two middle segments corresponded to a 503 mm × 160 mm rectangular tunnel having an equivalent concrete lining thickness of about 20 mm. The first and last models corresponded to a variable cross section, the dimensions of which are shown in Figure 3.
In this experiment, the most important requirement for the similitude relations of the dynamic test model was to determine the similitude parameters of acceleration and time. The influence of gravity on the soil model was also ignored. Because it was difficult to satisfy all similitude requirements between the model and the prototype, similitude of the predominant period was used for the design of the model soil, ensuring similarity between the predominant periods of the prototype soil and the model soil. The shear wave velocity varied with depth and with the properties of the soil. The predominant period of the soil can be calculated from the equivalent shear wave velocity (T = 4H/V_se for a soil layer of thickness H); thus, the required shear wave velocity of the model soil follows from the similitude relations (where the subscript m denotes the model and P the prototype) and the similarity ratio of the model soil.
According to the geotechnical engineering investigation report, the equivalent shear wave velocity of the prototype soil was 250 m/s. According to the similitude relations, the equivalent shear wave velocity of the model soil was set as 88.75 m/s. The results of the similitude relations are presented in Table 2.
To obtain an appropriate equivalent shear wave velocity, clay was chosen, and different proportions of sawdust were mixed in as test samples. Three groups of samples with different proportions were selected for the resonant column test. The proportions were 1 : 2.5, 1 : 3 and 1 : 3.5. The maximum shear modulus and shear wave velocity are shown in Table 3. Finally, sawdust : clay : water = 1 : 3 : 2.7 was chosen for the experimental tests [14]. The height of soil above the prototype tunnel was 2.29 m, and its density was 2 g/cm³. The depth of water was 6 m, and its density was 1 g/cm³. The water load was converted to an equivalent soil depth of 3 m. According to the similitude relations, the embedded soil depth in the model was 14 cm.
Joint.
Limited by the test conditions, the similarity ratio of the model was set as 1/60. As a result, many parts of the joint could not be made according to the prototype structure. Considering the joint failure caused by excessive tensile displacement at the joint, the design focus of the joint was the tensile stiffness of the model. The shear keys at the joint were designed so that the joint worked normally, without considering the similitude of the shear keys. The longitudinal deformation of the tunnel depends significantly on the gasket and the prestressed tendons. In this work, 38 sets of tendons were used in the prototype tunnel. The total tension stiffness was 83,406.2 kN/m. The compressive stiffness of the gasket was simplified to a bilinear model based on the results of a finite element analysis, as illustrated in Figure 4. The Gina gasket was simplified to a rubber ring with a section equal to the cross section of the tunnel model, and its thickness was 2 cm. Angle steel was embedded at the end of the tunnel model. The angle steel embedded on the side of the tunnel was cut, and the steel bars welded to the angle steel were bound to the steel wire gauze. The end steel shell was welded to the angle steel, and the rubber ring was glued to the side of the end steel shell. The horizontal and vertical shear keys were made into rectangular rings with a thickness of 2 mm. One end of each shear key was welded to the end steel shell, and the other end was free, so that the free end could be inserted into a moveable middle steel shell with a section larger than that of the shear key. The gap in the shear keys was to prevent collisions during the test, as shown in Figure 5(a). The prestressed cables were simplified to six bolts with a diameter of 4 mm. The effective tensile length of the cables was 55 mm, as shown in Figure 5(b). The cables were prestressed by tightening the nuts. The bolt with a large diameter in the middle of the joint was used to protect the joint from damage during installation. When the position of the tunnel was fixed, this bolt was removed. Four boxes were fabricated, one installed on each shaking table, to contain the soil in the test. The overall dimension of the boxes together was 7.3 m × 3.2 m × 1.2 m (length × width × height), as shown in Figure 6(a). The first and last boxes were 1.5 m × 3.2 m (length × width), and the two middle boxes were 2 m × 3.2 m (length × width). The box frames were welded from angle steel with a dimension of 70 mm × 70 mm × 5 mm. The bottom of each box was a steel plate with a thickness of 10 mm. For the purpose of achieving nonuniform seismic excitation, the gap between adjacent boxes was 100 mm. The bottoms of two adjacent boxes were lapped with steel plates, and bolts were fixed between the steel plates. Butter was smeared between the contact areas to decrease friction. Before conducting the nonuniform seismic excitation tests, the bolts were removed. Both sides of two adjacent boxes were connected by inserted square steel tubes with a dimension of 100 mm × 100 mm. The square steel tubes and the boxes were connected by bolts, as shown in Figure 6(b). When the nonuniform excitation was applied, the square steel tubes were removed, as shown in Figure 6(c). To avoid large deformation at the bottom of the boxes, joist steel and stiffeners were welded under the plate. Rubber sheets and polystyrene foam boards were used to decrease the boundary effect. The thickness of the rubber sheets and foam boards was 15 mm and 200 mm, respectively.
The rubber sheets and box walls were fixed with multiple small bolts. The foam boards and rubber sheets were filled with spray foam. The whole test model was a system composed of the model boxes, the soil and the tunnel embedded in the soil. To ensure that the vibration of the model boxes did not affect the dynamic response of the model soil, the natural frequency of the model boxes had to be far from the natural frequency of the model soil; that is, the natural frequency of the model boxes had to be much higher or lower than that of the model soil. Moreover, the natural frequency of the model had to be less than the maximum working frequency of the shaking table.
To verify the applicability of the boxes, numerical analysis of the box was carried out in ABAQUS. Beam and shell elements were used to simulate the frame and floor of the box, respectively.
The simulation results showed that the fundamental frequency of the boxes at both ends was 33.9 Hz. The fundamental frequency of the middle boxes was 22.6 Hz. The four boxes were tied together, and the overall fundamental frequency was 23.8 Hz. When the soil was taken into consideration, the frequency of the box-soil system was 11.6 Hz. The frequency of the whole box was nearly twice that of the box-soil system, so the boxes could meet the test requirements.
Model Installation.
To avoid relative sliding between the soil and the box bottom, gravel concrete was pre-poured on the box bottom and scratched with a broom. The model soil was compacted manually every 20 cm. When the free-field test was completed, the soil around the model tunnel was excavated, and the tunnel was buried in the soil. The top of the soil was then covered with a plastic sheet to prevent evaporation, and the soil was compacted with added weight, as shown in Figure 7.
Instrumentation.
The layout of the sensors is depicted in Figure 8. A stands for accelerometer, D for displacement sensor, and F for force sensor. The accelerometers were applied to the soil and the tunnel. A1-A4 were arranged on the tunnel to measure its acceleration response, A5-A7 were arranged on the joints, and A8-A11 were buried at different heights in the soil. Laser displacement sensors (D1-D3) were arranged on the joints. Twelve force sensors were applied to the joints, with four force sensors on each joint.
The positions of the sensors are shown in Figure 8(c). To prevent damage to the accelerometers, they were wrapped in balloons before being buried in the soil, as shown in Figure 9(a). The force sensors were connected to the steel shell through screws, and the two ends of each screw were fixed by bolts on both sides of the steel shell, as shown in Figure 9(b). The laser displacement sensor was fixed on a horizontal shear key on one side of the joint, and a baffle was placed on the other side of the joint to receive the laser signal, as shown in Figures 9(c) and 9(d).
Test Case.
To simulate the time lag in the arrival of the waveform at each box bottom, considering the wave passage effect, it was assumed that the wave motion propagated from shaking table 1 to 4, meaning that the time difference in the wave motion between adjacent shaking tables is [15]

Δt = D / V_p

where D is the distance between adjacent shaking tables along the direction from shaking table 1 to 4, and V_p is the apparent wave velocity. The Chi-Chi ground motion acceleration was applied with apparent wave velocities of 100 m/s, 200 m/s, 300 m/s, 600 m/s, and infinity (uniform). The magnitude was gradually increased from 0.06 g to 0.4 g. According to the scaling law, the frequency and dynamic time in the shaking table tests were n times and 1/n those of the prototype, respectively. The sampling frequency of the data acquisition was set at 1000 Hz. Figure 10 shows the variation of the peak value of A10 with the input peak ground acceleration (PGA). It was seen that for PGA < 0.19 g the soil behaved nearly linearly. When PGA > 0.19 g, the peak value exhibited distinctly nonlinear behavior. Figure 11 shows the amplification factor (AF) of the soil and tunnel under different apparent wave velocities for PGA = 0.12 g and 0.26 g. It can be observed that when PGA = 0.12 g, the AFs of the soil and tunnel were 1.63 and 1.65 under uniform excitation. However, for PGA = 0.26 g, the AFs were 1.42 and 1.56, and the difference was larger than at PGA = 0.12 g. It is also found that the AF of the tunnel decreases with higher input PGA. The reason is that the higher the input PGA, the more pronounced the soil nonlinearity. When considering the wave passage effect, the AF of the tunnel was larger than that of the soil for input PGA = 0.12 g. However, for input PGA = 0.26 g, the AF of the tunnel was smaller than that of the soil (except for the case of V_p = 300 m/s). This is caused by the nonlinear characteristics of the soil and by the wave passage effect, which leads to sliding between the soil and the tunnel.
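For reference, the time lags used for the nonuniform input cases can be computed directly from Δt = D/V_p. The following minimal Python sketch does this; the table spacing D is an assumed placeholder, since the exact centre-to-centre distance follows from the table and box layout:

```python
D = 2.0  # assumed distance between adjacent shaking tables, m (placeholder)
for vp in (100, 200, 300, 600):                       # apparent wave velocities, m/s
    lags = [round(i * D / vp, 4) for i in range(4)]   # arrival lags for tables 1 to 4
    print(f"Vp = {vp} m/s -> time lags {lags} s")
```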
Due to space limitations, only the input PGA = 0.12 g case is presented in this paper. It is interesting to compare the actual input excitation of the shaking table with that developed in the soil and tunnel. For this purpose, the time histories and corresponding autospectra of sensors A4 and A10 are depicted in Figure 12. It is seen from these figures that the acceleration time-history curves of A4 and A10 were largely synchronous under uniform seismic excitation. When considering the wave passage effect, the movement of the tunnel (A4) lagged behind that of the soil (A10). The autospectrum figures show that the curve of A4 was similar to that of A10 under uniform seismic excitation.
However, the autospectra present multiple peak values under wave passage excitation, making the spectral composition much richer than that under uniform excitation.
Axial Force of Joints.
Because the initial pretension could not be measured in the dynamic experiment, the initial value of each force sensor was set to zero. Under uniform seismic excitation, joint 1 (J1) mainly experienced compression, and J3 always experienced tension. Compared to J1 and J3, J2 experienced less force, as shown in Figure 13. The range of force varied within 120 N. This indicates that the tunnel essentially "follows" the uniform excitation. However, when considering the wave passage effect, the axial force was amplified about 10 times. The central joint experienced less force than the terminal ones, and the maximum tension occurred in J1. J3 experienced the largest compression in all cases. The maximum tension occurred in the case of V_p = 100 m/s, with a value of 1436 N. According to the similitude relations, the maximum axial force of the prototype was 20822 kN. The mean tensile stress of the prestressed tendons was 20.9 MPa, in addition to the initial stress of 2 MPa. The stress was within acceptable limits in all cases. Figure 14 illustrates the bending moments M_z and M_y for different apparent wave velocity cases. It is shown that J1 and J3 had identical bending directions under uniform excitation. When considering the wave passage effect, the bending moment decreased along the wave propagation direction. M_z was larger than M_y. Changing V_p led to only a slight difference in the bending moment.
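As a rough illustration of the model-to-prototype conversion quoted above, the following sketch assumes the usual strength-model force scale S_F = S_E · S_L² with S_E = 1/4 and S_L = 1/60; this is an assumption (Table 2 is not reproduced here), and it only approximates the reported 20822 kN:

```python
S_E, S_L = 1 / 4, 1 / 60        # elastic modulus and geometric scale factors
S_F = S_E * S_L ** 2            # assumed force scale factor
F_model = 1436.0                # N, maximum measured joint tension
F_prototype = F_model / S_F     # N
print(f"{F_prototype / 1e3:.0f} kN")  # ~20678 kN, close to the reported value
```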
Deformation of Joints.
Deformation time histories of the immersion joints are portrayed in Figure 15, and the maximum values of the deformation are shown in Figure 16. The gaskets experienced a slight initial compression from the bolts; we set this initial compression as zero. The seismic oscillation caused successive decompression and recompression of the gaskets. Considering the wave passage effect, the joints experienced more relative displacement than under uniform excitation. For V_p = 200 m/s, the maximum decompression was Δ_max = 0.047 mm. Converted to the prototype tunnel, the deformation was 28.2 mm, less than the precompressed value (50 mm), so the deformation was within acceptable limits.
Finite Element Model of the Immersed Tunnel.
The finite element model (FEM) was used to perform nonlinear dynamic transient analysis of the tunnel in ABAQUS. The model layout is depicted in Figure 17. Tunnel segments were simulated using beam elements. Each immersion joint was modeled with six nodes, which were rigidly connected to the segment end beam with special transitional rigid elements [2,16]. Adjacent nodes were connected to each other with single-degree-of-freedom nonlinear springs representing the stiffness of the joint. All beams were connected to the soil through interaction springs and dashpots. The first and last segments were also connected with springs and dashpots. The analysis was conducted in two stages. First, static pressure was applied to the end of each segment to simulate the initial hydrostatic longitudinal compression. At the second stage, the model was subjected to longitudinal dynamic earthquake shaking. The acceleration time histories measured by A10 were applied to the supports of the springs and dashpots with the experimental time lag.
Soil-Tunnel Interaction Parameters.
To obtain proper values of the longitudinal (x) and vertical (z) supporting spring and dashpot constants, a rigid long rectangular foundation on a half-space was utilized [15]. For the surface of a homogeneous half-space, the vertical static stiffness is determined according to

K_z,sur = [2GL/(1 − ν)] · (0.73 + 1.54·χ^0.75)

Longitudinal stiffness is

K_y,sur = [2GL/(2 − ν)] · (2 + 2.5·χ^0.85)

where χ = A_b/(4L²); G is the shear modulus obtained from Table 3; 2L = 5.75 m is the length of the model tunnel; 2B = 0.6 m is the width of the tunnel; ν is the Poisson's ratio of the soil; and A_b is the contact area of the tunnel. The dynamic stiffness is

K_dyn = K · k(ω)
where K is the static stiffness of the "spring" and k = k(ω) is the dynamic stiffness coefficient. k_y = k_y((L/B), ν; α_0) is plotted in Figure 17. The vertical radiation dashpot coefficient is determined by

C_z = ρ · V_La · A_b · c_z

where V_La = [3.4/(π(1 − ν))] · V_s, V_s is the shear wave velocity of the soil, and ρ is the density of the soil. The values of V_s and ρ were selected from Table 3. c_y = c_y((L/B), ν; α_0) is plotted in Figure 18. The longitudinal radiation dashpot coefficient is

C_y = ρ · V_s · A_b · c_y

The total dashpot is determined by adding the material (hysteretic) damping contribution, 2βK/ω (where β is the hysteretic damping ratio), to the radiation damping. The dynamic stiffness and radiation dashpot coefficients are shown in Figures 18 and 19.
For the tunnel fully embedded in a homogeneous half-space, the static stiffness is

K_y,emb = K_y,sur · [1 + (1/21)(D/B)(1 + 1.3χ)] · [1 + 0.2(A_w/A_b)^(2/3)]

where D = 0.16 m is the height of the model tunnel and A_w is the actual sidewall-soil contact area. The vertical dynamic stiffness is obtained by

k_y,emb ≈ k_y,sur · [1 − 0.09(D/B)^(3/4)]

The longitudinal static stiffness and the longitudinal radiation dashpot coefficient are corrected for embedment in an analogous way.
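For illustration, some of these surface quantities can be evaluated as in the following sketch; the Poisson's ratio and soil density used here are assumed placeholder values rather than the Table 3 data:

```python
from math import pi

L, B = 5.75 / 2, 0.6 / 2        # half-length and half-width of the tunnel, m
A_b = 5.75 * 0.6                # base contact area, m^2
nu = 0.3                        # assumed Poisson's ratio of the model soil
rho = 1500.0                    # assumed soil density, kg/m^3
V_s = 88.75                     # shear wave velocity of the model soil, m/s
G = rho * V_s ** 2              # shear modulus, Pa

chi = A_b / (4 * L ** 2)                # shape parameter
V_La = 3.4 / (pi * (1 - nu)) * V_s      # Lysmer's analog velocity
C_z = rho * V_La * A_b                  # vertical radiation dashpot (per unit c_z)
C_y = rho * V_s * A_b                   # longitudinal radiation dashpot (per unit c_y)
print(chi, V_La, C_z, C_y)
```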
Joint Spring Parameters.
The joints between the tunnel segments were modeled with nonlinear springs. In compression, the springs represent the Gina gasket. When the springs are in tension, their stiffness is equivalent to the rigidity of the prestressed tendons. The force-deformation relation is shown in Figure 20. Denoting the stiffness of the Gina gasket by k_0, different Gina gasket stiffnesses (1/4 k_0, 1/2 k_0, 3/4 k_0) are considered in this paper.
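One possible way to encode this force-deformation law is sketched below; the branch stiffnesses and the bilinear knee point are free parameters to be read from Figure 4 and the tendon data, not values given in the text:

```python
def joint_force(u, k_tendon, k_g1, k_g2, u_knee):
    """Joint force for deformation u (u > 0: tension, u < 0: compression)."""
    if u >= 0:                       # tension: prestressed tendons carry the load
        return k_tendon * u
    if u > -u_knee:                  # first (softer) branch of the Gina gasket
        return k_g1 * u
    # second (stiffer) branch beyond the bilinear knee point
    return -k_g1 * u_knee + k_g2 * (u + u_knee)
```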
Comparison of Experimental and Numerical Results.
To verify the rationality and reliability of the model, the data of A10 in the test were taken as the input ground motion in the FEM. The baseline of the data was corrected, the data were filtered, and the 0-50 Hz band was retained. The wave was input along the longitudinal direction of the model, and the time lag was the same as in the experiment. Figure 21 shows the deformation in the test and the FEM for the case of PGA = 0.12 g. The trends of the two curves were consistent, and their maximum tensile values were close. However, there was permanent compression deformation in the test curve, which shifted it downward after 2 s. If the effect of permanent deformation is ignored, the curves of the experiment and the FEM are consistent with each other.
Results of Different Apparent Velocities.
Different apparent velocities, from 300 m/s up to values above 1000 m/s, were input into the model. The maximum deformation of J1 is shown in Figure 22. When the apparent wave velocity was less than 1000 m/s, the tension was obviously greater than the compression. When the velocity was greater than 1000 m/s, the difference between tension and compression was small. Besides, with the increase of wave velocity, the maximum deformation decreased.
Joint Stiffness.
To investigate the deformation of the joints under different stiffnesses, four kinds of stiffness were considered in the FEM. The apparent wave velocity was 600 m/s. The results are shown in Figure 23. When the joint stiffness was k_0, the maximum compression deformation was at J1, and the minimum compression deformation was at J3. When the stiffness of the joint changed to 1/4 k_0, the maximum compression deformation appeared at J2. As the stiffness decreased, the compression and tensile deformations of the joints increased gradually, indicating that flexible joints can withstand greater deformation.
Multijoints.
To investigate the effect of the number of joints on the seismic performance of the immersed tunnel, three-joint and four-joint cases were considered in the numerical analysis. The same ground motion was input to the model, and the apparent wave velocity was 600 m/s. Figure 24 illustrates the deformation time history curves of each joint. The time history curves indicate that the shapes of the curves with three joints and four joints are quite different. When there are three joints in the tunnel, the maximum compression deformation is 0.1702 mm, and the maximum tension is 0.252 mm. When there are four joints, the maximum compression is 0.166 mm, and the maximum decompression is 0.184 mm. Therefore, increasing the number of joints in an immersed tunnel of the same length can improve the seismic performance of the joints.
Conclusion
This paper presented a series of shaking table array tests that were performed on a microconcrete tunnel model embedded in soil. The prototype tunnel was constructed under the Pearl River in China. The soil was made of clay mixed with sawdust. The joints between the tunnel segments were simplified by a rubber ring, and boxes were designed to contain the soil and tunnel. The tests took into account uniform excitation and the wave passage effect at different apparent wave velocities of longitudinal seismic excitation. Compared to the existing research, the joints were redesigned in this test so as to investigate the response of the segment joint force and displacement under nonuniform seismic excitation. The results showed that the tunnel and the soil maintained synchronous motion under uniform seismic excitation. However, when the wave passage effect was considered, the tunnel and soil experienced sliding at the interface. The longitudinal wave passage effect and its input direction can, therefore, have a great impact on the axial force, bending moment and deformation of the joints. The wave passage effect makes the joint deformation tend to nonuniformity. The comparison shows that the structural response under nonuniform seismic excitation is larger than that under uniform excitation. Therefore, the effect of nonuniform seismic excitation should be considered in the design of immersed tunnels. A simplified model was established in ABAQUS for numerical analysis. The soil around the tunnel was simplified as spring-dampers, and the tunnel was treated as resting on a dynamic foundation to calculate the dynamic stiffness coefficients of the springs. The tunnel was simplified as beam elements, and the joints were simulated by nonlinear springs. The numerical simulation results were in good agreement with the experimental data. The immersed tunnel was analyzed by changing the input apparent wave velocity, joint stiffness and joint number. The results showed that the deformation of the joints was smaller under high apparent wave velocity, while flexible joints experienced greater deformation. An increase in the number of joints reduced the deformation of the joints. It is suggested that the response of different segment lengths under earthquakes should be considered in the design of immersed tunnels, so as to determine the best segment length.
Although this research was prompted by the needs of a specific project, many of the conclusions in this study are sufficiently general and may apply to the design of other immersed tunnels.
Data Availability
Data are available upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2022-03-23T15:30:36.285Z | 2022-03-20T00:00:00.000 | {
"year": 2022,
"sha1": "df63331ab8bc7acdde8a40a96409086e1052fb61",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/sv/2022/1095986.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "691f2c408ebf6de273b53a8ded0f3fca72576219",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
125842793 | pes2o/s2orc | v3-fos-license | Electronic and magnetic properties of the cation vacancy defect in m-HfO 2
The electronic and magnetic properties of cation vacancies in m-HfO2 are predicted using density functional theory. The hafnium vacancy is found to introduce a series of charge transition levels in the range 0.76-1.67 eV above the valence band maximum, associated with holes localized on neighboring oxygen sites. The neutral defect adopts an S = 2 spin state, and we compute the corresponding g tensors to aid experimental identification of the defect by electron spin resonance spectroscopy. We find that separated vacancies exhibit weak ferromagnetic coupling and that the interaction is highly anisotropic, being much stronger when mediated by planes of three-coordinated oxygen ions. Further, we characterize the process of thermal detachment of a hole from a neutral vacancy, providing an atomistic model for the p-type conductivity observed experimentally at high temperature. These results provide invaluable information on the electronic and magnetic properties of cation vacancies in HfO2 and can aid future experimental identification of these complex defects.
I. INTRODUCTION
Vacancies are one of the most commonly occurring defects in metal oxide materials and are responsible for diverse optoelectronic phenomena of both fundamental and practical significance. For example, their presence is responsible for modifying the optical properties of minerals [1][2][3], intrinsic electrical conductivity in semiconducting oxides [4][5][6][7][8], charge trapping in microelectronic devices [9][10][11][12][13], and electron-hole recombination centers in photovoltaic or photocatalytic materials [14][15][16]. Oxygen vacancy defects have received considerable attention since many natural and synthesized metal oxide materials are oxygen deficient. While cation vacancies are also usually present, their concentration and properties have been characterized for very few materials [17][18][19][20][21][22][23]. On one hand, this is because widely used experimental probes such as absorption and luminescence spectroscopy and deep level transient spectroscopy cannot easily discriminate cation vacancy defects from other intrinsic and extrinsic defects that may be present in materials. On the other hand, first principles theoretical methods that have proved invaluable for isolating and resolving the properties of oxygen vacancy defects are challenging to apply due to errors associated with the presence of localized holes [24,25]. The uncertainty that remains has fueled considerable speculation on the possible role of cation vacancies in oxides. One such example is provided by hafnium dioxide (HfO2), a material which finds a number of applications in microelectronics [26,27]. It has been suggested that cation vacancies can induce high temperature p-type conductivity [28][29][30] and contribute to the so far unexplained ferromagnetism in HfO2 [31][32][33][34][35]. However, our knowledge of the fundamental electronic and magnetic properties of cation vacancies in HfO2 remains extremely limited, presenting an obstacle to deeper understanding of these complex effects.
In this paper we employ density functional theory (DFT) to predict the electronic and magnetic properties of cation vacancies in monoclinic hafnium dioxide (m-HfO2) from first principles. To ensure accuracy of the results we employ two different approaches that eliminate, at least in part, the self-interaction (SI) error present in widely used DFT approximations [24,25]. We show that these two SI corrected approaches predict properties that are very consistent with each other, giving us confidence in the results. On the other hand, standard semilocal approximations to exchange and correlation yield a qualitatively incorrect description of the defect. We show that the neutral hafnium vacancy involves four holes localized on neighboring oxygen sites associated with a series of charge transition levels in the range 0.76–1.67 eV above the valence band maximum. The defect is predicted to exhibit a net magnetic moment of 4 μ_B, being 20 meV more stable than the zero moment state. However, the ferromagnetic interaction between separated vacancies is very weak even at high concentrations due to the localized nature of the holes, suggesting cation vacancies are an unlikely candidate for explaining the observed ferromagnetism. We find that although in the ground state four holes are localized near the neutral vacancy, one of the holes is relatively weakly bound and can detach as a free polaron, providing an atomistic model for the observed p-type conductivity at high temperature [28][29][30]. We compute g tensors associated with the cation vacancy defect to aid experimental identification of the defect by electron spin resonance (ESR) spectroscopy and also discuss implications of the predictions for other experimental probes such as absorption spectroscopy. Altogether, these results provide deep insight into the electronic and magnetic properties of cation vacancies in HfO2 and will aid future experimental identification of these complex defects.
The rest of the paper is organized as follows. In Sec. II we discuss previous experimental and theoretical studies on cation vacancy defects in oxides as well as the challenges involved in theoretical prediction of their properties. In Sec. III we describe our computational methods, and in Sec. IV we present results on the electronic and magnetic properties of cation vacancies in HfO2. Finally, in Secs. V and VI we discuss the results and present our main conclusions.
II. BACKGROUND
Cation vacancies in most metal oxide materials are able to trap one or a number of holes. In many cases these holes are found to localize on individual oxygen ions surrounding the vacancy, forming O− ions [20,23,25]. The localization of holes in this way in many cases breaks the point group symmetry of the defect through an asymmetric lattice distortion. Therefore, such a defect is perhaps better considered as an acceptor defect with a number of bound small hole polarons [20]. Cation vacancies of this type are thought to exist in most metal oxide materials, but relatively few materials have been well characterized experimentally. A good example is the V− center in MgO, which consists of a missing Mg2+ ion with a single hole localized on a neighboring oxygen ion. The localization of the hole has been detected directly for this defect by ESR spectroscopy [18,19]. An associated optical absorption band peaking near 2.3 eV has also been identified, which is attributed to electronic transitions to the unoccupied hole state above the valence band maximum [17,20]. There is experimental evidence that similar defects exist in a wider range of metal oxide materials including CaO [36], NiO [37], ZnO [38], TiO2 [39], HfO2 [28][29][30], and a number of perovskites [40], but in most cases the understanding of the associated electronic properties is incomplete.
In the absence of clear experimental information, first principles calculations of defect properties using DFT can be invaluable. However, modeling the structure and properties of cation vacancy defects using DFT remains extremely challenging because common semilocal approximations to the exchange-correlation functional, such as the generalized gradient approximation (GGA), suffer from SI error, which tends to artificially delocalize holes [24,25]. Nevertheless there are numerous examples of GGA-DFT calculations for cation vacancies in oxides which generally predict very shallow hole states at odds with experiment [41][42][43]. Such defects are often predicted to lack any symmetry breaking polaronic distortion, even for defects which are known to adopt a lower symmetry structure such as the V− center in MgO [16]. Predictions regarding the magnetic interaction between cation defects are also likely to be inaccurate due to the artificially delocalized hole states, which will exhibit a longer range direct exchange interaction with other holes. Approaches such as DFT+U [44], cancellation of nonlinearity [25,45], and hybrid DFT functionals incorporating nonlocal exchange [8,46,47] eliminate at least part of the SI error and so offer a route to more accurate prediction of cation vacancy properties. Many of these approaches involve a parameter which controls the strength of SI correction (e.g., U in DFT+U or the percentage of Hartree-Fock exchange in hybrid functionals) which in principle can be determined unambiguously by ensuring that the correct linear behavior of the total energy with respect to fractional occupation number is obtained. However, there are still relatively few examples where SI corrected approaches have been used to model the properties of cation vacancy defects in oxides. These include vacancies in MgO [23], NiO [7], Cu2O [21,48], ZnO [49], and several mixed oxides with the spinel structure [50].
Relatively little is known experimentally about the properties of cation vacancies in HfO2. There is some experimental evidence pointing to the fact that they may serve as donors of mobile holes at high temperatures, leading to p-type conductivity [28][29][30]. Early studies on the temperature dependence of conductivity in HfO2 identified two regimes of p-type conductivity. At low temperatures hole conduction was found to be associated with an activation energy of 0.7 eV, whereas at higher temperatures the associated activation energy was significantly reduced. This was interpreted in terms of the thermal liberation of holes from cation vacancies, i.e., V_Hf× → V_Hf′ + h• [28,29]. This idea is corroborated by more recent results which also identify cation vacancies as a source of high-temperature p-type conductivity in HfO2 [30]. In a previous theoretical study we showed how the activation energy for small hole polaron diffusion in m-HfO2 is rather low, 0.14 eV, consistent with the high temperature conductivity observations [51]. There has also been some speculation on the possible role cation vacancies may play in explaining observed ferromagnetism [31][32][33][35]. However, DFT calculations for m-HfO2 employing local or semilocal functionals found relatively weak ferromagnetic coupling between cation vacancies, insufficient to explain experimental observations [34,52].
III. METHODS
Density functional theory (DFT) calculations are performed using the projector augmented wave method as implemented within the Vienna ab initio simulation package [53,54]. The 5p, 6s, and 5d electrons of Hf and the 2s and 2p electrons of O are treated as valence electrons and expanded in a plane wave basis with energies up to 300 eV (400 eV for cell optimization). For the conventional cell of monoclinic HfO2, which is the most stable phase in ambient conditions, an 8 × 8 × 8 Monkhorst-Pack k-point grid is used, and structural optimization is performed until forces are less than 0.01 eV/Å. Using the Perdew-Burke-Ernzerhof (PBE) exchange correlation functional we obtain lattice parameters within 0.6% of experiment for m-HfO2 (a = 5.142 Å, b = 5.192 Å, c = 5.250 Å, and β = 99.65°).
To correct the SI error in the PBE exchange correlation functional we employ the cancellation of nonlinearity (CON) method [25,45] that has been demonstrated previously for modeling small hole polarons in HfO2 [51,55]. This method has also been successfully applied previously to model a range of hole centers in oxides including acceptor defects in transparent conducting oxides [45,56]. The CON approach applies a local occupation dependent potential to the oxygen p states of the following form:

V_hs^{m,σ} = λ_hs (1 − n_{m,σ}/n_host),    (1)

where n_{m,σ} is the fractional occupancy of sublevel m of spin σ in the oxygen p orbital. The reference occupation n_host is the anion p-orbital occupancy in the absence of holes, as determined from the neutral defect-free system. The only free parameter in the potential, λ_hs, is determined by ensuring that the correct linear behavior of the total energy with respect to fractional occupation number is obtained [25,57]. In a previous paper we described the parametrization of this approach for hole polarons in HfO2, yielding λ_hs = 3.8 eV [51]. Defect calculations for the PBE and CON approximations are performed using a 3 × 3 × 3 supercell (324 atoms). We also perform calculations using the Heyd, Scuseria, and Ernzerhof (HSE) hybrid functional, which includes nonlocal exchange in order to correct the SI error [46]. Using the HSE functional we obtain lattice parameters for m-HfO2 again in good agreement with experiment (a = 5.144 Å, b = 5.190 Å, c = 5.330 Å, and β = 99.66°). Defect calculations for the HSE functional are performed using a smaller 2 × 2 × 2 supercell (96 atoms) owing to the increased computational expense of these calculations. In all cases total energies are corrected for potential alignment and image-charge interactions as described in Ref. [25].
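To make the behavior of Eq. (1) concrete, the minimal sketch below evaluates the correction for a few occupations. Everything here except λ_hs = 3.8 eV is an illustrative assumption: the functional form follows the hole-state potential implied by the definitions above (reconstructed, since the equation was lost in extraction), and the reference occupancy value and function names are ours, not the paper's.

```python
# Sketch of the occupation-dependent CON potential of Eq. (1), assuming
# the hole-state form V_hs = lambda_hs * (1 - n/n_host) on O p orbitals.
# Values other than LAMBDA_HS are illustrative placeholders.

LAMBDA_HS = 3.8   # eV, parametrized for hole polarons in m-HfO2 [51]
N_HOST = 0.97     # reference O p-orbital occupancy (assumed value)

def con_potential(n_m_sigma, lambda_hs=LAMBDA_HS, n_host=N_HOST):
    """Potential (eV) felt by an O p sublevel (m, sigma) with occupancy n."""
    return lambda_hs * (1.0 - n_m_sigma / n_host)

# A filled sublevel (n ~ n_host) feels no correction, while one hosting a
# hole (n -> 0) is shifted up by ~lambda_hs; fractional occupations are
# penalized, which restores linearity of E(N) and localizes the hole.
for n in (N_HOST, 0.5, 0.0):
    print(f"n = {n:4.2f}  ->  V_hs = {con_potential(n):+5.2f} eV")
```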
The ESR properties of the Hf vacancy in its different charge states are calculated by means of an embedded cluster model described in detail in Refs. [58] and [59]. This model has been used successfully to calculate the optical properties and ESR parameters of oxygen vacancies, electron and hole polarons, and excitons in pure and doped m-HfO2 [47,58,60,61]. In this model, a quantum-mechanical (QM) cluster including the vacancy, the distorted region accommodating the defect, and its vicinity is embedded into the rest of the crystal represented by a lattice of classical rigid ions. In order to design this model, we construct an approximately spherical nanocluster using as a building block a 324-atom supercell of bulk m-HfO2 with zero charge and dipole moment, with its geometry obtained from the periodic DFT calculations. This nanocluster contains 8748 classical ions and has a radius of about 26 Å. Then we create the vacancy environment by removing one Hf atom in the center of the nanocluster and modifying the local geometry around the defect to exactly match the lattice relaxation obtained in the periodic calculations up to a radius of about 5 Å (displacements of ions induced by the defect beyond this distance calculated in the periodic model are negligibly small). This computational scheme is implemented in the GUESS computer code [59], which employs the GAUSSIAN09 package [62] for calculating the electronic structure of the QM cluster in the electrostatic potential of the rest of the lattice.
The QM cluster surrounding the Hf vacancy includes 33 hafnium ions and 68 oxygen ions. All Hf ions outside the quantum cluster and within a radius of 11 Å from the center are represented by large-core Hay and Wadt relativistic effective core potentials [63], which substitute all but four electrons of a hafnium atom. This prevents an artificial polarization of the electron density toward positive point ions outside the quantum cluster. The point ions outside the quantum cluster carry formal charges and contribute to the electrostatic potential on the quantum cluster ions (see Refs. [59], [61], and [64] for more detail). As in previous studies, we use Gaussian basis sets on oxygen and hafnium ions optimized for the m-HfO2 case (see Ref. [65] for further details on the basis sets used). We use the HSE density functional to calculate the electronic structure and the g tensor for the vacancy environment.
IV. RESULTS

A. Formation energies
The m-HfO2 crystal structure is characterized by two types of oxygen ion which differ in their coordination to hafnium ions, being either three-coordinated (3C) or four-coordinated (4C). The hafnium ions are each coordinated to seven oxygen ions, three 3C and four 4C. Viewed in the [010] projection, the 3C and 4C oxygen ions are arranged in alternating two-dimensional layers separated by layers of hafnium (Fig. 1). Previous calculations using the B3LYP, CON, and HSE approaches have shown that hole polarons localize on both 3C and 4C oxygen sites, with the former being more stable by about 0.4 eV [47,51]. This effect is explained by the difference in electrostatic potential between the sites and the different strain energy associated with polaronic distortion [51].
The bare cation vacancy that is created by removing a positive hafnium ion from the lattice has a formal charge of −4. A range of other charge states are also possible corresponding to localization of holes on anions adjacent to the vacancy. In Kröger-Vink notation the defects we consider are V_Hf′′′′, V_Hf′′′, V_Hf′′, V_Hf′, and V_Hf×, corresponding to localization of zero, one, two, three, or four holes near the vacancy. To characterize the stability of such defects we calculate the defect formation energy,

E_f(q) = E_def^q − E_ideal − Σ_i n_i μ_i + q E_F,

where E_def^q is the total energy of the supercell containing a defect in relative charge state q, and E_ideal is the total energy of the ideal bulk supercell. n_i is the difference between the number of atoms of species i in the defective and ideal supercells, μ_i is the chemical potential of species i, and E_F is the electron Fermi energy.
FIG. 2. Formation energy of Hf vacancy defects in m-HfO2 calculated using the CON, HSE, and GGA approaches. The formal charge of the defects and the charge transition levels (dotted lines) are also indicated.

Defect formation energies are calculated at the CON, HSE, and GGA levels of theory using μ_O = E(O2)/2 (i.e., half the energy of an oxygen molecule), corresponding to oxygen-rich conditions (Fig. 2). Once μ_O is fixed the chemical potential of Hf is also defined, since μ_HfO2 = μ_Hf + 2μ_O. From these results one can read off equilibrium charge transition levels (CTLs), which are defined as the Fermi energy for which two defect charge states q and q′ have equal formation energy. We note that since the CON approach is not designed to describe O2, it leads to significant underbinding of the oxygen molecule. The consequence of this is that absolute formation energies are significantly underestimated, for example when compared to HSE. Similar effects have been found previously in DFT+U calculations, and one approach to remedy the issue is to add a constant correction term to the oxygen chemical potential through comparison to experiment or high-level quantum mechanical calculations [66]. Any such correction will only lead to a uniform vertical shift of the formation energies shown in Fig. 2 and will have no effect on the predicted CTLs. Since the main focus of this study is on electronic and magnetic properties, we do not attempt such a correction here. The SI corrected CON approach predicts a series of CTLs in the range 0.76–1.67 eV above the valence band maximum. This prediction is also consistent with the results obtained using the HSE method (0.96–1.89 eV). GGA, on the other hand, predicts a series of CTLs in the range 0–0.26 eV, with the charge neutral defect being marginally unstable for all positive Fermi energies. This prediction, which is at odds with the known tendency of holes to form small polarons in m-HfO2, is a direct result of the SI error.
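How CTLs are read off from such formation-energy lines can be made concrete in a few lines of code. The sketch below is purely illustrative: the constant terms are placeholder numbers of our choosing, not the computed formation energies behind Fig. 2.

```python
# Sketch: charge transition levels (CTLs) from E_f(q; E_F) = C(q) + q*E_F.
# The C(q) values are illustrative placeholders, not the paper's results.

E_CONST = {   # C(q) = E_def^q - E_ideal - sum_i n_i*mu_i, in eV (assumed)
    0: 4.0,   # V_Hf^x   (four localized holes)
    -1: 4.8,
    -2: 5.9,
    -3: 7.3,
    -4: 9.0,  # V_Hf'''' (no holes)
}

def formation_energy(q, e_fermi):
    return E_CONST[q] + q * e_fermi

def ctl(q1, q2):
    """Fermi energy (eV above the VBM) where states q1 and q2 are degenerate."""
    return (E_CONST[q2] - E_CONST[q1]) / (q1 - q2)

for q in range(0, -4, -1):
    print(f"CTL ({q}/{q-1}) at E_F = {ctl(q, q-1):.2f} eV above the VBM")
```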
B. Electronic properties
The neutral Hf vacancy has an S = 2 ground state associated with four holes which are localized on neighboring anion sites. The number of holes associated with the Hf vacancy defect depends on its charge. The optimized structure of each of the defects carrying a net spin, along with isosurfaces of electron spin density, are shown in Fig. 3. For the V_Hf× defect three holes are localized on 3C oxygen sites neighboring the vacancy, with a fourth localized on a 4C oxygen site. The preference for hole trapping on the 3C sites is consistent with the increased stability of hole polarons on this sublattice. For the V_Hf′ defect, the addition of an electron eliminates the hole on the least stable 4C oxygen site. Addition of subsequent electrons leading to the V_Hf′′, V_Hf′′′, and V_Hf′′′′ defects eliminates the remaining holes one-by-one, reducing the total spin of the defect in steps of 1/2. The holes are removed in order of their distance from the Hf vacancy (furthest away first), consistent with the stabilizing electrostatic interaction between the positive hole and the q = −4 charged vacancy.
The calculated electronic density of states for the Hf vacancy defects are shown in Fig. 4. The curves are aligned with respect to the bulk valence band maximum using the average electrostatic potential over ions far from the defect in different supercells as a common reference. The V_Hf× defect is associated with four localized electronic states in the gap between 1.5 and 2.3 eV above the valence band. The eigenfunction associated with each electronic state in the gap is associated with a localized hole on one of the anions. As electrons are added the lowest unoccupied states are eliminated. In the case of the V_Hf′′′ and V_Hf′′′′ defects the occupied electron states drop below the valence band maximum, meaning the highest occupied electronic states have a bulklike character. However, for the V_Hf′ and V_Hf′′ defects the occupied states appear slightly above the bulk valence band maximum. The band gap within the CON approach is the same as within GGA and is underestimated with respect to experiment. However, the HSE calculations indicate that the hole levels maintain the same relative position, while the gap increases by 1.6 eV, in much better agreement with experiment. The observation that the electronic properties of hole defects can be described accurately using the CON approach even though the band gap is underestimated reflects that the defect levels are derived from the valence band rather than the conduction band.
C. Thermal detachment of holes
The predicted strong localization of holes producing deep electronic states in the gap would seem to be at odds with the experimental evidence that cation vacancies can give rise to p-type conductivity (Sec. II). To assess the possibility that at elevated temperatures holes may detach from cation vacancies and become free small polarons, we perform calculations of the energetics associated with the process using the CON approach. We consider different initial guesses for the atomic structure and charge density in order that self-consistent minimization of the total energy yields a number of different metastable hole configurations. In this way we have identified the energetically favored pathway for detachment of a hole from a neutral vacancy. Figure 5 shows the spin density following diffusion of a single hole neighboring the vacancy from a 4C site to a nearby 3C site. The change in energy associated with this transformation is 0.34 eV. We also find that complete separation of this hole from the vacancy requires only a further 0.24 eV (0.58 eV in total). This small binding energy is a result of the high dielectric constant of hafnia as well as the energy gained by localizing the hole on the preferred 3C site rather than a 4C site. Subsequent detachment of holes from the V_Hf′ defect costs an increasing amount of energy (starting from 1.01 eV for the first hole and increasing to 1.49 eV for the last). This suggests that under realistic conditions each vacancy can provide on average one hole carrier rather than the four that might be expected on the basis of the formal ionic charge.
D. Magnetic properties
The presence of a net magnetic moment means cation vacancies may be amenable to identification by ESR spectroscopy. To assess the stability of this magnetic moment we also compute the energy of the most stable antiferromagnetic configuration, in which two of the holes have their spin aligned antiparallel to the other two. We find this S = 0 spin state is only 20 meV less stable, suggesting the high spin state should be stable at low temperatures. To aid possible characterization we calculate the ESR g tensors for cation vacancies using an embedded cluster approach as described in Sec. III. As discussed in the previous section, V_Hf× and V_Hf′ are the most likely charge states of cation vacancies in m-HfO2, and so we focus our attention on these defects. The principal components of the g tensors are shown in Table I together with g tensors for other spin defects in m-HfO2 calculated previously using a similar approach [47,58]. The magnitudes of the g-tensor components for cation vacancies are significantly larger and more anisotropic than for any other defect, suggesting they could be clearly discriminated by ESR spectroscopy. The defects with the most similar g tensors are those associated with small hole polarons, as expected. The much higher anisotropy is consistent with the much lower symmetry of the cation vacancy defects compared to hole polarons. It is also notable that there is a significant difference between the g tensor of the neutral and negatively charged vacancy, reflecting their very different electronic structure and spin density [e.g., see Fig. 3].

We also employ the CON approach to assess the strength of magnetic coupling between separated cation vacancies in HfO2. As shown in Fig. 6 we consider two neutral cation vacancies separated by 8.3 Å in the [001] direction (i.e., parallel to the 3C oxygen planes). Since the defects are treated in periodic boundary conditions, this system corresponds to a one-dimensional chain of vacancies. The ground state is found to be ferromagnetically aligned, and the energy to switch to antiferromagnetic alignment is E_FM/AFM = 42 meV, giving an exchange constant of J = E_FM/AFM/4 = 10 meV. However, for defects separated by a similar distance in the [100] direction (perpendicular to the 3C layers) the exchange constant is reduced to J = 0.2 meV. The strong anisotropy in magnetic coupling is a direct consequence of the layered anion structure, since the electronic states at the top of the valence band are comprised almost exclusively of 3C O p character [51,67]. Importantly, even within the 3C oxygen layers the magnetic coupling between Hf vacancy defects is far weaker than is obtained using a standard semilocal exchange correlation functional, which is a result of the much increased localization (e.g., applying GGA in this case gives J = 28 meV). As a result of the weak coupling, cation vacancy related ferromagnetism would only be expected at very high defect concentrations. For example, considering an effective two-dimensional Ising square lattice model we can estimate the Curie temperature using the analytic formula T_c = 2J/[k_B ln(1 + √2)] [68]. For J = 10 meV we predict T_c = 263 K for a very high defect concentration of the order of 10^21 cm^−3.
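The quoted Curie temperature estimate is easy to verify numerically; the following snippet only reproduces the arithmetic of the Onsager formula cited in the text.

```python
# Check of the 2D Ising square-lattice estimate T_c = 2J / (k_B ln(1 + sqrt(2))).
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def curie_temperature(j_ev):
    """Onsager T_c (K) for a 2D Ising square lattice with coupling J (eV)."""
    return 2.0 * j_ev / (K_B * math.log(1.0 + math.sqrt(2.0)))

print(f"J = 10 meV  -> T_c = {curie_temperature(10e-3):.0f} K")   # ~263 K
print(f"J = 0.2 meV -> T_c = {curie_temperature(0.2e-3):.1f} K")  # ~5 K
```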
V. DISCUSSION
The CON approach employed here applies a correcting potential [Eq. (1)] to eliminate the SI error in the PBE functional. The only free parameter in this potential was fitted previously to ensure linearity of the total energy with electron occupation number for small hole polarons in m-HfO2 [51]. In applying this approach to model cation vacancies, we have checked the linearity of the functional by comparing the vertical CTLs with the energy of the unoccupied one-electron eigenstates. For all defect charge states these two quantities are equal to within 0.1 eV, indicating the Koopmans condition is very close to being satisfied across the entire range of defect charge states. Therefore, we conservatively estimate that this approach is able to predict CTLs with an accuracy of the order of 0.2 eV. The consistency between the CTLs obtained with the nonlocal HSE functional and CON is further confirmation that the approach is accurate. The fact that the calculated density of states and spin density indicate holes are removed one-by-one by electron addition is a good indication that the CON approach is SI free. This is not found to be the case in the GGA calculations, for example, which delocalize the hole across several anions due to SI error. Indeed, the GGA functional gives a qualitatively incorrect picture of hole localization and predicts CTLs too close to the valence band maximum.
We note that the HSE calculations are performed on a smaller supercell (96 atoms versus 324 atoms) due to their high computational expense, and so the results are likely to be more affected by artificial interactions between periodic defect images. The much smaller computational expense of the CON approach, allowing larger supercells to be considered, is an important advantage of the approach and essential for considering effects such as hole diffusion and magnetic interaction between vacancies. One disadvantage of the CON approach is that it is designed to eliminate SI specifically for holes in HfO2, therefore it cannot be transferred to other types of defects or systems without reparametrization. It is also not designed to describe the oxygen molecule, and application of the CON method leads to a significant underbinding of the oxygen molecule. Importantly, this has no effect on the predicted CTLs or the electronic and magnetic properties which are the main focus of this study.
We have made a number of predictions regarding the electronic and magnetic properties of hafnium vacancies which merit further discussion: (1) We predict cation vacancies are associated with a series of CTLs in the range 0.76–1.67 eV above the valence band maximum. These energies are significantly lower than those associated with oxygen vacancies (closer to mid gap) and so should be distinguishable using techniques such as deep level transient spectroscopy or capacitance-voltage profiling [69]. The presence of such levels suggests cation vacancies could play an important role as charge traps with relevance to microelectronics, for example in bias temperature instability [13].
(2) While we do not attempt to calculate spectroscopic properties of the vacancies, we can make some semiquantitative predictions based on the calculated electronic structure. In particular, electronic transitions from the valence band to unoccupied hole states localized at the cation vacancy are expected for photon energies in the range 1.5–2.3 eV. This absorption band is distinct from that predicted for neutral and positively charged oxygen vacancies (2.5–4.9 eV [58]) and so could provide another tool to quantify the vacancy content of HfO2 samples. For the V_Hf′ defect both the highest occupied and lowest unoccupied electronic states are localized, suggesting the transition close to 2.4 eV would have a higher oscillator strength. The V_Hf′′′′ defect, however, has no unoccupied states in the gap, and the lowest electronic transitions are expected to occur at energies similar to the bulk band gap (>5 eV).
(3) The calculation of the binding energy between free hole polarons and cation vacancies provides an atomistic model for the experimentally observed p-type conductivity at high temperatures. In particular we find that the energy required to separate a hole polaron from a neutral hafnium vacancy is about 0.6 eV. This is consistent with the activation energies extracted from measurements of p-type conductivity at high temperature in HfO2 [28,29]. However, we predict liberating additional holes becomes increasingly more difficult, suggesting that each vacancy provides on average one hole carrier rather than the four that might be expected on the basis of formal ionic charge.
(4) We find that cation vacancies are associated with a net magnetic moment, in agreement with previous studies. We predict ferromagnetic coupling between separated vacancies, which is highly anisotropic, being stronger when mediated by holes localized in the same 3C oxygen layer. However, the strength of the coupling is weak due to the strong hole localization, suggesting cation vacancies are an unlikely candidate to explain the unexpected ferromagnetism observed in HfO2. Only if very high concentrations of vacancies (corresponding to defect separations of the order of 10 Å) can be realized, e.g., by segregation to two-dimensional defects such as surfaces or grain boundaries, could hafnium vacancies realistically contribute to the observed ferromagnetism. We also calculate the ESR g tensors associated with cation vacancies, which we find are quite distinct from those due to other intrinsic defects in HfO2, providing an additional experimental route to defect identification.
VI. CONCLUSIONS
In summary, we have performed a first principles investigation into the electronic and magnetic properties of hafnium vacancies in m-HfO2. We show how widely used semilocal approximations to exchange and correlation describe these defects in a qualitatively incorrect way, predicting hole states which are too shallow due to the self-interaction error. To obtain more accurate predictions we employ a self-interaction corrected approach which ensures linearity of the total energy with electron occupation number. We predict a series of charge transition levels in the range 0.76–1.67 eV above the valence band maximum connected to small hole polaron states localized on oxygen ions neighboring the cation vacancy. These holes are magnetically aligned, giving rise to a net magnetic moment of up to 4 μ_B. However, we find the magnetic coupling between separated vacancies is extremely weak, suggesting they are unlikely to give rise to a ferromagnetic state. We show how small hole polarons can be detached from cation vacancies with activation energies consistent with high temperature p-type conductivity measurements [28][29][30]. We also show how the spectroscopic properties of cation vacancies are distinct from those due to other intrinsic defects, suggesting techniques such as optical absorption spectroscopy and electron spin resonance spectroscopy could provide a means to quantitative defect characterization. The predicted charge trapping properties of cation vacancies suggest they could play a role in microelectronics, where HfO2 is widely used as a gate dielectric and as the active layer in resistive switching memories [13,70,71]. While this investigation has focused on m-HfO2, very similar properties are expected for m-ZrO2, which finds a wide range of different applications [72,73]. Altogether these predictions provide detailed insight into the properties of cation vacancies in HfO2 and will be invaluable to help interpret results of experimental characterization.
FIG. 1. (Color online) Crystal structure of m-HfO2 highlighting the layered nature of the three- and four-coordinated (3C and 4C) oxygen sublattices. A particular Hf site with its neighboring oxygen ions is highlighted (black). Large green spheres and small red spheres represent Hf and O ions, respectively.
FIG. 3. (Color online) Optimized structure of the V_Hf×, V_Hf′, V_Hf′′, and V_Hf′′′ defects in m-HfO2. Large green spheres and small red spheres represent Hf and O ions, respectively, and the black circle indicates the location of the missing Hf ion. Electron spin density isosurfaces are shown in blue.
FIG. 4. Electronic density of states of the Hf vacancy defects in HfO2 calculated using the CON method. The curves are aligned with respect to the bulk valence band maximum and offset vertically for clarity.
FIG. 5. (Color online) Diffusion of a hole from a neutral cation vacancy in m-HfO2, forming a negatively charged vacancy and a hole polaron on a 3C oxygen site. The black circle indicates the location of the missing Hf ion, and the dotted circle indicates the position of the hole before diffusion. Large green spheres and small red spheres represent Hf and O ions, respectively. Electron spin density isosurfaces are shown in blue.
FIG. 6. (Color online) Two V_Hf× defects forming a one-dimensional chain in the [001] direction. Large green spheres and small red spheres represent Hf and O ions, respectively, and the black circles indicate the location of the missing Hf ions. Electron spin density isosurfaces for the case of ferromagnetic alignment are shown in blue.
TABLE I. Principal components of the ESR g tensors for the V_Hf× and V_Hf′ defects. Previously calculated ESR g tensors for other spin defects in m-HfO2 are also included, including small hole polarons, the electron polaron, and oxygen vacancies on both oxygen sublattices [47,58]. | 2019-04-22T13:04:19.944Z | 2015-11-23T00:00:00.000 | {
"year": 2015,
"sha1": "4fe5e5aba43d90e6be6263cb7550ba8695d0e0c5",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevB.92.205124",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4fe5e5aba43d90e6be6263cb7550ba8695d0e0c5",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
251712359 | pes2o/s2orc | v3-fos-license | The moderating role of sex in the relationship between executive functions and academic procrastination in undergraduate students
The objective of the study was to determine if sex plays a moderating role in the relationship between executive functions and academic procrastination in 106 university students of both genders (28.3% male and 71.7% female) between the ages of 18 and 30 years (M = 19.7; SD = 2.7). The Academic Procrastination Scale and the Neuropsychological Battery of Executive Functions and Frontal Lobes (BANFE-2) were used to measure the variables. The results of the study showed that the degree of prediction of the tasks linked to the orbitomedial cortex (involves the orbitofrontal cortex [OFC] and the medial prefrontal cortex [mPFC]) on academic procrastination is significantly moderated by the sex of the university students (β3 = 0.53; p < 0.01). For men, the estimated effect of the tasks linked to the orbitomedial cortex on the degree of academic procrastination is −0.81. For women, the estimated effect of the tasks linked to the orbitomedial cortex on the degree of academic procrastination is −0.28. In addition, it was shown that sex does not play a moderating role in the relationship between the tasks linked to the dorsolateral prefrontal cortex (dlPFC) and academic procrastination (β3 = 0.12; p > 0.05). It was also determined that sex does not play a moderating role in the relationship between the tasks linked to the anterior prefrontal cortex (aPFC) and academic procrastination (β3 = 0.05; p > 0.05). It is concluded that only the executive functions associated with the orbitomedial cortex are moderated by the sex of the university students, where the impact of the tasks linked to the orbitomedial cortex on academic procrastination in men is significantly greater than in women.
Introduction
In the university context, one of the most recurrent problems is academic procrastination, where the student voluntarily delays the development of their academic occupations, usually doing them at the last minute (Steel et al., 2018). This construct can be defined as the voluntary delay of a planned, necessary and important activity, despite expecting possible negative consequences that outweigh the positive consequences of the delay (Steel, 2007; Klingsieck, 2013). In addition, this voluntary delay implies carrying out an alternative activity to the intended one and, therefore, is not synonymous with inactivity (Schouwenburg, 2004). Klingsieck (2013) identifies seven characteristics of procrastination: (a) an intentional activity is delayed, (b) it is intended to start or end an activity, (c) the activity is necessary and of personal importance, (d) the delay is voluntary, (e) the delay is unnecessary, (f) the delay is made despite the negative consequences of the delay, and (g) the delay is accompanied by subjective discomfort or other negative consequences.
Several studies have shown that academic procrastination is present in all cultures, at all academic levels, and across genders (Steel, 2007; Klassen et al., 2008, 2010; Özer and Ferrari, 2011). In Turkey, a study of 784 university students showed that 52% frequently procrastinate (Özer et al., 2009). In China, a study of 1,184 university students reported that 74.1% procrastinate on at least one academic activity (Zhang et al., 2018). In Mexico, a study carried out on 521 psychology students from a public university showed that 57.9% have moderate academic procrastination (Chávez and Morales, 2017). In Peru, a study conducted on 517 psychology students from a private university showed that 14.1% have a high level of academic procrastination (Dominguez-Lara, 2017). The differences observed in the prevalence of academic procrastination could be explained by the sample size, the type of instrument used, and the methodology used to collect the data.
Regarding sex differences in academic procrastination, there is an extensive discussion in the scientific literature due to the heterogeneity of the findings of the different studies. Thus, several studies have found that there are significant differences between men and women in the level of academic procrastination (Özer et al., 2009; Steel and Ferrari, 2013; Mandap, 2016; Balkis and Duru, 2017). For example, a study conducted in Turkey on 441 university students found that men have higher levels of academic procrastination than women (Balkis and Duru, 2017). Another study in the Philippines with 200 university students showed that men procrastinate more than women (Mandap, 2016). Similarly, another study conducted in Turkey on 2,784 university students reported that men procrastinate more often than women (Özer et al., 2009). Another study on 16,413 English-speaking people showed that men are more likely to procrastinate (Steel and Ferrari, 2013).
However, other studies have not found significant differences between men and women (Sepehrian and Lotf, 2011; Zhou, 2020; Amoke et al., 2021). For example, a study conducted in China on 251 university students found no sex differences in academic procrastination (Zhou, 2020). Another study in Iran on 310 university students reported no significant differences between men and women in academic procrastination (Sepehrian and Lotf, 2011). Similarly, another study conducted in Nigeria on 804 people showed that gender does not significantly affect academic procrastination (Amoke et al., 2021). The heterogeneity of the results could be associated with methodological factors such as the size of the sample, the type of sampling used, and the measurement approach used. It could also be associated with cultural aspects.
That said, it was found that academic procrastination negatively affects the emotional well-being (Stead et al., 2010), life satisfaction (Özer and Saçkes, 2011) and even physical health (Sirois, 2015) of students. It is also related to the presence of anxious symptoms (Wang, 2021), high academic stress (Khalid et al., 2019), low self-esteem (Yang et al., 2021), and a greater presence of fraudulent academic behavior (Patrzek et al., 2015).
However, it is striking that despite the significant negative consequences of delaying their academic activities, most university students continue to procrastinate (Liu et al., 2020). This behavior could be explained by a failure to plan, regulate and control their behavior, since they prioritize other secondary activities that imply more immediate gratification (Steel, 2007; Klassen et al., 2010; Park and Sperling, 2012). This behavior could also be explained by failing to self-regulate thoughts and emotions to maintain long-term behaviors such as studying for an exam or doing academic work (Steel and Ferrari, 2013). In this sense, emotional determinants such as impulsivity, emotional regulation, self-efficacy, motivation, and reward processing affect the level of academic procrastination (Wu et al., 2016; Wypych et al., 2018; Zhang et al., 2018; Mohammadi Bytamar et al., 2020). Cognitive determinants also affect this construct, such as planning, goal setting, metacognitive skills, and cognitive flexibility (Tan et al., 2008; Rabin et al., 2011; Ziegler and Opdenakker, 2018; Sutcliffe et al., 2019). Therefore, there are emotional and cognitive determinants that affect the level of academic procrastination. These determinants depend directly on the prefrontal areas of the brain associated with executive functions; specifically, these areas allow coordinating, selecting, and organizing various behavioral options to achieve goals that cannot be attained by merely following procedures or rules (Diamond, 2013). The review of the scientific literature shows that various components of executive functions such as self-control, planning, working memory, organization of materials, and task monitoring predict procrastination (Rabin et al., 2011). Also, impulsivity (Rebetez et al., 2018), self-efficacy, and self-control (Przepiórka et al., 2019) predict the level of procrastination. Likewise, evaluation-focused self-regulation is positively related to procrastination, and action-focused self-regulation is negatively related to procrastination (Choy and Cheung, 2018).
The Model of Hot and Cold Executive Functions could explain the emotional and cognitive determinants of academic procrastination, since it distinguishes two domains of executive functions (Ward, 2020). (a) Hot functions are mostly related to emotional and motivational aspects (Salehinejad et al., 2021). They are also closely linked to reward processing, such as reward sensitivity and delay discounting (the tendency to choose a smaller but more immediate reward over a larger but later reward) (Poland et al., 2016; Poon, 2018). Furthermore, they are linked to affective decision-making, social skills, theory of mind, empathy, and social cognition (Chan et al., 2008; De Luca and Leventer, 2008). Hot executive functions are associated with the medial and orbital regions of the prefrontal cortex (Salehinejad et al., 2021), which include the orbitofrontal cortex (OFC) (McDonald, 2013; Baez and Ibanez, 2014) and the medial prefrontal cortex (mPFC) (Zimmerman et al., 2016; Gazzaniga et al., 2019). Also, the medial region of the prefrontal cortex is crucial for emotional and motivational processing because it has connections with subcortical structures such as the limbic system, the amygdala, and the insular cortex (Sharpe and Schoenbaum, 2016; Matyi and Spielberg, 2021).
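Delay discounting, mentioned above, is commonly modeled with a hyperbolic discount function. The sketch below is a generic illustration of that standard model; the parameter values are assumptions of ours, not estimates from any study cited here.

```python
# Hyperbolic delay discounting: V = A / (1 + k*D). A larger k means a
# stronger preference for smaller-sooner rewards, the tendency described
# above. The k values are illustrative only.
def discounted_value(amount, delay_days, k):
    return amount / (1.0 + k * delay_days)

for k in (0.01, 0.10):  # shallow vs. steep discounter
    later = discounted_value(100.0, delay_days=30.0, k=k)
    choice = "waits for $100" if later > 70.0 else "takes $70 now"
    print(f"k = {k:.2f}: $100 in 30 days is worth ${later:.2f} -> {choice}")
```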
On the other hand, (b) cold functions are related to purely cognitive information processing, where the processes involved do not entail much emotional arousal and instead require a great deal of logical and critical analysis, with conscious control of thoughts and actions (Chan et al., 2008; Rubia, 2011). This domain involves cognitive flexibility, inhibition, planning, working memory, verbal fluency, and problem-solving (Poland et al., 2016; Nejati et al., 2018; Salehinejad et al., 2021). Attentional flexibility, concept formation, and the ability to monitor and adapt behavior according to changing social circumstances are also involved (Wood and Worthington, 2017). Cold executive functions are associated with the lateral region of the prefrontal cortex, which includes the dorsolateral prefrontal cortex (dlPFC) and the ventrolateral prefrontal cortex (Gazzaniga et al., 2019; Ward, 2020). A meta-analysis of 193 studies that used magnetic resonance imaging showed that the lateral region of the prefrontal cortex, the anterior cingulate cortex, and the parietal cortex were activated in the main domains of cold executive functions: working memory, inhibition, flexibility, and planning (Niendam et al., 2012). Another study also shows that these three regions are connected and are part of the fronto-cingulo-parietal network (FPN) that allows cognitive control, where the dlPFC plays a fundamental role (Salehinejad et al., 2021). It is important to note that both domains work together to perform adaptive functions where emotional, social, and cognitive activities are involved (Zelazo and Carlson, 2012; Ruiz-Castañeda et al., 2020).
Given the fundamental role of executive functions in the initiation and maintenance of complex behaviors, it could be hypothesized that executive functions predict the degree of academic procrastination. However, the review of the literature also shows that the performance of executive functions in men and women is not the same (Silverman, 2003; Li et al., 2009; Nolen-Hoeksema, 2012; Weis et al., 2013; Weafer and de Wit, 2014; Gaillard et al., 2021b).
Several studies show that women have a greater capacity for delayed gratification (Weafer and de Wit, 2014) and greater behavioral self-regulation than men (Weis et al., 2013). In addition, women have a greater ability to use executive skills associated with controlling emotional reactions, cognitive reappraisal, and emotional coping (Nolen-Hoeksema, 2012). Women also use emotional regulation strategies to a greater extent and are more flexible in implementing these strategies (Goubet and Chrysikou, 2019). In contrast, men tend to avoid or repress emotional experiences (Barrett and Bliss-Moreau, 2009) and have greater problems with impulsivity (Riley et al., 2016). A meta-analysis study showed sex differences in the delay discount task (Gaillard et al., 2021a). Specifically, they found that women performed better than men, with a high effect size (Hedges' g = 0.64). Women had a greater ability to discriminate and choose larger and later rewards than smaller and more immediate rewards. Similarly, another meta-analysis study conducted in 102 studies showed that women outperform men in delay capacity (Hedges' g = 0.25-0.26) (Silverman, 2021).
However, in the scientific literature there are also meta-analysis studies showing no sex differences in tasks associated with executive functions, such as the study by Cross et al. (2011), which found no sex differences in delay discounting. Similarly, another meta-analysis showed no sex differences in the ability to delay gratification (Silverman, 2003). Another systematic review found little support for significant differences between men and women in executive function performance (Grissom and Reyes, 2019). The heterogeneity of these results could be associated with aspects such as the type of measurement used, cultural aspects, and specific characteristics of the sample.
On the other hand, sex differences have also been studied using neuroimaging techniques. A study by Li et al. (2006) showed that men need more neural resources (greater activation of the bilateral medial frontal cortex, cingulate cortex, globus pallidus, thalamus, and parahippocampal gyrus) to achieve a performance similar to women in stop-signal tasks, which suggests greater impulsiveness in men. Another follow-up study by the same authors found that women have greater performance control and a more effective response to error (Li et al., 2009). Also, several studies found sex differences in the middle, superior, and inferior frontal gyri and the OFC, which are involved in response inhibition capacity (Li et al., 2009; Gaillard et al., 2020).
On the other hand, neurological structures such as the mPFC and the amygdala, associated with emotional processing and decision making, follow different patterns of functional lateralization in men and women (Reber and Tranel, 2017). In women, decision-making and emotional processing are linked to the left side of the mPFC, while in men they are linked to the right side of the mPFC (Reber and Tranel, 2017). In addition, women show a greater volume of the mPFC and right OFC (Welborn et al., 2009). A possible explanation for sex differences in the performance of executive functions can be partially found in sex differences in the controllability of structural brain networks (Cornblath et al., 2019). A systematic review of twenty-one neuroimaging studies showed sex differences in the neural networks that underlie executive control tasks (Gaillard et al., 2021b). This result suggests that men and women use different strategies depending on the task's demands. Similarly, other studies have shown that the sex differences observed in executive functions could be partly explained by the experiences and cognitive strategies used by women and men (Satterthwaite et al., 2015; Wierenga et al., 2019).
For all the above, it can be affirmed that there is more evidence in the scientific literature in favor of differences between men and women in the performance of executive functions. Also, the functional and structural differences associated with executive functions could explain why men who procrastinate have higher levels of impulsivity (Strüber et al., 2008), lower levels of self-regulation (Higgins and Tewksbury, 2006), lower levels of self-motivation (Franklin et al., 2018) and greater problems in planning, monitoring and evaluating academic tasks (Limone et al., 2020). Unlike women who procrastinate, who have greater problems regulating cognitive and meta-cognitive processes (Limone et al., 2020).
Based on the above, it could be hypothesized that executive functions significantly predict the degree of academic procrastination and that gender plays a moderating role in the relationship between both variables (see Figure 1).

FIGURE 1. Hypothetical model: moderating role of sex.
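A moderation hypothesis of this kind is typically tested by adding a predictor-by-moderator interaction term to a regression model. The sketch below illustrates that analysis; the variable names and simulated data are ours, and only the sample size and the two simple slopes reported in the abstract are taken from the study.

```python
# Sketch of the moderation model in Figure 1:
# procrastination ~ b0 + b1*EF + b2*sex + b3*(EF x sex),
# where b3 carries the moderation hypothesis. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 106
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),            # 0 = male, 1 = female
    "ef_orbitomedial": rng.normal(0, 1, n),  # standardized EF score
})
# Simulate a steeper negative slope for men (-0.81) than women (-0.28).
slope = np.where(df["sex"] == 0, -0.81, -0.28)
df["procrastination"] = slope * df["ef_orbitomedial"] + rng.normal(0, 1, n)

# The '*' in the formula expands to both main effects plus the interaction.
model = smf.ols("procrastination ~ ef_orbitomedial * sex", data=df).fit()
print(model.summary().tables[1])
```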
It is important to mention that most studies that assess the relationship between executive functions and academic procrastination use self-report scales to assess executive functions (Rabin et al., 2011; Sabri et al., 2016; Gutiérrez-García et al., 2020), which constitutes an important limitation since such scales depend directly on the perception that those evaluated have of their own capacities. Also, although most studies use samples of university students, they do not precisely measure academic procrastination, since they use scales that measure procrastination in general. Responding to this need, this study proposes the following specific hypotheses: (1) the functions linked to the orbitomedial cortex significantly predict academic procrastination.
(2) Sex plays a moderating role between functions linked to the orbitomedial cortex and academic procrastination.
(3) Functions linked to the dlPFC significantly predict academic procrastination. (4) Sex plays a moderating role between functions linked to the dlPFC and academic procrastination. (5) Functions linked to the anterior prefrontal cortex (aPFC) significantly predict academic procrastination. (6) Sex plays a moderating role between functions linked to the aPFC and academic procrastination.
Participants
In the present study, the sample consisted of 106 university students of both sexes (28.3% men and 71.7% women) between the ages of 18 and 30 years (M = 19.7; SD = 2.7) who were in the first and second year of Psychology at a private university in Lima, Peru. For data collection, non-probabilistic convenience sampling was used, with the following inclusion criteria: (a) students who signed the informed consent, (b) students over 18 years of age, and (c) students enrolled in the academic cycle of the university. The following exclusion criteria were also used: (a) students who did not complete the two evaluation sessions, (b) students who had a physical or sensory limitation that prevented them from answering the instruments on their own, and (c) students who did not complete both tests. A post hoc procedure was performed to estimate statistical power, for which the following criteria were used: (a) effect size, (b) probability of error, (c) sample size, and (d) number of predictors. The statistical power was 0.98, considered adequate for estimating the regression models.
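A post hoc power figure of this kind can be computed from the noncentral F distribution once an effect size is fixed. In the sketch below the effect size f² is an assumed placeholder, since only the resulting power is reported above.

```python
# Post hoc power for a multiple-regression F test via the noncentral F
# distribution (lambda = f2 * N). The effect size f2 is an assumed value.
from scipy.stats import f as f_dist, ncf

def posthoc_power(f2, n, n_predictors, alpha=0.05):
    df1 = n_predictors
    df2 = n - n_predictors - 1
    lam = f2 * n                              # noncentrality parameter
    f_crit = f_dist.ppf(1.0 - alpha, df1, df2)
    return 1.0 - ncf.cdf(f_crit, df1, df2, lam)

# Example: N = 106 with three predictors (EF score, sex, interaction).
print(f"power = {posthoc_power(f2=0.20, n=106, n_predictors=3):.2f}")
```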
Measures

Neuropsychological battery of executive functions and frontal lobes
The battery was developed by Flores Lázaro et al. (2008, 2012) to evaluate functions associated with the orbitomedial cortex (formed by the OFC and the mPFC), the dlPFC, and the aPFC. In addition, the authors of the battery, following anatomical-functional criteria, selected a set of tests to measure these functions. For the OFC and the mPFC, the following tests were used: Stroop Effect (form A and B), Card Game, Mazes (traversing), and Card Classification (maintenance errors). The Stroop test measures inhibitory control capacity. In addition, several neuroimaging studies have shown that this test is associated with the OFC and mPFC (Adleman et al., 2002; Jourdan Moser et al., 2009; Song and Hakoda, 2015; Cipolotti et al., 2016). The card game test is an adaptation of the Iowa Gambling Task and assesses the ability to detect and avoid risky selections and to detect and maintain good selections. Several studies have found this test to be associated with the OFC and mPFC (Bolla et al., 2004; Aram et al., 2019; Zha et al., 2022). On the other hand, the maze test assesses the ability to plan, respect limits, and follow the rules. The test primarily involves orbitofrontal and dorsolateral areas (Stevens et al., 2003; Thonnard et al., 2021). To evaluate the orbitofrontal areas, the traversing qualification criterion was used. The other qualifying criteria were used to measure the mPFC.
For the dlPFC, the following tests were used: Self-Directed Pointing, Visuospatial Working Memory, Alphabetical Ordering of Words, Card Sorting (perseveration and timing), Mazes (planning and timing), Tower of Hanoi (three and four disks), Consecutive Addition and Subtraction (forms A and B), Verbal Fluency, and Semantic Classification. The Self-Directed Pointing test assesses the ability to use visuospatial working memory to point, in a self-directed manner, to a series of figures. It mainly involves dorsolateral prefrontal areas, especially their ventral portions (Lamar and Resnick, 2004). The visuospatial working memory test assesses the ability to maintain the identity of objects located in a specific order and space. It is based on the Corsi block test but introduces the variant proposed by Goldman-Rakic et al. (1996) and Petrides (2000) of pointing to figures that represent real objects. The test is associated with the dlPFC (Ross et al., 2007). The alphabetical ordering of words test measures the mental ability to manipulate and order verbal information in working memory. Performance on this test is also associated with the dlPFC (Tsukiura et al., 2001; Tsujimoto et al., 2004). The Card Sorting test is based on the Wisconsin Card Sorting Test and assesses a person's mental flexibility. Performance on this test is directly related to the dlPFC (Lie et al., 2006; Gläscher et al., 2019).
The maze test also assesses the ability to systematically anticipate (plan) visuospatial behavior, which is associated with the dlPFC (Kaller et al., 2011; Kronovsek et al., 2021). Specifically for this test, time and dead-end planning errors are considered. The Tower of Hanoi test assesses the ability to plan a series of actions that only together and in sequence lead to a specific goal (sequential planning). Performance on this test is associated with the dlPFC (Ruiz-Díaz et al., 2012; Niki et al., 2019). The addition and subtraction tasks evaluate the ability to perform simple calculation operations in reverse sequence, both within and between tens. Performance on these tasks is associated with the dlPFC (Burbaud et al., 1999; Barahimi et al., 2021). Finally, the verbal fluency test measures the ability to efficiently select and produce as many verbs as possible within a limited time.
For the aPFC, the following tests were used: Semantic Classification (number of abstract categories), Selection of Proverbs, and Metamemory. The Semantic Classification test measures the ability to produce the greatest number of abstract categories (abstract attitude). Performance on this test mainly involves areas of the aPFC (Koenig et al., 2005; Matsumoto et al., 2021). The Proverbs Selection test assesses the ability to understand, compare, and select figurative responses. Performance on this test is associated with the aPFC (Thoma and Daum, 2006; Ferretti et al., 2007). Finally, the metamemory task measures the ability to develop a memory strategy (metacognitive control), as well as to make performance prediction judgments (metacognitive judgments) and adjustments between judged and actual performance (metacognitive monitoring). Performance on this task is linked to the aPFC (Kikyo et al., 2002; Chua et al., 2014).
The qualification process of the BANFE-2 battery was carried out in two stages: First, the scores of each one of the tests were obtained following the qualification norms given in the test manual. That is, a score was obtained for the criteria of each test. Only in some criteria was the original score coded in a range of 1 to 5 points depending on the age and schooling of the person evaluated. Second, the scores by cortical area (Orbitomedial, Dorsolateral, and aPFC) were obtained by adding the associated criteria for each area. These scores are the ones used for the regression models. It is important to mention that the entire qualification process was carried out following the instructions given in the test manual (Flores Lázaro et al., 2012). A detailed description of the associated areas, domains, tests, and their grading system can be seen in Table 1.
Academic procrastination scale (APS)
The instrument was developed by Busko (1998) to measure the degree of academic procrastination in university students. For the study, the version adapted to Peru was used, where the two-dimensional model presented adequate fit indices.

TABLE 1 Domains, tests, descriptions, and grading criteria of the BANFE-2 battery (a).

Inhibitory control: Stroop test (forms A and B). The evaluator points to the columns of words printed in color and asks the subject to read what is written, but when the evaluator says the word "color," the subject must name the color in which the word is printed, not what is written. Stroop errors: the color in which the word is written is not named in a column where naming the color was instructed. Time: time in seconds used to complete the test. Correct answers: words read correctly; the maximum possible score is 84. Timed in minutes.

Follow rules: Maze test. Made up of five labyrinths that increase in difficulty. The subject is asked to solve the mazes in the shortest time possible, without touching the walls or going through them, and to try not to pick up the pencil once started; the number of times the subject touches the walls, passes through them, or enters a dead end (planning error) is recorded, as is the execution time. Go-throughs: number of times the pencil line crosses any wall of the maze. Timed in minutes.

Card sorting. Consists of a base of four cards with four different geometric figures (circle, cross, star, and triangle), which have two properties: number and color. The subject is given a group of 64 cards with these same characteristics, which must be placed under one of the four base cards using a criterion the subject has to generate (color, shape, or number); any card has the same possibility of relating to the three criteria, since no perceptual pattern guides decision-making. Maintenance errors: the correct sequence is not maintained and the classification criterion is changed after at least three consecutive hits.

Risk-taking processing: Card game. The test consists of choosing cards, taking into account the risks and benefits of each choice, to achieve the greatest number of points possible. The card stimuli are numbers from 1 to 5 that represent points; cards 1, 2, and 3 have minor penalties and appear less frequently, while the cards with more points (4 and 5) carry costlier and more frequent punishments. The points obtained are recorded, as well as the percentage of risk, which results from averaging the selections of cards 4 and 5. Percentage of risk cards: obtained from the total number of cards taken and the number of risk cards (4-point plus 5-point cards) taken.

Visuospatial planning: Maze test. The same five labyrinths; the test also allows systematic assessment of the ability to anticipate (plan) visuospatial behavior. Dead-end planning: number of times the evaluated person enters a dead-end road; the wrong path does not need to lead to hitting a wall, and the error is counted when the erroneous route takes more than half of the way. Time: recorded from the moment the indication to start solving the maze is given. Timed in minutes.

Sequential planning: Tower of Hanoi (3 and 4 disks). Made up of a wooden base with three stakes and three or four disks of different sizes. The task has three rules: only one disk can be moved at a time; a smaller disk cannot be under a larger disk; and whenever a disk is taken, it must be deposited again before taking another. The subject has to move a pyramid-shaped configuration from one end of the base to the other by moving the disks along the pegs. Movements: number of movements made until each task's final goal; the minimum number of moves is seven for the three-disk problem and 14 for the four-disk task. Time: time in seconds taken to complete the task. Both ratings are used separately for each tower. Timed in minutes.

Reverse sequence: Consecutive subtraction A and B. In both cases, starting from an indicated number (40 or 100), an amount is subtracted consecutively (by threes or by sevens, respectively) until the minimum number (two or one) is reached. Task A (40-3) applies from 8 years of age; task B (100-7) applies only from 10 years of age. Time: seconds elapsed from the moment "begin" is said until the conclusion of the consecutive subtractions. Hits: number of correct individual subtractions; the maximum possible is 14 for the 100-7 subtraction (task B) and 13 for the 40-3 subtraction (task A); it is not recorded in the protocol if the person mentions 100 or 40 when starting to subtract. Timed in minutes per task.

Consecutive sum. The task consists of developing a consecutive sum exceeding the tens limit. The following instruction is given: "we are going to do a sum; starting from one, you have to add five by five; I will tell you when to stop." The person is instructed to stop when signaled and may not use the fingers.

Semantic classification. Assesses the ability to analyze and group a series of animal figures into semantic categories in the largest possible number of categories. The subject is presented with a sheet with 30 animal figures and is asked to generate as many classifications as possible within 5 min. Total categories: total average number of items included in all categories. Total average of animals: the total number of animals classified in some category. Total score: one point is awarded if a category is concrete (C), two if it is functional (F), and three if it is abstract (A); points are awarded for each category generated, and the total score is the sum of the points given to each category; the maximum score is 36. Timed in minutes.

Self-directed visual working memory: Self-directed pointing. The self-directed working memory test is made up of a sheet with figures of objects and animals. The goal is to point a finger at all the figures without omitting or repeating any; the subject has to develop an action strategy and, at the same time, maintain in working memory the figures already pointed out so as not to repeat or omit any. Perseverations: figures indicated more than once. Time: time in seconds used to finish pointing out the figures on the sheet. Hits: the total number of figures indicated in a non-contiguous manner without perseveration; if the person points to two contiguous figures at first, the second is not considered correct; from 12 indicated figures onward, whether they are correct or not, a marked figure contiguous to the previous one can be counted as a hit. Timed in minutes.

Verbal working memory-ordering: Alphabetical ordering of words. The test consists of three disyllabic word lists, the first containing words that begin with a vowel, the second with a consonant, and the last with vowels and consonants; the task is to reproduce each list in alphabetical order, assessing the ability to hold information in working memory and manipulate it mentally. The following aspects are rated on each list. Rehearsal number: the trial in which the list is reproduced correctly. Perseverations: words that the person repeats more than once in a trial. Intrusions: words that the person mentions but that are not on the list. Order errors: reproducing words whose initial vowel or consonant does not correspond to the alphabetical sequence; these errors are scored on words provided and not omitted. A score is obtained for each list. There is no time limit.

Visuospatial-sequential working memory: Visuospatial working memory. The task consists of four lists that increase the number of figures from four to seven elements; the order of the figures in each list is noted in the protocol. Two trials are provided for each list: if the correct sequence is signaled on the first trial, the subject goes directly to the next level; the second trial applies only in case of failure on the first; the test is over if the person fails to signal the correct sequence on both trials. Maximum sequence: the maximum level correctly marked (the test is suspended after two consecutive mismarked trials); the maximum possible level is four. Perseverations: a figure pointed to more than once in a trial, whether a correct figure or a substitution. Order errors: a figure indicated in an order that does not correspond to the original sequence. There is no time limit.

Comprehension of figurative meaning: Selection of proverbs. Time: time in seconds to finish the test. Hits: the maximum possible score is five points; every correct answer is worth one point. Timed in minutes.

Abstract attitude: Semantic classification. The same 30-figure classification task. Number of abstract categories: categories that define semantic-abstract properties of animals (mammals, domestic, marine, etc.). Timed in minutes.

(a) For some areas and domains, the same test is used, but different aspects of the qualification are considered.
Procedure
The study followed the standards given in the Declaration of Helsinki (World Medical Association, 2013). Among these, the following principles were emphasized: (a) autonomy of the people to participate in the study, (b) respect toward the participants, (c) beneficence, and (d) justice, treating the participants with fairness and transparency. In addition, the study had the approval of the Institutional Research Ethics Committee (CIEI) of a private university in Lima (204085), and informed consent was obtained for the participation of people in the study.
Non-probabilistic sampling was used for data collection, and the instruments were applied individually in an evaluation room. For both tests, we had the help of three fifth-year psychology students, who received six sessions of training in the application of the tests. A psychologist with a Master's degree in Psychology and a specialty in Neuropsychology directed the training of the evaluators. During the evaluation process, the anonymity and confidentiality of the results were ensured: the study's objectives were explained to the university students, doubts related to the procedure were resolved, and they signed the informed consent. In addition, the tests were applied in two sessions of approximately 35 min each.
Statistical analyses
To determine whether sex plays a moderating role in the relationship between executive functions and academic procrastination, a hierarchical regression analysis was used following the procedures described by Aiken et al. (1991). In addition, estimated marginal means (EMMs) of academic procrastination at different levels of executive functions by sex were calculated. Executive function effects were also tested separately for males and females with a simple slope analysis.
For the simple linear regression models, the following equation was used:

AP = β0 + β1X + ε

where AP is academic procrastination, X the executive function score, and β1 the slope. In model 1, it is associated with the tasks linked to the orbitomedial cortex; in model 2, with the tasks linked to the dlPFC; and in model 3, with the tasks linked to the aPFC.
For the moderation analyses, the following equation was used:

AP = β0 + β1X + β2Sex + β3(X × Sex) + ε

where β1 represents the estimated effect of the tasks linked to the orbitomedial cortex (model 1), the tasks linked to the dlPFC (model 2), and the tasks linked to the aPFC (model 3) on academic procrastination for the male group, and β3 the change in that effect for the female group.
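The analyses reported below were run in R with lm() and emmeans. Purely to illustrate the two-step structure of the models, a minimal sketch in Python with statsmodels (the file and column names are hypothetical) could look like this:

```python
# Illustrative two-step moderation analysis; the paper's analysis used R's
# lm() and emmeans, and the file/column names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("banfe_aps_scores.csv")  # one row per student (hypothetical)

# Step 1: simple regression of academic procrastination (AP) on the
# orbitomedial-cortex score (model 1 in the text).
step1 = smf.ols("AP ~ orbitomedial", data=df).fit()
print(step1.rsquared_adj, step1.pvalues["orbitomedial"])

# Step 2: add sex and the interaction; with 'male' as the reference level,
# the 'orbitomedial' coefficient is the male slope (beta_1) and the
# interaction coefficient is beta_3, so the female slope is beta_1 + beta_3.
step2 = smf.ols(
    "AP ~ orbitomedial * C(sex, Treatment(reference='male'))", data=df
).fit()
print(step2.summary())
```

With men as the reference group, the interaction coefficient in step 2 directly tests whether the male and female slopes differ, which is the moderation hypothesis.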
All statistical analyses were performed using the "lm()" function for hierarchical regression and the "emmeans" package (Russell et al., 2021). The RStudio environment (RStudio Team, 2018) for R (R Core Team, 2019) was used in both cases.

Descriptive analysis

Table 2 shows the descriptive analysis and the relationships between the study variables. In the total sample, the tasks linked to the orbitomedial cortex have a negative relationship with university students' degree of academic procrastination (r = −0.59). However, the degree of academic procrastination shows no relationship with the tasks linked to the dlPFC (r = 0.09) or the aPFC (r = −0.00).

Regarding the male sample, the tasks linked to the orbitomedial cortex correlate negatively with the degree of academic procrastination (r = −0.71). The tasks linked to the dlPFC have a negative and weak relationship with the degree of academic procrastination (r = −0.22), whereas the degree of academic procrastination shows no relationship with the tasks linked to the aPFC (r = −0.03). Regarding the sample of women, the tasks linked to the orbitomedial cortex have a negative relationship with the degree of academic procrastination (r = −0.62), the tasks linked to the dlPFC have a weak relationship with it (r = 0.14), and no relationship is observed with the tasks linked to the aPFC (r = 0.03). Therefore, the strength of the relationship between the orbitomedial cortex and academic procrastination varies between the groups of men and women.
Hypothesis test of the explanatory model

Table 3 shows the results of the analysis of the interaction of the sex of the university students on the relationship between executive functions and academic procrastination. In Table 3, the orbitomedial cortex includes the following domains of executive functions: inhibitory control, follow rules, and risk-taking processing. The dorsolateral prefrontal cortex includes verbal fluency, mental flexibility, visuospatial planning, sequential planning, reverse sequence, productivity, self-directed visual working memory, verbal working memory-ordering, and visuospatial-sequential working memory. The anterior prefrontal cortex includes metamemory, comprehension of figurative meaning, and abstract attitude. *p < 0.01; f² = Cohen's effect size.
Regarding the first specific hypothesis, it is observed that the tasks linked to the orbitomedial cortex predict 34% of the variance of academic procrastination (R² = 0.34; p < 0.01). Furthermore, when the dlPFC (p = 0.551), aPFC (p = 0.998), and age (p < 0.05) are included in the model as covariates, the orbitomedial cortex continues to have a significant impact on academic procrastination (p < 0.01) and the explained variance of the model remains similar (R² = 0.36; p < 0.01). For the second specific hypothesis, when sex is included as a moderating variable in the model, the degree of explained variance increases significantly (R² = 0.41; p < 0.01). The regression coefficient for the orbitomedial cortex × sex interaction is significant (β3 = 0.53; p < 0.01); therefore, the degree of prediction of the orbitomedial cortex on academic procrastination depends significantly on the sex of the university students. For men, the estimated effect of the orbitomedial cortex on the degree of academic procrastination is −0.81 (β1). For women, the estimated effect of the orbitomedial cortex on the degree of academic procrastination is −0.28 (β1 + β3). Simple slope analysis shows that the slope of the orbitomedial cortex for males is significantly greater than for females (p < 0.01) (see Figure 2). The moderation analysis thus shows that the effects of the tasks linked to the orbitomedial cortex on academic procrastination for men and women are significantly different, which is in line with the second specific hypothesis.
Regarding the third specific hypothesis, the tasks linked to the dlPFC fail to predict the degree of academic procrastination (R² = −0.01; p = 0.558). The regression coefficient for the dlPFC × sex interaction is not significant (β3 = 0.12; p = 0.154). In addition, the analysis of simple slopes shows that the slopes of the dlPFC for men and women are similar (p = 0.123). These results provide evidence to reject the third and fourth specific hypotheses.
Regarding the fifth specific hypothesis, the tasks linked to the aPFC fail to predict the degree of academic procrastination (R² = −0.01; p = 0.986). The regression coefficient for the aPFC × sex interaction is not significant (β3 = 0.05; p = 0.676). In addition, the analysis of simple slopes shows that the slopes of the aPFC for men and women are similar (p = 0.849). These results provide evidence to reject the fifth and sixth specific hypotheses.
Discussion
Regarding the first specific hypothesis, it was shown (step 1) that the tasks linked to the orbitomedial cortex significantly predict the degree of academic procrastination (R² = 0.34; p < 0.01). To understand this result, it is essential to point out that the orbitomedial cortex refers to the mPFC and the OFC (Flores Lázaro et al., 2012). The mPFC plays a fundamental role in the processes of (a) regulation and attentional effort (Hauser et al., 2014), (b) decision making between two potentially pleasant outcomes (Saunders et al., 2017), and (c) regulation of motivational states (Fuster, 2002). The OFC is also involved in important processes: (a) processing and regulation of affective states (Dixon et al., 2017), (b) behavior regulation (Jonker et al., 2015), (c) change detection (Rolls, 2004), (d) decision-making based on risk-benefit estimation (Zald and Andreotti, 2010), and (e) short- and long-term reward valuation (Peters and D'Esposito, 2016).
The processes involved in the mPFC and the OFC can then explain the behavior of voluntarily delaying a necessary or important academic activity despite expecting negative consequences that outweigh the positive consequences of the delay. These processes can also explain why a failure in intertemporal choice occurs in procrastination, that is, the tendency to prefer smaller rewards received in the short term over larger rewards received in the long term (Peters and D'Esposito, 2016). In addition, this first result could explain why several previous studies have found that procrastination is related to failures in self-control (Rebetez et al., 2018; Zhao et al., 2019), emotional regulation (Eckert et al., 2016; Ljubin-Golub et al., 2019), regulation of motivation (Grunschel et al., 2016; Ljubin-Golub et al., 2019), and time management (Wolters et al., 2017). Regarding the second specific hypothesis, a second analysis (step 2) showed that the degree of prediction of the tasks linked to the orbitomedial cortex on academic procrastination is significantly moderated by the sex of the university students (β3 = 0.53; p < 0.01). The impact of the tasks linked to the orbitomedial cortex on academic procrastination in males (−0.81) is significantly greater than in females (−0.28). This difference in impact could be related to the fact that neurological structures such as the mPFC and the amygdala, strongly involved in emotional processing and decision making, follow different patterns of functional lateralization in men and women (Reber and Tranel, 2017). In women, decision-making and emotional processing are linked to the left side of the mPFC, while in men they are linked to the right side of the mPFC (Reber and Tranel, 2017). It could also be related to sex differences in the volume of the OFC and the mPFC (Gur et al., 2002; Wood et al., 2008): women show a greater volume of the mPFC and the right OFC (Welborn et al., 2009). These structural differences between men and women also explain differences in the use of two emotional regulation strategies, reappraisal and suppression (Welborn et al., 2009). Such functional and structural differences could further explain why male procrastinators have higher levels of impulsivity (Strüber et al., 2008), lower levels of self-regulation (Higgins and Tewksbury, 2006), and greater problems planning, monitoring, and evaluating academic tasks (Limone et al., 2020), unlike women who procrastinate, who have greater problems regulating cognitive and metacognitive processes (Limone et al., 2020).
Regarding the third specific hypothesis, it was first evidenced (step 1) that the tasks linked to the dlPFC fail to predict the degree of academic procrastination (R² = −0.01; p = 0.558). In addition, for the fourth specific hypothesis, a second analysis (step 2) showed that sex does not play a moderating role in the relationship between both variables (β3 = 0.12; p = 0.154). To understand these results, it is important to distinguish between hot and cold executive functions. Hot executive functions involve emotion processing and regulation, motivation, reward processing (immediate versus long-term reward), and decision-making based on the subjective value of the reward, while cold executive functions are involved in purely cognitive information processing (Ward, 2020). Several cognitive processes have been linked to academic procrastination, such as cognitive flexibility, planning, goal setting, and metacognitive skills (Tan et al., 2008; Rabin et al., 2011; Ziegler and Opdenakker, 2018; Sutcliffe et al., 2019). However, in the present study, other cognitive processes were evaluated, such as verbal fluency, productivity, visuospatial planning, sequential planning, reverse sequencing, and working memory (visual, verbal, and visuospatial). In this sense, the study shows evidence that these domains do not predict academic procrastination. It is important to mention that these domains are linked to the dlPFC (Lamar and Resnick, 2004; Tsujimoto et al., 2004; Ross et al., 2007; Gläscher et al., 2019; Niki et al., 2019; Panikratova et al., 2020; Barahimi et al., 2021), one of the cortical regions associated with cold executive functions. In contrast, the OFC is mainly associated with hot executive functions (Salehinejad et al., 2021). This would explain why the tasks linked to the dlPFC fail to explain academic procrastination, but the tasks linked to the OFC do.
Regarding the fifth specific hypothesis, it was first evidenced (step 1) that the tasks linked to the aPFC fail to predict the degree of academic procrastination (R² = −0.01; p = 0.986). In addition, for the sixth specific hypothesis, a second analysis (step 2) showed that sex does not play a moderating role between both variables (β3 = 0.05; p = 0.676). These results could be because the aPFC is mainly related to high-level cognitive functions, such as metamemory, comprehension of figurative meaning, and abstract attitude (Ramnani and Owen, 2004; Flores Lázaro et al., 2008), which are purely cognitive functions. In contrast, academic procrastination is not a problem of cognitive processing but rather an eminently affective, motivational, and reward-processing problem (Damme et al., 2019).
Regarding the study's limitations, firstly, non-probabilistic sampling was used, which limits the generalization of the results; it is recommended that future studies use representative samples. Secondly, the sample size was modest, although sufficient to test the regression models. It is essential to point out that the BANFE-2 allows an objective evaluation of executive functions, for which an individual evaluation and a minimum of two evaluation sessions are required. This evaluation characteristic could justify the sample size reached in the present study, which was similar to that reported in other studies using the BANFE-2 (Rincón-Campos et al., 2019; Muchiut et al., 2021; San-Juan et al., 2022). Third, there was an unequal distribution of sexes in the sample, with women as the majority group; therefore, larger, more representative studies with balanced samples of men and women are needed to see whether the present results can be replicated. Fourth, magnetic resonance imaging (MRI) was not included in the study of the variables; it is recommended that future studies include this type of evaluation to understand the results better. Fifth, covariates such as year of study, among others, were not included; future studies should include these variables. Sixth, the OFC and the mPFC were measured with the same score, under the term orbitomedial cortex, a procedure dictated by the instrument used; it is suggested that future studies use instruments with separate scores for the OFC and the mPFC. Despite these limitations, the study's findings are important and promising, as this is the first study to assess the moderating role of sex in the relationship between executive functions and academic procrastination using a neuropsychological battery, which allows a more objective evaluation of executive functions than a self-report test.
Based on the above, it is concluded that only the tasks linked to the orbitomedial cortex significantly predict the degree of academic procrastination. In addition, the degree of prediction of the tasks linked to the orbitomedial cortex on academic procrastination is significantly moderated by the sex of the university students.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Institutional Research Ethics Committee (CIEI) of the Universidad Peruana Cayetano Heredia (204085). The patients/participants provided their written informed consent to participate in this study.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
"year": 2022,
"sha1": "8f96ef814fb5fbb13819cf2570de92b254b0fc35",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "8f96ef814fb5fbb13819cf2570de92b254b0fc35",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
Learning Panoptic Segmentation from Instance Contours
Panoptic segmentation aims to provide an understanding of background (stuff) and instances of objects (things) at a pixel level. It combines the separate tasks of semantic segmentation (pixel-level classification) and instance segmentation to build a single unified scene understanding task. Typically, panoptic segmentation is derived by combining semantic and instance segmentation tasks that are learned separately or jointly (multi-task networks). In general, instance segmentation networks are built by adding a foreground mask estimation layer on top of object detectors or by using instance clustering methods that assign a pixel to an instance center. In this work, we present a fully convolutional neural network that learns instance segmentation from semantic segmentation and instance contours (boundaries of things). Instance contours along with semantic segmentation yield a boundary-aware semantic segmentation of things. Connected component labeling on these results produces instance segmentation. We merge semantic and instance segmentation results to output panoptic segmentation. We evaluate our proposed method on the Cityscapes dataset to demonstrate qualitative and quantitative performance along with several ablation studies.
I. INTRODUCTION
Panoptic segmentation [1], [2] offers a complete understanding of a scene by providing joint semantic and instance level predictions of background and objects at a pixel level. It is usually achieved by combining outputs from semantic segmentation and instance segmentation. Examples where panoptic segmentation offers a clear advantage over standalone semantic or instance segmentation include collective knowledge of distinct objects and drivable area around a self-driving car [3], [4], semantic and instance level details of cancerous cells in digital pathology [5], and understanding of the background and of different individuals in a frame to enhance smartphone photography. Multi-task learning networks [6], [7], [8], [9] that jointly perform semantic and instance segmentation [1], [3] have accelerated the progress of panoptic segmentation in terms of accuracy and computational efficiency compared to traditional methods that use naive fusion of predictions from independent semantic and instance segmentation networks [2].
Recently, single-stage instance segmentation methods have been developed [15], [16]. These approaches use fully convolutional networks so that they can be trained in an end-to-end fashion. Semantic segmentation is a mature task that is well explored in the literature relative to panoptic segmentation. We make the observation that panoptic segmentation can be obtained from semantic segmentation by additionally estimating instance-separating contours. Naively, the instance-separating contours can be an additional class in the segmentation task; in practice, it is difficult to get good performance for this class. This is illustrated in Figure 1, where segmentation (a) and instance contour segmentation (b) contain all the information needed to obtain panoptic segmentation. The minimal contours needed are the contours that separate two instances of the same object. However, these contours do not carry sufficient information to be learned on their own, and thus we use the entire instance contours.
In this work, we present a multi-task learning network, shown in Figure 2, that learns semantic segmentation, instance contours, and center regression. The instance contours along with semantic segmentation guide us in deriving instance segmentation and eventually panoptic segmentation. Our instance contour segmentation network is a binary segmentation network that predicts instance boundaries between objects belonging to the same category. Compared to semantic edge detection networks [17], [18], our instance contour estimation does not ignore boundaries between instances of the same category. We refine low-quality instances in our instance segmentation output using center regression results: we split large instances or merge small ones using 2D offsets to an instance center predicted at a pixel level. We use a shared convolutional neural network to predict semantic segmentation, instance contours, and center regression. Instance contours along with semantic segmentation yield a boundary-aware semantic segmentation of things. Connected component labeling on these results produces instance segmentation and eventually panoptic segmentation.
We hope that our idea encourages a new direction in the research of panoptic segmentation which ultimately leads to learning of instance separating contours within the segmentation task. The main contributions of this paper include: 1) A novel method to learn panoptic segmentation and instance segmentation from semantic segmentation and instance contours. 2) An instance contour segmentation network that learns boundaries between objects of same semantic category.
II. RELATED WORK

Scene understanding [19] has witnessed tremendous progress over the past decade with the introduction of convolutional neural networks [20], [21], [22], which aided the development of semantic segmentation (pixel-wise classification) and instance segmentation (pixel-level recognition of distinct objects). Panoptic segmentation [2], a joint semantic and instance segmentation, has provided complete scene understanding by categorizing each pixel into distinct categories and instances. On the other hand, semantic edge detection [17] has been widely used to learn boundaries between semantic classes.
A. Semantic Segmentation
A few years ago, semantic segmentation [23] was considered a challenging problem. With the help of fully convolutional networks (FCNs) [24], the development of accurate and efficient solutions became possible.
Several enhancements were made to push the performance of semantic segmentation higher by improving the encoder and decoder in FCNs. Dilated residual convolutions [25], feature pyramid networks [1], [26], and spatial pyramid pooling [27] are examples of improvements made to the encoder, while U-Net [28] and densely connected CRFs [25], [29] are examples of improvements made to the decoder. We use a combination of feature pyramid networks and the lightweight asymmetric decoder presented by Kirillov et al. [1] to learn semantic segmentation.
B. Instance Segmentation
In instance segmentation, an object instance (id) is assigned to every pixel of every known object within an image. Two-stage methods like Mask R-CNN [12] involve proposal generation from object detection followed by mask generation using a foreground/background binary segmentation network. These methods dominate the state of the art in instance segmentation but incur a relatively high computational cost. Using YOLO [30], SSD [31], or other lightweight object detectors instead of Faster R-CNN [32] may seem promising, but they still involve the inevitable additional compute of generating object proposals followed by mask generation.
Other approaches to instance segmentation range from clustering of instance embeddings [33] to prediction of instance centers using offset regression [13], [14]. These methods appear logically straightforward but lag behind in terms of accuracy and computational efficiency. Their major drawback is the use of compute-intensive clustering methods like OPTICS [34], DBSCAN [35], etc. In contrast to these methods, we derive instance segmentation from semantic segmentation using instance contours (boundaries of things).
C. Semantic Edge Detection
Semantic edge detection (SED) [17], [36] differs from edge detection [37] by predicting edges that belong to semantic class boundaries. In SED, edges/boundaries that separate segments of one category from another are predicted, whereas in edge detection every edge is detected based on image gradients. Holistically-nested edge detection (HED) [38] is one of the first CNN-based edge detection methods. Later, several methods were proposed to address different challenges in edge detection, including prediction of crisp boundaries [18], [39] and the selection of intermediate feature maps and choices of supervision on these feature maps [40], [41]. It is important to note that these methods ignore the boundaries between instances of objects that belong to the same semantic category.

Fig. 3. Multi-scale features from the backbone are fed to a feature pyramid network and then to an upsampling neck followed by a prediction head. Our network has three heads for the semantic segmentation, instance contour segmentation, and center regression tasks. Separate necks can be used for different heads/tasks as needed.
Deep Snake [42] recently proposed to predict instance contours by learning contours from object detection, replacing foreground mask estimation for objects with contours to derive instance segmentation. Our instance contour segmentation, however, is a single-stage method that directly estimates contours using a binary segmentation network.
D. Panoptic Segmentation
Panoptic segmentation [2] combines semantic segmentation and instance segmentation to provide a class category and instance id for every pixel within an image. Recent works [1], [14], [3] use a shared backbone and predict panoptic segmentation by fusing outputs from semantic and instance segmentation branches. Almost every work so far uses an FCN-based semantic segmentation branch, with variations including the usage of dilated convolutions [14] or feature pyramid networks [1]. However, the choice of instance segmentation branch can vary, as discussed in Section II-B.
A major challenge in generating panoptic segmentation output is merging conflicting outputs from the semantic segmentation and instance branches. For example, semantic segmentation may predict that a pixel belongs to the car class while the instance segmentation branch predicts the same pixel as the person class. Several methods [3] were proposed to handle the conflicts in a better, learned fashion. Our method derives instance segmentation from semantic segmentation using instance contours and therefore does not require a conflict resolution policy like other existing methods.
III. PROPOSED METHOD
Our proposed method is a multi-task neural network with several shared convolution layers and multiple output heads that predict semantic segmentation, instance contours, and center regression. As shown in Figure 3, a common ResNet [22] backbone outputs multi-scale feature maps that are processed by a top-down feature pyramid network [26]. These feature maps from different levels are upsampled to a common scale through a series of 1×1 convolutions and combined before making output predictions. We refer to the upsampling stages as necks and to the prediction layers as heads.
Outputs from instance contour and semantic segmentation branches are combined to generate instance segmentation. We refine instance segmentation output using center regression results. Later, we simply merge semantic and instance segmentation outputs to generate panoptic segmentation.
A. Model Architecture
We begin by introducing our shared backbone that outputs multi-scale feature maps, as shown in Figure 3. Our backbone uses ResNet [22] as the encoder, which outputs feature maps at multiple scales {1/4, 1/8, 1/16, 1/32} with respect to the input image. Our pyramid is built using a feature pyramid network (FPN) [26], which consumes feature maps (scales 1/4 to 1/32) from the backbone in a top-down fashion and outputs feature maps with 256 channels at their input scales. Feature maps from the pyramid are then passed through a series of 1×1 convolutions and upsampled to 1/4 scale using 2D bilinear interpolation in the neck layers, as proposed in [1]. These layers have 128 dimensions at each level. We add these feature maps from different levels and pass them to the prediction heads. Our semantic segmentation head contains a 1×1 convolution layer with k filters (to output k maps for k classes) followed by 4× upsampling. We apply a softmax activation followed by an argmax function on the k output maps to derive the full-resolution semantic segmentation output. Our instance contour estimation head is similar to the semantic segmentation head except that it has one output feature map and a sigmoid activation instead of softmax. Our center regression head has two output channels that predict offsets from the instance center along the x and y axes, and it does not have any special activation function.
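For concreteness, the following is a minimal PyTorch sketch of the three heads described above; the ResNet-FPN backbone and the neck fusion that produce the 128-dim, 1/4-scale feature map are assumed to exist elsewhere, and the layer sizes follow the text (1×1 convolutions, 4× bilinear upsampling):

```python
# Minimal sketch of the prediction heads; the shared ResNet-FPN backbone and
# neck fusion are assumed to be provided elsewhere.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PanopticHeads(nn.Module):
    def __init__(self, in_dim=128, num_classes=19):
        super().__init__()
        self.semantic = nn.Conv2d(in_dim, num_classes, kernel_size=1)
        self.contour = nn.Conv2d(in_dim, 1, kernel_size=1)
        self.center = nn.Conv2d(in_dim, 2, kernel_size=1)  # (dx, dy) offsets

    def forward(self, fused):                    # fused: (B, 128, H/4, W/4)
        up = lambda x: F.interpolate(x, scale_factor=4, mode="bilinear",
                                     align_corners=False)
        sem_logits = up(self.semantic(fused))    # softmax/argmax applied later
        contour = torch.sigmoid(up(self.contour(fused)))
        offsets = up(self.center(fused))         # no special activation
        return sem_logits, contour, offsets
```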
B. Loss functions
We discuss the explicit loss functions defined for the semantic segmentation and instance contour branches. We chose cross-entropy loss for semantic segmentation:

L_s = −(1/N) Σ_i Σ_{c=1}^{k} y_{i,c} log(p_{i,c})   (1)

In Equation 1, L_s is the segmentation loss over k classes for all N pixels in the image, where y_{i,c} is the ground truth label and p_{i,c} the softmax prediction for pixel i and class c. For instance contours, we chose the weighted multi-label binary cross-entropy loss [17]:

L_c = −(1/N) Σ_i [β y_i log(p_i) + (1 − β)(1 − y_i) log(1 − p_i)]   (2)

In Equation 2, β is the ratio of non-edge pixels to total pixels in the image, y_i the binary contour label, and p_i the predicted contour probability.
We add a Huber loss term (δ = 0.3), defined as L_H(e) = ½e² for |e| ≤ δ and δ(|e| − ½δ) otherwise, with e = y − ŷ, and the NMS loss [18] term to the contour loss to predict thin and crisp boundaries.
We compute the softmax response h along the normal direction of boundary pixels c as described in [18]. For center regression, we use the Huber loss (δ = 1) to compute the error between y, the predicted offsets, and ŷ, the ground truth offsets. Our total loss function is a weighted combination of the semantic loss, the contour losses, and the center regression loss.
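As a hedged sketch of how these terms combine (the NMS term of [18] is omitted here for brevity, and the loss weights w_* are placeholders, since their values are not given in the text):

```python
# Sketch of the loss terms; the NMS loss of [18] is omitted and the weights
# w_sem/w_con/w_reg are placeholders (not specified in the text).
import torch
import torch.nn.functional as F

def weighted_bce(pred, target):
    # beta = fraction of non-edge pixels, so the rare edge pixels receive the
    # larger weight; here it is computed over the whole batch tensor.
    beta = (target == 0).float().mean()
    weights = torch.where(target > 0.5, beta, 1.0 - beta)
    return F.binary_cross_entropy(pred, target, weight=weights)

def total_loss(sem_logits, sem_gt, contour_pred, contour_gt,
               offsets, offsets_gt, w_sem=1.0, w_con=1.0, w_reg=1.0):
    l_sem = F.cross_entropy(sem_logits, sem_gt)
    l_con = weighted_bce(contour_pred, contour_gt) \
          + F.huber_loss(contour_pred, contour_gt, delta=0.3)
    l_reg = F.huber_loss(offsets, offsets_gt, delta=1.0)
    return w_sem * l_sem + w_con * l_con + w_reg * l_reg
```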
C. Instance segmentation
Our instance segmentation is derived from semantic segmentation, unlike other instance segmentation methods, as shown in Figure 4. As a first step, we generate a binary mask by searching for instance classes in the semantic segmentation, which we refer to as the instance class mask. We subtract the instance contours (generated by the instance contour segmentation head) from the instance class mask to derive a boundary-aware instance class mask. Using connected component labeling [43], we derive unique instances from the boundary-aware instance class mask. We map the semantic segmentation output to the instances generated: we assign the most frequent label found inside an instance as its category and average the softmax predictions over the area of an instance to generate its confidence.
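A minimal sketch of this derivation, assuming a (H, W) argmax semantic map, a (H, W) contour probability map, and a 0.5 contour threshold (the threshold value is an assumption):

```python
# Sketch of the instance derivation: subtract predicted contours from the
# "things" mask of the semantic output, then run connected component labeling.
import numpy as np
from skimage.measure import label

def instances_from_contours(sem_pred, contour_prob, thing_ids, thr=0.5):
    thing_mask = np.isin(sem_pred, thing_ids)           # instance class mask
    boundary_aware = thing_mask & (contour_prob < thr)  # remove contour pixels
    inst = label(boundary_aware, connectivity=2)        # unique instance ids
    # assign each instance the most frequent semantic label inside it
    classes = {}
    for i in range(1, inst.max() + 1):
        labels, counts = np.unique(sem_pred[inst == i], return_counts=True)
        classes[i] = labels[np.argmax(counts)]
    return inst, classes
```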
D. Refining Instance Segmentation
We refine the instance segmentation output using center regression results. Our refinement consists of two main stages: split and merge. We estimate the centroids predicted by the center regression head and cluster the centroid predictions within an instance using DBSCAN, splitting the instance if distinct centroids are found. If the distance between two centroids is at least 20 pixels (eps), we declare them distinct. Our clustering stage does not incur the large computational cost of other methods [33], [13], [14], since we perform clustering within instances, which are much smaller than the entire image.
After the instances are split, we estimate a mean centroid for every instance using the offsets predicted by the center regression head. If the mean centroids of two instances are closer than 20 pixels in Euclidean distance, we merge those instances. Later, we remove all instances whose area is lower than a minimum area threshold and assign their pixels to the instances whose centroids are closest to the centroids derived from the offsets predicted by the center regression head.
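A sketch of the split stage is shown below, assuming offsets of shape (2, H, W) with channels (dx, dy); eps = 20 mirrors the 20-pixel distance above, while min_samples is an assumed DBSCAN parameter. The merge stage is analogous: compute each instance's mean voted centroid and union instances whose centroids fall within 20 pixels.

```python
# Sketch of the split stage: cluster per-pixel centroid votes inside one
# instance with DBSCAN; offsets layout (2, H, W) = (dx, dy) is an assumption.
import numpy as np
from sklearn.cluster import DBSCAN

def split_instance(inst, inst_id, offsets, eps=20, min_samples=10):
    ys, xs = np.nonzero(inst == inst_id)
    # each pixel votes for its instance center: position + predicted offset
    centers = np.stack([xs + offsets[0, ys, xs],
                        ys + offsets[1, ys, xs]], axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centers)
    if len(set(labels) - {-1}) > 1:          # distinct centroids found: split
        keep = labels >= 0                   # noise pixels keep the old id
        inst[ys[keep], xs[keep]] = inst.max() + 1 + labels[keep]
    return inst
```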
E. Panoptic Segmentation
Panoptic segmentation is now obtained by simply merging the outputs of semantic segmentation and instance segmentation. As discussed in Section II-D, we do not need conflict resolution, since our instance segmentation is a byproduct of our semantic segmentation; thus, we never have conflicting predictions.
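Because the two outputs never conflict, the merge reduces to relabeling "thing" pixels; a minimal sketch is shown below (the class_id * 1000 + instance_id encoding is one common convention, not something specified in the text):

```python
# Minimal sketch of the final merge; the encoding scheme is an assumption.
def merge_panoptic(sem_pred, inst, classes):
    panoptic = sem_pred.astype("int64") * 1000          # stuff: instance id 0
    for inst_id, cls in classes.items():
        panoptic[inst == inst_id] = cls * 1000 + inst_id
    return panoptic
```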
IV. EXPERIMENTS, RESULTS AND DISCUSSION
In this section, we demonstrate the performance of our proposed method for panoptic segmentation on the Cityscapes [44] dataset. We also present the performance of the semantic segmentation and instance segmentation results that helped us generate the panoptic segmentation output.
A. Experimental Setup
Cityscapes [44] is an automotive scene understanding dataset with 2975/500 train/val images at 1024×2048 resolution. The dataset contains labels for semantic, instance, and panoptic segmentation tasks. We derive labels for our instance contour task by applying a contour detection algorithm to the instance ground truth masks. We dilate the resulting contours to derive thick contours, which serve as ground truth for our instance contour segmentation task. The Cityscapes dataset has 19 semantic object categories, of which 8 are provided with instance masks.
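A minimal sketch of this label generation, using morphological boundary extraction as one possible contour detector (the paper does not name the specific algorithm) and a dilation rate of 2 as in the ablation below:

```python
# Sketch of ground-truth contour generation from instance masks; boundary
# extraction via erosion is one simple choice, not necessarily the paper's.
import numpy as np
from scipy import ndimage

def contours_from_instance_mask(inst_gt, dilation=2):
    edges = np.zeros_like(inst_gt, dtype=bool)
    for inst_id in np.unique(inst_gt):
        if inst_id == 0:                 # 0 assumed to mean "no instance"
            continue
        mask = inst_gt == inst_id
        edges |= mask & ~ndimage.binary_erosion(mask)   # mask minus erosion
    return ndimage.binary_dilation(edges, iterations=dilation)
```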
We train our network on full resolution images with a batch size of 4 images. We use Group Normalization [45] which is effective for lower batch sizes. We use an SGD optimizer with learning rate = 0.005, momentum = 0.9, weight decay = 10 −4 . We initialize our ResNet encoders with pre-trained ImageNet [46] weights and train our networks for 48000 iterations. We measure the performance of semantic segmentation using mean intersection over union (mIoU), instance segmentation using mean average precision (mAP) and panoptic segmentation using panoptic quality (PQ) [2], segmentation quality (SQ) and recognition quality (RQ) metrics.
B. Ablation experiments

1) Instance contour segmentation loss function: As mentioned before, we aim to predict thin and crisp instance contours. We study the different loss functions discussed in Section III-B by evaluating the performance of instance and panoptic segmentation, as shown in Table I. We used a ResNet-50 encoder as our backbone and separate heads with a common neck, as discussed in Section III-A. We observed that the Huber and NMS loss terms improve the performance of the instance and panoptic segmentation results. The weighted multi-label binary cross-entropy combined with the Huber loss is the best combination we found, and we use it for the rest of the experiments in the paper. Qualitative results in Figure 5 demonstrate that the contours generated with this combination are thin and crisp.
2) Instance contour ground truth dilation rate: We generate our ground truth instance contours by applying a contour detection algorithm to the instance masks provided for different objects in the Cityscapes dataset. The number of edge pixels is much lower than the number of non-edge pixels in our contour segmentation problem. We can alleviate this class imbalance using appropriate loss functions, as discussed in Section III-B, or by dilating the contours to increase their thickness. In Table II, we evaluate the performance of instance and panoptic segmentation for different dilation rates. We observed that when an appropriate loss combination is used, the dilation rate does not have a significant impact on performance; however, increasing the dilation rate from 2 to 3 decreases performance. We use a dilation rate of 2 to generate ground truth contours for all other experiments.
3) Refining Instance Segmentation: As discussed in Section III-D, we refine our instance segmentation output using center regression results. We evaluate the effects of the split and merge components in our refinement process in Table III and the effect of the minimum instance area in Table IV.
We observed that refining the instance segmentation using offsets predicted by center regression marginally improves the performance of instance segmentation. However, the refinement is critical in cases where a broken contour misses the boundary between two instances, which would otherwise be wrongly predicted as a single instance. Similarly, occlusion by a pole or another low-width object can mislead connected component labeling into interpreting the resulting contours as separate instances. Qualitative results in Figure 5 suggest that the offsets predicted by the center regression head are accurate for objects that are closer and less accurate for objects farther away. We observed that choosing an appropriate minimum instance area threshold is critical to the performance of our proposed method. A lower instance area threshold allows removing unwanted instances generated by artifacts in contour estimation; such artifacts can result from false contours around car mirrors, convex hulls, occlusion, etc. 4) Network Ablation: We experimented with different network architecture choices, as discussed in Section III-A. We studied the impact of using a shared neck versus separate neck layers to upsample and add features from a common feature pyramid network, and we also studied how the depth of the ResNet encoder affects performance; results are presented in Table V. We observed that higher ResNet depth and separate necks yield better performance.
C. State of the Art Comparison
In Table VI, we compare our proposed method against other semantic, instance, and panoptic segmentation methods.
1) Comparison with Two-stage methods: As discussed in Section II-B, two-stage object detection methods [1], [12], [50], [11] dominate the state of the art in instance and panoptic segmentation. However, they incur additional compute costs in generating object detections followed by foreground masks. Mask R-CNN [12] for instance segmentation on a high-end GPU like the Nvidia Titan X runs at ∼5-6 fps on a 1024×1024 image. When the semantic segmentation task is executed in parallel with instance segmentation to compute panoptic segmentation, the runtime speed of Mask R-CNN [12] declines further. This makes two-stage object detection based methods unsuitable for real-time applications. Our proposed method with a ResNet-50 encoder outputs panoptic segmentation at 3 fps on a mid-grade Nvidia GTX 1080 GPU for a 1024×2048 image. We expect higher frame rates once our connected component labeling function is optimized for GPU operation, as opposed to its current CPU-based implementation.
2) Comparison with Instance clustering: Kendall et al. [13] was one of the early works that used multi-task learning to simultaneously learn semantic and instance segmentation. Panoptic-DeepLab [14] recently proposed a strong baseline for center regression based methods by exploiting the effectiveness of dual Atrous Spatial Pyramid Pooling (ASPP) modules. We believe that using an ASPP module in our network would improve our semantic segmentation performance and eventually lead to better instance and panoptic segmentation results. However, ASPP modules are computationally very expensive compared to feature pyramid networks [1].
3) Comparison with Single-stage object detection and Others: Poly YOLO [16] reported ∼22 fps on a 416×832 image with an AP score of 8.7. Other methods like Deep Watershed [48] and SGN [49] incur a huge computational complexity in their instance assignment techniques. Our method is lightweight compared to object detection and instance clustering based methods and performs better than other single-stage methods.
V. CONCLUSION
In this paper, we presented a new approach to panoptic segmentation using instance contours. Our method is one of the first approaches where instance segmentation is generated as a byproduct of a semantic segmentation network. We evaluated the performance of our semantic, instance, and panoptic segmentation results on the Cityscapes dataset. We presented several ablation studies that help understand the impact of the architecture and training choices we made. We believe that our proposed method opens a new direction in the research of instance and panoptic segmentation and serves as a baseline for contour-based methods.
"year": 2020,
"sha1": "633aed2be1a0e5d608ff37a1aaf6bf1510cdad22",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2010.11681",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6d9ea8ee9223adb0d008a4a93f44fbb2d9675fd6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Composition of Kashkaval cheese manufactured from different levels of somatic cell counts in sheep milk
The purpose of the present study was to investigate the influence of somatic cell count (SCC) on the composition of Kashkaval cheese. Kashkaval cheese samples were produced from three different batches of sheep milk with low (610 000 cells/ml), medium (770 000 cells/ml), and high (1 310 000 cells/ml) SCC, respectively. The main chemical parameters, such as pH, titratable acidity, moisture content, fat content in the dry matter, protein content, and sodium chloride content, and microbiological parameters (lactic acid bacteria count, pathogenic microorganisms, coliforms, psychrotrophs, yeasts, and molds) were studied during the ripening and storage periods. No statistically significant (P<0.05) changes were found in the values of the chemical parameters during the ripening period. At the beginning of ripening, the total lactic acid bacteria count for all cheese samples was about 4.1 log cfu/g and increased to 6.2 log cfu/g (at 60 days of ripening) for the test samples. The data collected in this study showed a slight decrease in pH values and a gradual increase in titratable acidity, indicating retarded fermentation during storage at low temperature. The lactic acid bacteria showed good survival, but higher sensitivity was observed in Lactobacillus spp. in comparison with Streptococcus spp.
Introduction
Mastitis is one of the most common diseases in dairy cattle and is responsible for major economic losses. A number of authors have found that mastitis reduces milk yield [1] and the quality and safety of raw milk [2] and dairy products [3; 4; 5]. The total somatic cell count (SCC) in milk is widely accepted as an indicator of udder health. This indicator is used worldwide for implementing hygienic control in the milk production process [6].
The most popular hard cheese produced in Bulgaria is traditional Kashkaval cheese made from cow, sheep, caprine, buffalo milk, or a mix of them. Furthermore, the main specific organoleptic characteristics of Kashkaval cheese depend on groups of factors related to milk quality, the cheesemaking process, ripening and storage periods. Therefore, those factors having an impact on cheese quality are of essential importance in the formation of flavor, aroma compounds and texture.
A limited number of studies are available in the literature on the influence of high total somatic cell counts in sheep milk on the qualitative characteristics of dairy products [7]. Therefore, the aim of this study was to evaluate the changes in the chemical composition and microbial properties during ripening and storage of Kashkaval cheese produced from sheep milk with different levels of SCC.
Sample collection
Individual milk samples from 600 Black-head Pleven dairy sheep were pooled at the morning milking with the aim of screening for SCC. Bulk milk samples were collected from March to August. On the basis of the results obtained, milk samples were distributed into three different batches with low (610 000 cells/ml), medium (770 000 cells/ml), and high (1 310 000 cells/ml) SCC, respectively (data are published).
Cheesemaking and cheese analysis
Samples from the three kinds of sheep milk were processed into Kashkaval cheese according to BNS 14:2010 [8], as follows: SkL, Kashkaval cheese produced from sheep milk with low SCC; SkM, Kashkaval cheese produced from sheep milk with medium SCC; SkH, Kashkaval cheese produced from sheep milk with high SCC.
Kashkaval cheese samples were analysed over the course of the ripening and storage periods. Chemical analyses were performed for: fat content in the dry matter [9]; sodium chloride content [10]; moisture content and dry matter [11]; total nitrogen by the Kjeldahl method [12], with the protein content then calculated as the total nitrogen multiplied by 6.38; titratable acidity (TA) by Thorner's method [13]; and potentiometric pH measurement with a pH meter 7110 WTW (Germany).
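As a minimal illustration of the Kjeldahl protein calculation above (the nitrogen value below is hypothetical):

```python
# Protein (%) = total nitrogen (%) x 6.38 (nitrogen-to-protein factor for milk)
total_nitrogen_pct = 3.60          # hypothetical measured total nitrogen, % (w/w)
protein_pct = total_nitrogen_pct * 6.38
print(f"Protein content: {protein_pct:.2f}%")  # Protein content: 22.97%
```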
Statistical analysis
The resulting data were processed in Microsoft Excel 2010 using one-way ANOVA. The results are presented as mean values ± SD (n = 3).
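For readers reproducing the analysis outside Excel, a one-way ANOVA over triplicate measurements could look like the following sketch; the moisture values are hypothetical and SciPy is assumed:

```python
from scipy import stats

# Hypothetical triplicate moisture measurements (%) for the three batches
skl = [44.1, 44.3, 44.5]
skm = [43.6, 43.9, 44.2]
skh = [42.0, 42.3, 42.7]

f_stat, p_value = stats.f_oneway(skl, skm, skh)
# A p-value above 0.05 would indicate no significant batch effect
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```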
Results and Discussion
The changes in the chemical composition of Kashkaval cheese samples during the ripening and storage periods are given in Table 1 and Table 2, respectively.
The changes that occur in the chemical parameters during ripening at 9±1°C for 60 days are of crucial importance for the quality of Kashkaval cheese. The results show that over the study period the moisture content of the SkL, SkM and SkH samples varied between 42.0-44.5%, and the dry matter content between 55.0-57.5%. A similar trend was observed for the protein content, fat content in the dry matter, and salt content, whose values did not undergo significant changes. These results correspond with those of Hachana et al. [20], where no significant differences were observed in the moisture, fat, and total protein contents of mozzarella cheese samples prepared from milk with different SCC levels. For the purposes of the experiment, the storage process was carried out at 3±1°C for 12 months. During the storage period, the experimental samples of Kashkaval cheese had standard chemical parameters.
Despite the fact that Kashkaval cheese was produced with milk containing different SCCs (SkL, SkM and SkH), no statistically significant (P>0.05) changes in the values of the chemical parameters were established in this study. This was probably due to the fact that ripening and storage took place under vacuum in an oxygen-free environment.
The dynamics of the fermentation process in the Kashkaval samples during ripening and storage are presented in Fig. 1 and Fig. 2, respectively. Fermentation of lactose to lactic acid by the lactic acid bacteria is an essential process that is vigorous in the early stages of cheesemaking (biological maturation of milk, curdling, processing, heating, and cheddaring) and continues at a slower rate during ripening and storage. The resultant growth and activity of the lactic acid microflora correlated with the decrease in pH (active acidity) and the increase in titratable acidity. As the data in Figs. 1a and 1b show, at the beginning of the ripening process the microbial population was in the range of 4.0-4.1 log, rising by about 2 log by the end of the period. The pH values decreased at a steady rate by about 0.5, and TA increased by approximately 20 °T. A significantly (P<0.05) larger increase in TA and a lower pH, together with a smaller increase in the starter microflora, were observed in the SkH samples (Fig. 1c) compared to the SkL and SkM samples. The increase in titratable acidity and the decrease in pH in cheeses during ripening have been reported by a number of other authors [21; 22; 23]. The results of this study are not in agreement with those of Mazal et al. [24], who found that the pH value of milk was not affected by the SCC; nevertheless, the cheese produced from high-SCC milk presented significantly higher pH values during manufacture and a longer clotting time.
During storage, fermentation is slow due to the slow growth of the lactic acid microflora, composed mainly of thermophilic microorganisms. Somalis et al. [25] reported that in the production of hard Greek cheese made from a sheep/goat milk mixture, not only thermization of the milk at 60-67 °C for 30 s but also stretching the cheese curd in hot brine significantly reduce the lactic acid bacteria. Figs. 2a, b and c show that the increase in titratable acidity was more intense than the decrease in pH, probably due to the buffer capacity of ripe Kashkaval cheese. There were no statistically significant (P>0.05) changes in the number of lactic acid bacteria in the SkL and SkM samples, in contrast to their lower number in SkH.
The changes in the nonstarter microflora in the samples of Kashkaval cheese from sheep milk during ripening and storage are given in Table 2. It was found that during the ripening period the number of psychrotrophic microorganisms increased by about 1 log in the SkL and SkM samples, and by 2 log in SkH. The more vigorous growth of this type of microorganism was most likely due to the retarded lactic acid process, which correlated with the higher titratable acidity values and lower pH value. Furthermore, it was found that the inability of the starter microorganisms to grow and adapt to the medium during ripening also substantially contributed to the growth of psychrotrophs. Tripaldi et al. [26] reported that stretching the cheddared cheese curds in a saline solution resulted in low contamination of fresh cheese with pathogenic and hygiene-marker microorganisms, and no presence of Enterobacteriaceae or coliforms was found.
During storage, the number of psychrotrophic microorganisms increased by about 2 log in all three samples of Kashkaval cheese, with the highest value in the SkH sample. The growth of this type of microorganism was favored by the fact that storage was carried out at temperatures optimal for their development. According to Farkye [27], psychrotrophs (of the genera Pseudomonas, Aeromonas and Acinetobacter) are undesirable microorganisms in cheese, as they may contaminate the cheese via the technological equipment, or post-processing contamination may occur, leading to inferior quality of the final product, deviations in cheese colour and/or texture, or reduced shelf life.
In the experimental samples of Kashkaval cheese, no coliform bacteria, molds, yeasts or pathogenic microorganisms were detected during the ripening and storage periods. The low levels of adventitious microorganisms in the samples were the result of the good manufacturing and hygiene practices followed throughout the cheesemaking process. The absence of undesirable adventitious microflora in the unripened product is an important prerequisite for the proper course of the ripening process. A study by Baruzzi et al. [28] found that storing mozzarella cheese at 4.0±1.0°C significantly increased the number of nonstarter microorganisms. The authors found that cheeses provide a good environment for the development of microorganisms, and that a high moisture content and storage temperatures above 1°C favour the growth of their populations. Pappa et al. [29] reported that in fresh cheese the amount of detected yeasts was below 100 cfu/g, but as the ripening process progressed their amount increased and at the end of the study period reached a value not higher than 3 log cfu/g. However, the established values for molds remained negligibly low (<50 cfu/g).
Conclusion
The results of this study lead to the conclusion that, during ripening at 9±1°C for 60 days and storage at 3±1°C for 12 months, somatic cell counts did not affect the chemical parameters (such as moisture content, dry matter, fat content in the dry matter, total protein and salt content) of the Kashkaval cheese samples.
During the same period, the SkH cheese samples had a significantly lower lactic acid bacteria count and pH values, and higher titratable acidity, compared to the SkL and SkM samples. The number of psychrotrophic microorganisms was highest in the Kashkaval cheese from sheep milk with high SCC (SkH) compared with the SkL and SkM samples. No coliform bacteria, molds, yeasts or pathogenic microorganisms were detected in any of the samples during the ripening and storage periods. Further studies are needed to determine the influence of SCC on the dynamics of proteolysis and lipolysis during ripening and storage, and on the sensory acceptability of the final cheese product.
"year": 2022,
"sha1": "0807712cf6700d4f5feb04647623d65cdf9dd822",
"oa_license": "CCBY",
"oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2022/04/bioconf_foset2022_01001.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fef7c9b34465e86a266fd615ba7b5ffd3633e1b3",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
Repeated assessment of work-related exhaustion: the temporal stability of ratings in the Lund University Checklist for Incipient Exhaustion
Objective: Screening inventories are important tools in clinical settings and research but may be sensitive to temporary fluctuations. Therefore, we revisited data from a longitudinal study with the Lund University Checklist for Incipient Exhaustion (LUCIE) that comprised occupationally active individuals (n = 1355; 27–52 years; 57% women), one initial paper-and-pencil survey, and 10 subsequent equally spaced online surveys. In the present study we examine to what extent the LUCIE scores changed across 3 years (11 assessments) and whether episodes of temporarily elevated LUCIE scores (LTE) coincided with reports of negative or positive changes at work or in private life. Results: In the total sample, the prevalence rates for the four LUCIE classifications of signs of increasing exhaustion (from no exhaustion to possible exhaustion disorder) ranged from 65.4–73.0%, 16.6–20.9%, 6.2–9.6%, and 3.4–5.0%. Of 732 individuals screened for LTE episodes, 16% had an LTE episode. The LTE episodes typically coincided with reports of adverse changes at work or, to a lesser extent, in private life. Thus, LUCIE classifications appear reliable and lend themselves to repeated use with the same individuals or group of individuals. Even single episodes of elevated LUCIE scores seem to appropriately indicate adverse reactions to the work situation.
Introduction
Screening inventories are important tools in occupational health care and research settings. However, for practical and economic reasons, they are typically applied only once and may thus be sensitive to temporary fluctuations related to the individual, the context, or statistical phenomena (e.g., regression to the mean) [1]. During repeated assessment, the complexity of the test, the number of administrations and the time between assessments is also a concern [2,3]. Because re-test effects can create ambiguous results and contribute to unreliable classifications of various medical and psychiatric conditions, it is essential to understand the temporal stability of test scores [2,4,5].
To further the knowledge on repeated assessment of work-related exhaustion, we revisited a validation study entailing the Lund University Checklist for Incipient Exhaustion (LUCIE) and 11 assessments across 3 years [6][7][8]. LUCIE is intended to assess behaviors, feelings and symptoms associated with prodromal stages of exhaustion disorder (ED) [6,7]. As such, it aligns with clinical experience and research suggesting that early detection/intervention is important [9,10]. The present objective was to examine how stress and exhaustion warning scores changed across the study period and whether episodes of temporary elevations in LUCIE were associated with personality trait scores or coincided with reports of negative or positive changes at work or in private life. Presumably, temporary elevations that coincide with reported changes in work and/or private life would indicate that LUCIE has an appropriate sensitivity to real-life changes. The research questions were:
• To what extent is the point prevalence of stress and exhaustion warnings in LUCIE stable across 11 consecutive measurements?
• Are temporary stress or exhaustion warnings commonly occurring, and are they preceded by, or concurrent with, reports of changes at work and/or in private life?
• Do individuals with temporarily elevated stress or exhaustion warnings differ from individuals never displaying stress or exhaustion warnings regarding demographic characteristics, personality traits and descriptions of work and private life stressors?
Measures
LUCIE entails 28 items covering six domains that make up two supplementary scales: the Stress Warning Scale (SWS) (0-100) and the Exhaustion Warning Scale (EWS) (0-100). Using pre-defined cut-off scores on each scale, the SWS and EWS are combined into a four-step ladder of incremental stress symptomatology: Step 1-GG (normal: SWS green zone and EWS green zone), Step 2-YG (SWS yellow zone and EWS green zone), Step 3-RG (SWS red zone and EWS green zone), and Step 4-RR (possible ED: SWS red zone and EWS red zone). For details on the scoring and development of LUCIE, see Persson et al. [7]. Passing episodes of elevated SWS and EWS scores (i.e., LUCIE Temporary Elevation [LTE]) were identified for each individual. An LTE episode/case was defined by temporarily scoring in the red zone on either scale (i.e., Step 3-RG or Step 4-RR) while scoring at Step 1-GG or Step 2-YG in the assessments before and after. Given this definition and study design, up to 5 LTE episodes per individual could be achieved.
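To make the classification and LTE rules concrete, a minimal sketch follows; the actual zone cut-off scores are given in Persson et al. [7], so the threshold values below are hypothetical placeholders:

```python
# Hypothetical cut-offs on the 0-100 scales; see Persson et al. [7] for the real values
SWS_YELLOW, SWS_RED = 25, 50
EWS_RED = 50

def lucie_step(sws: float, ews: float) -> int:
    """Map SWS/EWS scores to the four-step ladder (1 = Step 1-GG ... 4 = Step 4-RR)."""
    if sws >= SWS_RED:
        return 4 if ews >= EWS_RED else 3  # Step 4-RR or Step 3-RG
    return 2 if sws >= SWS_YELLOW else 1   # Step 2-YG or Step 1-GG

def lte_indices(steps: list[int]) -> list[int]:
    """Assessments counting as LTE: red zone (Step 3/4) flanked by Step 1/2 on both sides."""
    return [i for i in range(1, len(steps) - 1)
            if steps[i] >= 3 and steps[i - 1] <= 2 and steps[i + 1] <= 2]

# Example: one LTE episode at the third assessment
print(lte_indices([1, 2, 4, 1, 1]))  # [2]
```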
Personality traits were assessed in five dimensions at T0 with a Swedish 44-item version of the Big Five Inventory (BFI) [12,13].
Two forced-choice items asked: "Has your situation at work (alternatively, in your private life) changed in a positive or negative direction during the past couple of months?" [6]. Participants were also encouraged to complete an optional free-text field (480 characters).
Data management, statistical analysis and analysis of free-text answers
LTE cases were drawn from the control group sample (n = 745) in a previous study [6]. None of these participants (n = 745) had shown a sustained stress or exhaustion warning (i.e., over several consecutive quarters) in the previous longitudinal study [6], but some displayed intermittent elevations in LUCIE scores (i.e., during only one quarter). Thus, we targeted only control group participants with intermittent LTE episodes. In this group, 82% had completed all 11 surveys, 17% failed to reply to 1 to 3 surveys, and < 1% failed to respond to ≥ 4 surveys [6].
Because the items "Changes in the situation at work and in private life" were introduced at T1, the search of LTE cases entailed waves T1 to T10 and 732 individuals. When LUCIE scores across three consecutive quarters (Q) confirmed an LTE for the first time, the elevation phase was set to Q2, the preceding phase to Q1 and the return phase to Q3. The LTE data was compiled into a new data set and merged with the data from non-LTE participants at T8 to T10.
Statistical analysis applied traditional non-parametric and parametric testing using IBM SPSS software, version 25 (the two-tailed alpha level was set to ≤ 0.05). Sensitivity analyses evaluated potential effects of participant dropout. Thematic analyses of the free-text commentaries used the categories established in our previous study [6].
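As a rough illustration of the kinds of tests reported below, using SciPy in place of SPSS; the group values and contingency counts are hypothetical:

```python
from scipy import stats

# Hypothetical SWS scores at Q2 for LTE cases vs. controls (Mann-Whitney U test)
lte_sws = [38, 45, 52, 41, 60, 47]
ctrl_sws = [12, 20, 15, 25, 18, 22]
u_stat, p_u = stats.mannwhitneyu(lte_sws, ctrl_sws, alternative="two-sided")

# Hypothetical 2x2 contingency table: negative change at work (yes/no) by group
table = [[82, 34],     # LTE cases: yes, no
         [240, 376]]   # controls: yes, no
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"Mann-Whitney U: p = {p_u:.4f}; chi-square: p = {p_chi:.4f}")
```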
Results
Both the participation rate and the median SWS scores declined slightly between T0 and T4 but stabilized thereafter (Table 1). Sensitivity analyses entailing the subset of participants with complete data across the 11 assessments (n = 670; 49%) indicated a similar pattern of decline in SWS scores. The median EWS score mostly exhibited a floor effect throughout the study (Table 1).
[Table 1: Distribution of prevalence rates and median scores (Mdn) with accompanying 95% confidence intervals [95% CI] for the LUCIE classes (Step 1-GG to Step 4-RR) and the SWS and EWS scales across the 11 assessment rounds for the total study sample at each round.]
The SWS and EWS scores were generally higher in the LTE group than in the control group across all three quarters (p < 0.001; Mann-Whitney U-test; Additional file 3), and most clearly so at Q2 (Elevation phase).
Ratings of both negative and positive changes at work were more frequent among LTE cases (71% and 54%, respectively) than among controls (39% and 46%, respectively) (χ2: p < 0.001; Fig. 1; Additional file 4). For both types of ratings, the largest difference occurred at Q2, at which 19% among controls, and 58% among LTE cases, reported a partly or highly negative change at work (χ2: p < 0.001). Conversely, 27% of the controls reported a partly or highly positive change at work whereas only 15% of the LTE cases did (χ2: p < 0.001).
Ratings of negative and positive changes in private life were more frequent among LTE cases (41% and 49%, respectively) than among controls (23% and 38%, respectively) (χ2: p < 0.001; Fig. 1; Additional file 4). For ratings of negative changes, the largest difference occurred at Q2, at which 10% among controls and 28% of LTE cases reported a partly or highly negative change in their private situation (χ2: p < 0.001). For ratings of positive changes, the largest difference occurred at Q3, at which 18% among controls and 29% of the LTE cases reported positive changes in the private situation (χ2: p = 0.006). The analysis of the free-text commentaries gave a deeper understanding of the complaints and delineated the interplay between work life and private life. See Additional files 5 and 6 for a listing and an in-depth analysis of the free-text answers, respectively. Noticeably, however, when analyzing the 45 free-text answers from the in total 48 LTE cases that had rated negative changes in private life on the forced-choice item, it became clear that some had misattributed a negative impact from work as a "negative change in private life". Thus, discounting reports like "feeling worn out due to work" and reports flagging spillover from work to family as a private burden, only 29% had a solely (genuine) private burden unrelated to work in the total group of 116 participants with an LTE, in contrast to the 41% reported above (see Additional file 6 for computation details).
[Table 2: Baseline demographic characteristics and personality traits according to the Big Five Personality Inventory (BFI) of the participants identified as having a LUCIE temporary elevation (LTE) and participants without any LTE across the 11 assessments (controls). Table note: An LTE episode/case was defined by temporarily scoring in the red zone on the LUCIE SWS or EWS scales (i.e., Step 4-RR) while scoring at Step 1-GG or Step 2-YG in the assessment before and after. Comparisons with categorical data were made with Pearson chi-square tests; comparisons involving continuous outcomes were made with one-way analysis of variance F-tests (ANOVA).]
Reports of simultaneous negative changes at work and in the private sphere were infrequent among LTE cases at Q1 (7%) and Q3 (3%) but rose to 20% at Q2. Some 20% of LTE cases did not report any negative change at work or in the private sphere during Q1 to Q3; see Additional file 7 for further details.
Discussion
The prevalence rates for the stress and exhaustion warnings in LUCIE (i.e., Step 1-GG to Step 4-RR) were essentially stable throughout the study period, although the median SWS scores declined between T0 and T4, indicating a weak drift towards better health. Conspicuously, the participation rates declined in parallel. However, the sensitivity analyses reject participant dropout as an explanation for the decreasing SWS scores. Noticeably, only 16% displayed an LTE, and women were overrepresented with a ratio of 2:1. Despite a minute effect size, the higher neuroticism scores among LTE cases corroborate previous cross-sectional and longitudinal findings suggesting that personality traits and stress reactions are to some extent related [6,7,14]. More important, however, is that the LTE episodes coincided more frequently with ratings of changes in the work situation, and predominantly so during the elevation phase (Q2), than with changes reported to occur in the private life sphere. The analysis of the free-text commentaries strengthened this view. Indeed, some LTE cases misattributed work exposures as private life stressors. Thus, even a short-term impoverishment of the work situation appears to be associated with the reporting of stress and exhaustion symptoms in LUCIE. In accordance with previous findings in cases of long-term elevation of LUCIE scores [6], LUCIE appears to be a sensitive measure of short-term stress symptoms/signs related primarily to the work situation and, as such, is probably a useful tool in the clinical screening of early signs of stress symptomatology and exhaustion in working populations.
Although LTE cases more frequently reported both negative and positive changes at work and, to a lesser extent, in the private situation, 20% of the LTE cases did not report any negative change whatsoever. This puzzle remains even after analyzing the LTE episodes in relation to a control question documenting the occurrence of circumstances that in theory could have biased the replies in the original survey (e.g., pregnancy, menopause, pain, somatic disease, disturbed sleep due to small children or late habits, or other unspecified private life burdens; data not shown). Yet humans sometimes display symptoms without being able to attribute them to a specific external or internal factor. Such unknown, or random, variation underlines that results from screening instruments at the individual level are only fully understood in a confident dialogue with the person screened. Since temporary fluctuations in mood and performance may occur even in the absence of any factor identifiable to the individual, single temporary elevations in LUCIE scores should be conceived of as possible indications of increased stress symptoms.
[Fig. 1: Ratings of changes in the work situation (left graph) and in the private situation (right graph). Within each graph, the left panel shows ratings during the three quarters of fulfillment of the criterion among LUCIE temporary elevated cases (LTE; n = 116), whereas the right panel shows the corresponding data for controls (n = 616).]
Conclusions
Participation rates and median stress warning scores declined independently of each other during the first five assessment rounds but stabilized thereafter. The overall pattern of results suggests that LUCIE classifications are reliable and lend themselves to repeated use with the same individuals or group of individuals. Thus, even single episodes of elevated LUCIE scores seem to appropriately indicate adverse reactions to the work situation.
Limitations
Since the participants were highly educated and all were healthy when entering the study, the results may underestimate population levels of stress and exhaustion warnings and the occurrence of temporary elevations (LTE episodes). The calculations of 95% confidence intervals (CI), and the analysis of LTE data, did not account for clustering within individuals. Thus, the CIs may be too narrow due to an underestimation of the standard errors.
"year": 2020,
"sha1": "1e5e2b31407a56241ec6170ffe6ac876e1d78496",
"oa_license": "CCBY",
"oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-020-05142-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c8b45b6489fc0061700a0e8128304ab661a6408",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cultivating Science Teachers' Understandings of Science as a Discipline
Current visions of science education advocate that students should engage with science in the classroom in ways that mirror the work of scientists in order to develop science proficiency. Toward this goal, teachers are tasked with the complex responsibility of supporting students in understanding not only the conceptual knowledge of science, but also its disciplinary practices, norms, and epistemologies. In order for teachers to teach in such ways, they must be afforded opportunities to develop and reflect on their own disciplinary understandings about science. Research Experiences for Teachers (RET) programs, in which teachers engage in research with scientists, may be fertile contexts for the development of teachers' robust understandings about science. As such, the purpose of this naturalistic single-case study is to explore the ways in which one elementary teacher (Ava) describes shifts in her disciplinary understandings about science after participating in a 6-week summer Research Experience for Teachers program. Through examination of interviews and observations, this study takes a critical event narrative analysis approach to unpack the ways in which Ava interprets certain disciplinary understandings about science in light of events during her research experience that, to her, had a lasting and important impact on her understandings of science. We conclude by discussing the implications of this work for research and professional development design.
Introduction
Current visions for science learning in K-12 classrooms advocate that students should engage with science in ways that mirror the work of scientists in order to develop proficiency in the discipline (National Research Council [NRC], 2012). From this lens, the goal of science education is not only to provide students with the conceptual knowledge of science, but also to give them opportunities to practice the doing of science and to gain epistemological insights about the discipline (Duschl, 2008; Engle & Conant, 2002; Ford, 2008; Hodson, 2014; Kelly, 2018).
Ideally, such learning will also move beyond simply learning content knowledge to include overturning assumptions about who is allowed access to the scientific community or viewed as capable in science (Chambers, 1983; Sharkawy, 2009) and complexifying the notion that science is straightforward and procedural in nature (Harwood et al., 2005; Stroupe, 2014). It should also include supporting students in coming to know and use the discursive practices of constructing explanations from evidence and evaluating claims through argumentation (Ford, 2008; Ryu & Sandoval, 2012; Zembal-Saul et al., 2013). Further, the development of science proficiency would help students in navigating the emotions and feelings they encounter as they participate in science work in the classroom (Jaber & Hammer, 2016a, b; Davidson et al., 2020).
To be effective in supporting students in learning the conceptual knowledge of science and its disciplinary practices and epistemological underpinnings, teachers themselves must have opportunities to develop and reflect on their own understandings about science (Passmore, 2014; Reiser, 2013). Yet few teachers have such opportunities, such as involvement in scientific research, to refine their disciplinary understandings. This unfamiliarity with doing science makes it difficult for teachers to translate the practices and epistemologies of the discipline into their classroom (Banilower et al., 2013; Capps et al., 2012; Hodson, 2014).
One context that holds great potential for engaging teachers in scientific work lies in Research Experience for Teachers (RET) programs. Such programs have been instituted at national laboratories and universities as professional development venues wherein K-12 teachers participate in extensive research through collaborative work with scientists (Enderle et al., 2014; Dixon & Wilke, 2007; SRI International, 2007). Because of their immersive nature, RET programs can be fruitful access points for teachers to develop refined understandings about science.
Much of the prior research around RET programs has attended to the ways in which such immersive experiences can promote change in teachers' views about science inquiry and the nature of science. For example, several studies have noted that teachers' understandings shifted from naïve to more sophisticated conceptions of science after participating in RET programs (Anderson & Moeed, 2017; Blanchard et al., 2009; Buxner, 2014; Grove et al., 2009; Varelas et al., 2005). However, these studies also note that, while measurable through surveys and pre-post interviews, changes in teachers' understandings were often small in nature or were mostly seen in those who entered the program with more nuanced understandings to begin with (Blanchard et al., 2009; Buxner, 2014). Moreover, studies have suggested that if substantial changes in teachers' understanding about science are sought, the research experience needs to have specific characteristics to engender such change. Possible characteristics include the degree of agency and choice teachers have over their research experience and its relation to their own interests (Southerland et al., 2016) and the nature of the social interactions that the teachers have with scientists and others in their lab (Southerland et al., 2016; Davidson & Hughes, 2018).
In sum, as "intuitively pleasing" (Blanchard et al., 2009, p. 322) as RET programs can seem, participation in such a program does not guarantee that teachers will come away with more sophisticated understandings about disciplinary content, practices, or epistemologies in science. When teachers do experience and articulate lasting shifts or new realizations in their disciplinary understandings as a result of RET participation, we argue that it is important to understand how these changes have come about in order to create future research experiences that are more supportive of teacher learning and change.
Within the context of RET programs, teachers experience many moments, events, and interactions in the field or laboratory that are designed to deepen their understandings of science. Conversely, teachers may also experience events and interactions that are unplanned but nonetheless become critical in shaping their disciplinary understandings. By attending to and taking seriously those events which teachers themselves report as important for their disciplinary understandings, an approach known as critical event narrative analysis (Avraamidou, 2016; Webster & Mertova, 2007), researchers can gain insight into teachers' emergent understandings of science that may otherwise remain unnoticed.
To our knowledge, no studies have explicitly examined what teachers themselves identify as particularly productive components in their research experiences and how certain events within their RET participation might shape their understandings about the discipline of science. This study begins to address this gap by exploring how one elementary teacher describes shifts in her understandings about science in light of personally relevant and meaningful events that occurred during her participation in scientific research.
In the remainder of this work, we draw on philosophical arguments in science education to discuss the ways in which our field has framed epistemic understandings about the discipline of science that inform current views of science education. We then examine how RET programs may serve as fertile contexts for teachers to develop disciplinary understandings through encounters with "critical events" during their RET participation. Building on this, we present our study of one elementary teacher's emergent and shifting understandings about science in a 6-week RET program and discuss the ways in which particular events were consequential for her understandings. We conclude by discussing the implications of this work for research and professional development design.
Views of Disciplinary Understandings About Science
The epistemological underpinnings and assumptions of what science is, how it is done, and who practices science have long been a matter of debate for philosophers, historians, and sociologists of science and even scientists themselves-not to mention researchers in science education concerned with those aspects of science that form an important part of students' wider scientific proficiency (Duschl, 2008;Ford, 2008;Hodson, 2014;Schweingruber et al., 2007). Because this study is largely focused on how teachers come to understand aspects of the discipline of science, we draw from examples within science education to discuss various views on "disciplinary understandings about science." Before moving on, however, it is important to describe what we mean by disciplinary understandings about science in the context of this study.
Research in the fields of philosophy of science and sociology of science has typically explored the discipline of science through two distinct lenses. Philosophy of science is generally concerned with that which is unique about the discipline (Curd et al., 2013) in terms of how it differs from other human endeavors and activities and the epistemological tenets which separate science from non-scientific pursuits. The sociology of science, on the other hand, has traditionally focused on the work of scientists as socially and culturally situated and asks questions about scientists' practices, dispositions, and qualities they bring to their scientific work as they interact within a community of science practice (Grinnell, 2009;Zuckerman, 1988). Taken together, research from these fields has served to further the understanding that science as a discipline is a complex human endeavor wherein culture, practice, epistemic orientations, and knowledge constructed about the natural world intersect in ways that push beyond essentialist or siloed views of what science is and how it is done.
For this work, we refer to disciplinary understandings about science from a holistic view to mean those constructs that include considerations for not only the epistemic dimensions of science but also for the social, conceptual, material, and affective dimensions of science as it is practiced in laboratory and field settings. In this way, "disciplinary understandings about science" is meant to encompass the characteristics and features of scientific work; the norms, practices, and tools of the community in which that work occurs; the sociopolitical and cultural contexts that influence science, the humanity, individuality, and identities of scientists; and the conceptual knowledge and epistemological reasoning scientists use to develop new understandings about the natural world. As such, we view disciplinary understandings as interwoven constructs that, while possible to parse, are still connected. However, this view has not always been the view of the field.
From a historical view of science education in the USA, disciplinary understandings about science were seen as more positivistic in nature (Burbules & Linn, 1991;Rudolph, 2002) during Cold War era science education, wherein curricula implicitly asserted science as a fully objective, logic-driven, and truth-oriented endeavor for those who would eventually become scientists. With an aim toward placing students into a science career "pipeline" (Duschl, 2008;Rudolph, 2002), science content was largely presented as factual information to be memorized through textbooks and lectures, and doing science was presented as procedural laboratory experiences with predetermined outcomes (Rudolph, 2002).
Subsequent science education reform efforts in the late twentieth century began to consider more broadly the import of "science for all," wherein the goal of science education was to create a more scientifically literate population (Rutherford & Ahlgren, 1991) for the purposes of developing a citizenry that would be more able to engage with "the economic and democratic agendas of our increasingly global market-focused science, technology, engineering, and mathematics (STEM) societies" (Duschl, 2008, p. 268). This more recent orientation toward science education included discussion of disciplinary understandings about science that move beyond the largely positivistic views previously emphasized (Rutherford & Ahlgren, 1991); some of these included the nature of scientific inquiry, the notion that science is a social endeavor, the importance of evidence and explanation, and the tentative nature of scientific knowledge.
While there was a newfound consideration for such aspects of science in science education, consensus as to what aspects should be included and how they should be taught was not originally reached. Over time, some scholars suggested that disciplinary understandings about science should include specific tenets of the nature of science or of scientific inquiry as described by some philosophers of science (Lederman et al., 2002; Schwartz et al., 2008). Others have critiqued this view of disciplinary understanding as essentialist and have called instead for more practice-oriented and contextually relevant views of science (Hodson & Wong, 2017). Moreover, other views may extend these tenets of science to include considerations of culture, language, historical and political contexts, economic aspects of science, and the personal relevance of science knowledge (Dagher & Erduran, 2016; Hodson & Wong, 2017; Lave & Wenger, 1991; Longino, 2002). Such views may acknowledge how particular political, cultural, and social structures influence the types of questions scientists are able to pursue. These views also recognize that it is the people in the disciplinary community of science who mutually decide and agree upon the norms and practices of the community.
In alignment with this broad view of disciplinary understandings, current reform efforts assert that students should be afforded opportunities to think, act, behave, and feel as scientists do (NGSS Lead States, 2013;NRC, 2012). While there are limits imposed by the differences between the epistemic aims of practicing scientists and those of students in science classrooms, students are certainly capable of engaging in the disciplinary norms, practices, epistemic orientations, and knowledge-building work of science in ways that mirror the disciplinary engagement of scientists. Through learning opportunities that center scientific engagement in these ways, students can come to understand that scientists hold themselves accountable to their counterparts within a scientific community (Lave & Wenger, 1991;Traweek, 1988), they can participate in knowledge construction through argumentation (Okasha, 2002), and they can practice habitual stances of skepticism, tenacity, and curiosity toward their work (Gauld, 2005;Wenning, 2009). Additionally from this view, understanding the discipline of science also includes acknowledgement of the humanity of scientists as they manage the affective experiences-such as frustration, puzzlement, wonder, and joy-that accompany scientific endeavors (Arango-Munoz, 2014). Navigating feelings and emotions in science may be implicit, but it is important to recognize that such navigation is a necessary part of doing science (Jaber & Hammer, 2016a, b). Scientists must, for example, manage frustration in the face of a setback in order to persevere and problem-solve or temper the excitement of a potential new discovery or breakthrough with skepticism toward their findings.
Disciplinary understandings about science-as we have described them-have driven the ways in which the science education community has oriented to what matters for students' science learning, and this orientation greatly influences what and how science teachers teach. In the next section, we discuss how RET programs may be important contexts to support teachers in developing disciplinary understandings of science.
RETs as Contexts for Promoting Teachers' Disciplinary Understandings
Current visions for science learning (NRC, 2012) necessitate that teachers' instructional planning and classroom practice position students as learners, doers, and thinkers in science. In order to effectively design and support such learning opportunities for students, we argue that K-12 teachers should be afforded opportunities to become familiar with scientific work and to grasp how epistemic underpinnings and assumptions influence scientific research in communities of practice (Ford, 2008).
Yet this is a difficult expectation to place on many teachers who may have had little experience in doing science for themselves. Much of in-service teacher knowledge of science is developed in the context of undergraduate science coursework or laboratory experiences wherein scientific work is often portrayed as confirmatory labs with prescribed procedures, featuring limited opportunities for explicit reflection about the discipline of science (AAAS, 2012;Banilower et al., 2013;Fulp, 2002). In some cases, K-12 science teachers may have had some opportunities for laboratory experiences that more closely parallel the work of "real-world" scientists, but this is a fairly rare occurrence for in-service secondary teachers and even more so for elementary teachers who are particularly critical for shaping students' earliest science experiences (Banilower et al., 2013).
One potential context that could support teachers-elementary and secondary alike-in experiencing and reflecting on their disciplinary understandings about science is that of RET programs. Because RETs often require sustained participation in research over multiple weeks, such programs may serve as important contexts in which teachers not only engage in scientific work but interact with and experience a science community of practice (Davidson & Hughes, 2018). As such, RETs may allow teachers to recognize, reflect upon, and refine their understandings about science, understandings on which this present study focuses using the lens of critical event analysis. In the next section, we describe critical event analysis as a useful approach for examining the ways in which RET participants come to experience shifts in their disciplinary understandings about science.
Critical Events and Teachers' Research Experiences
Research on teacher learning around RETs suggests that teachers often benefit from their participation in terms of increased content knowledge, changes in conceptions of the nature of science, changes in beliefs about certain aspects of scientific research, and improvements in their abilities to communicate and participate in scientific discourse (Anderson & Moeed, 2017;Buck, 2003;Dixon & Wilke, 2007;Dresner & Worley, 2006;Faber et al., 2014;Hofstein & Lunetta, 2004;McLaughlin & MacFadden, 2014). However, while RETs are largely considered to be impactful experiences for teachers, there is still much to be understood about why and how RET participation is such a profound experience. RET professional development programs are often treated as a "black box" (Southerland et al., 2016, p. 3), and such "black box" investigations do little to shed light on how changes in disciplinary understanding occur for teachers.
At the outset of this work, we conjectured that over the course of a sustained research experience such as an RET, teachers encounter particular moments or events-critical events-that are fundamentally important to how they understand and view the discipline of science. Examining such events for teachers through the lens of critical event narrative analysis may offer key insights into how teachers come to view the discipline of science in more nuanced ways through their research participation.
Critical event narrative analysis contends that all people have lived experiences that shape the narratives they hold about their beliefs, attitudes, and understandings about the world and themselves (Webster & Mertova, 2007). Lived experiences are considered "critical" when they serve as an anchoring event by which beliefs, attitudes, and understandings of the world and oneself are upended (Webster & Mertova, 2007;Woods, 1993). Critical event analysis assumes that (a) what makes an event "critical" is the impact it has on the person to whom it has happened; (b) it is only in retrospect that the event can be seen as critical; (c) the more time that passes between the event and continued recall of the event by the experiencer, the more impactful the event has been; and (d) critical events almost always become "change events" in which some worldview or belief has been challenged and must be accommodated by the experiencer (Avraamidou, 2016;Webster & Mertova, 2007). With these assumptions in mind, drawing on such events as focal data in research can provide "valuable and insightful tools for getting at the core of what is important in that research" (Webster & Mertova, 2007, p. 71).
Critical events, while always relevant to the individual in their own story and meaning-making, may originate in contexts that are more collective in nature (Measor, 1985;Webster & Mertova, 2007;Woods, 1993). Critical events may be "extrinsic" in that they are produced by historical and political events at-large (e.g., the 1969 Apollo 11 lunar landing or the global COVID-19 pandemic); they may be "intrinsic" as related to and occurring within the natural or typical progression of a career or lived trajectory (e.g., entering one's first year of teaching or experiencing the process of retirement); they may also be entirely personal and only relevant to the individual (e.g., a particular family event or dealing with illness). Critical events might also be bounded to a particular time or context-such as experiences occurring within a professional development program as is the case with the current study. No matter the origin, however, a personally relevant critical event can only be identified by the experiencer who has lived the event and knows the lessons from which he or she has carried away from the experience.
We contend that by attending closely to in situ experiences that teachers recall and reflect upon as most salient in their RET participation, we gain insight into teachers' shifting and emergent disciplinary understandings about science as a result of RET participation and, by proxy, identify aspects of the RET program that might be powerful for shaping such understandings. With this in mind, using a critical event narrative analysis, we ask: How did one elementary teacher develop emergent disciplinary understandings about science in light of her firsthand participation in science research?
Methods
This naturalistic case study takes a critical event narrative approach (Avraamidou, 2016; Webster & Mertova, 2007; Woods, 1993) to examine the experiences and shifting views of science of one elementary teacher, Ava (all names are pseudonyms), during her 6-week RET experience.
Research Context
The RET professional development program (which began in 1999) is held at a national interdisciplinary laboratory (the Lab) with over 600 scientific faculty and staff from science-related fields that include engineering, physics, biochemistry, chemistry, and materials research. The Lab is made up of smaller lab groups composed of research scientists, technicians, postdocs, graduate students, and occasionally undergraduate students participating in internships or undergraduate research experiences. The RET hosted in the Lab is designed to provide K-12 teachers with an opportunity to participate in cutting-edge scientific research within these smaller lab groups with the hope that these experiences will influence their classroom instruction (Enderle et al., 2014;Southerland et al., 2016;Davidson & Hughes, 2018).
The RET program is designed so that pairs of teachers work with a scientist mentor as part of that mentor's lab group for 6 weeks, and these pairings are decided by the program director based on teachers' science fields of interests as noted in their application materials. The teachers are selected by the program director so that (a) the summer cohort will have a combination of teachers from each grade band-elementary, middle, and high school-and (b) at least half of the selected teachers work in schools that primarily serve underrepresented and historically marginalized populations.
During the 6-week RET program, teachers spend the majority of their participation directly involved in research-related activities with their mentor scientist and others-such as lab assistants, graduate students, and postdocs-affiliated with their lab. The teachers are typically given explicit roles and responsibilities within their mentor scientist's lab and are active participants in the ongoing research. Participation for some participants includes, for example, preparing samples for experimental testing, reading and discussing background research, running experiments and collecting data, assisting with data analysis, helping in the writing of lab reports, troubleshooting and solving problems with equipment, and participating in lab group meetings.
While the structure of the RET program is designed so that the majority of participant time is spent engaged in active research work within science laboratories, the aims and goals of that work and the specific practices, procedures, and discussions teachers will have around their work with their mentor scientists and others can vary greatly. The program director selects mentor scientists that are typically known in the Lab to be open, patient, knowledgeable, and have a willingness to include teachers in their lab groups as full participants. This careful selection on the part of the director also typically allows teachers to feel supported by their mentor scientists and to have a more positive experience in the program (Davidson & Hughes, 2018;Hughes et al., 2012).
In addition to their daily research work, the teachers participate in regularly scheduled weekly meetings focused on science pedagogy and the nature of their research work at the Lab. These meetings include sessions around the inclusion of engineering or argumentation practices in the science classroom; teachers' sharing of favorite science lessons for feedback; and "lab crawls" in which pairs of teachers give a brief presentation about their research and take the cohort on a tour of their research lab. These sessions provide the teachers an opportunity to learn about one another's experiences at the Lab and serve as a shared space for camaraderie and commiseration in regards to teachers' research efforts. The final week of the program is dedicated to the preparation of a poster for the culminating research presentation session that occurred on the last day of the RET. Occasionally, teachers participate in other optional and non-research-related events-such as lectures from guest speakers, impromptu meetings with members from other research groups, or informal social gatherings occurring outside of the Lab.
Study Participant
This study draws on data from the 2017 cycle of the RET program. Ten teachers participated in the RET summer program, four of whom were elementary teachers. From these four elementary teachers, we selected Ava as the focal participant for this study because of her unusual readiness to convey her epistemic insights about science. At the time of this study, Ava was a kindergarten teacher with an interest in science teaching and 18 years of teaching experience at the early elementary (K-3) level. She was working at a Title 1 school that predominantly served students from underrepresented populations who were also English language learners. Ava grew up in Puerto Rico, where she attended university; she moved to the mainland USA after earning her degree in elementary education. She identified as a native Spanish speaker and described English as her second language. During this research experience, Ava was paired with another elementary teacher, Carrie, in a materials science laboratory, and both teachers worked with Dr. Ji, a mentor scientist who had been at the Lab for more than 10 years at the time of this study.
Because this study required a particularly reflective and articulate informant in order to explore shifts in disciplinary understandings through the lens of critical events, Ava was an ideal choice as a focal participant given her ability to deeply reflect upon and clearly articulate her new understandings about science in light of her RET experiences. Ava served as a key informant (Patton, 2002) in that she allowed us access to her thinking and experiences in ways that other participants did not, including, for example, her lab partner, Carrie. This informed our choice of focusing our analysis on Ava's experiences instead of Carrie who was overall less descriptive and articulate about her experiences at the Lab and more reserved in her explanations and reflections during interviews.
Data Sources
Multiple interviews conducted by the first author during and after the RET program served as the primary data source for this study. In addition, the first author conducted multiple observations of Ava, in her laboratory setting, in her work with other teachers throughout the program, and in her classroom in the months immediately following her RET participation. These observations served as secondary data sources, providing essential contextual information that allowed the researcher to ask additional questions about Ava's experiences in the program.
Semi-structured Interviews
Five semi-structured interviews were conducted with Ava during the program. These interviews typically consisted of eight to ten guiding questions and focused on multiple aspects of the RET experience, including Ava's research project focus; her relationships with her mentor, teacher partner, and others at the Lab; her understandings about science; her feelings and emotions in the context of the research work; and other reflections on her experiences. Because the interviews took a semi-structured approach, the guiding questions for each interview were open in nature and included general questions about Ava's RET experiences (e.g., How would you describe your experiences so far in the RET program? What is your mentor scientist like? What is it like to work in your lab?) as well as her understandings of science as a discipline (e.g., What is science all about? What do you think are the goals of science? How would you describe what scientists do in their work?). Based on Ava's responses, the interviewer would then follow up with more specific questions. All questions were asked in "plain language," free from technical jargon such as "epistemology" (Patton, 2002). Each interview lasted between 15 and 40 min and was audio-recorded and transcribed. The first interview took place during the first 2 days of the RET program, after Ava had met her mentor scientist, Dr. Ji. Following this, the next four interviews took place at fairly regular intervals of 1.5 weeks throughout the RET.
In addition to the observations and semi-structured interviews during the RET, the first author interviewed Ava again approximately 4 months after Ava's RET participation. The interviews occurred in Ava's kindergarten classroom at the end of 2 consecutive school days and were focused on Ava's reflections on her RET experience, her classroom instructional practices, and her understandings about the discipline of science. These interviews were more conversational in nature because the interviews took place primarily after school hours when Ava was not constrained by time requirements and because of the comfort and rapport that had been built over time between Ava and the first author. Both interviews lasted approximately 45 min and were audio-recorded and transcribed.
Direct Observations and Field Notes
In order to better identify critical events and resulting shifts in Ava's disciplinary understandings about science, the first author shadowed Ava throughout the program, conducting real-time observations and audio data collection during her research work two to four times per week over 6 weeks. Each observation was audio-recorded and lasted between 20 min and two and a half hours. Field notes were also taken during each observation (Patton, 2002) to create rich descriptions of interesting moments of activity, interaction, and/or discussion within participants' lab groups, including direct quotations. As such, the field notes and direct observations were essential for informing the first author's interview questions for Ava. Additionally, these field notes and audio-recordings were used as a secondary data source for triangulation and, when possible, to develop richer understandings of the critical events that Ava described in interviews. In total, over 30 h of naturalistic observations and corresponding audio-recordings and field notes were collected.
Data Analysis
This work takes a critical event narrative approach to data analysis. The three events classified as "critical" in this study were identified based on the assumptive criteria of critical event analysis as described by Avraamidou (2016) and Webster and Mertova (2007). For the first phase of analysis, the first author read through all transcripts from the classroom interview data set to (1) identify references to particular events that might emerge as "critical" with further analysis and (2) note the ways in which Ava described aspects of her disciplinary understandings about science. The classroom interview set was chosen for the first pass because of its temporal distance from Ava's RET participation, since the classroom interviews occurred several months after the conclusion of the program. This is in line with critical event analysis which notes that the more time that has passed between an event and recall of the event, the more impactful it may be considered (Webster & Mertova, 2007).
Once events were identified within the classroom interview data set, the first author read through the rest of Ava's interview transcripts in chronological order (from the earliest, in week 1, to the most recent, in week 6) to cross-reference those mentions of events with Ava's discussions of them during the RET. To be considered a critical event for this study, the event in question needed repeated mention (at least four times) across both interview data sets (the classroom set and the RET set), and Ava must have shared a reflection, description, or comment about her understanding about the discipline of science in relation to the event.
From these criteria and the process of cross-referencing events between the two data sets, three events were identified as "critical": (1) "The Composite Image Puzzle," (2) "Lessons Learned from Dr. LG," and (3) "Meeting Real and Accessible People." These events, which will be discussed further in the findings, stood out because of their saliency to Ava both during and after the RET and their connection to Ava's shifts in her understandings about science. Once these events were identified, we carefully examined how Ava described her understandings about science in relation to these events to develop an account of how these events acted as catalysts for shifts in Ava's understandings. Evidence of shifts was identified through discursive indicators that marked some change in ideas (e.g., "I never thought of it like that before") or presented a juxtaposition of ideas that may have been held in competition with one another (e.g., "he is a genius" and "he is just a person" when talking about a particular scientist). Such instances were cross-referenced in both interview data sets. When possible, field notes and audio-recordings were examined to gain more insight into the context, setting, and other participants in the event.
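To make the cross-referencing criterion concrete, the following minimal sketch (ours, offered purely as an illustration; the event labels and mention counts are hypothetical, and the additional requirement that Ava reflected on science in relation to the event remains a qualitative judgment applied by hand) expresses the "at least four mentions across both data sets" rule:

from collections import Counter

def candidate_critical_events(ret_mentions, classroom_mentions, min_total=4):
    # An event qualifies if it appears in both interview data sets and is
    # mentioned at least min_total times across the two sets combined.
    totals = Counter(ret_mentions) + Counter(classroom_mentions)
    return {event for event, n in totals.items()
            if n >= min_total
            and event in ret_mentions and event in classroom_mentions}

# Hypothetical coded mentions from the two transcript sets:
ret = ["puzzle", "puzzle", "puzzle", "dr_lg", "meeting_people", "meeting_people"]
classroom = ["puzzle", "dr_lg", "dr_lg", "meeting_people", "meeting_people"]
print(candidate_critical_events(ret, classroom))  # {'puzzle', 'meeting_people'}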
Trustworthiness and the Position of the Researcher
As a member of the RET program support staff, the first author held an "insider" position as a researcher in this study. Having worked as support staff for this particular RET program for more than 3 years, she was familiar with the Lab community and context, as well as the programmatic structure and elements of the program. The first author was responsible for collecting data, conducting interviews and focus groups with participants, leading participants in pedagogy workshops, and acting as assistant to the program director. The first author had no directorial or supervisory role toward teacher cohorts but instead was positioned as a "resource person" and "curious researcher" by other staff and scientists in the Lab, a distinction that was made explicit to teachers each year.
The first author and Ava established a friendly association early on, which was maintained during all points of data collection. Throughout the 6-week RET, the first author was a constant presence for Ava and other participants; she was present at orientation and afternoon sessions, moved in and out of research laboratories, attended several social gatherings, and made herself available to talk with teacher participants whenever questions or concerns arose during their experience. This consistency of visibility and availability allowed her to develop rapport and establish trust with those in the RET cohorts, overall, and to continue to develop a safe working relationship with Ava, specifically. This familiarity and trust likely furthered Ava's willingness to share openly and honestly about her experiences.
From this perspective, the relationship between the first author and Ava can be considered to be an asset to this work as it has allowed for a depth of access to Ava's thinking, experiences, and reflections. However, such closeness to the data and the participant may create bias in interpretation if not held in check. Therefore, various measures were taken to ensure trustworthiness of the analysis (Guba, 1981;Shenton, 2004). First, the primary interview data sources were cross-referenced and triangulated with observational data and field notes to ensure consistency across data sources. Additionally, analytical memos and raw data were shared with the co-authors as external researchers not affiliated with the Lab in order to discuss and negotiate possible alternative interpretations regarding the findings and in turn reduce bias. Most importantly, Ava has had the opportunity to review the findings and found the interpretations of her experiences and disciplinary understandings about science to be accurate.
Findings
Based on an analysis of Ava's reflections and descriptions of her research experience at the Lab, three critical events were identified as having a lasting impact on Ava's disciplinary understandings about science: (1) "The Composite Image Puzzle," (2) "Lessons Learned from Dr. LG," and (3) "Meeting Real and Accessible People." Close examination of Ava's reflections on these events allowed the research team to identify shifts in her disciplinary understandings of science. Below, we begin with a brief description of each critical event; then we discuss the shifts in Ava's understandings of science as described by her in the post-RET interviews in light of each event. Afterwards, we examine instances within her 6 weeks at the RET in which Ava referred to each event and her understanding of science as related to it. Lastly, we explore how shifts in Ava's disciplinary understandings about science in light of each critical event were consequential to her orientations toward her teaching.
Critical Event 1: the Composite Image Puzzle
During her time in Dr. Ji's laboratory, Ava was involved in his ongoing research work around superconducting materials. Specifically, Dr. Ji and his research team were investigating how different cooling rates would affect the structural integrity of a certain type of high-temperature superconducting metal that had been bundled in a wire formation and coated with a silver magnesium sheath. After heating samples of the wire to more than 850 °C, Dr. Ji's team would slowly cool the samples at varying rates in order to determine how the cooling might influence the structure of the wire; once the wires were cooled, they were cross-sectioned and examined under a high-powered scanning electron microscope (SEM) for analysis.
While this was the "big picture" of Ava's research context, her participation involved a number of activities, including preparing wire samples for the furnace and the SEM, collecting data from experimental trials in the form of "checking in on" the furnace and the slow-cool rates from time to time, using the SEM with guidance from Dr. Ji to take images of the wires, and using computer programs to run computational data analysis and to create composite images of wire samples taken from the SEM. This last activity provides the context for this critical event.
In this event, which occurred early on in the RET program (week 2, day 2), Ava and two other members of her research team-her RET teacher peer, Carrie, and a graduate student who was new to the Lab, Kevin-struggled to create a single composite microscope image of a wire sample cross-section from many individual image files using a photo imaging computer program. This team spent more than 40 min engaged in a trial-and-error style troubleshooting approach to solving the problem but was unsuccessful at each pass. While Dr. Ji had given them an overview of written procedural steps and had previously demonstrated creating a composite image on the software, all three were stumped by the problem. They could not figure out why the image would emerge jumbled rather than as a single composite at each attempt. Eventually, after much frustration and many failed attempts, Dr. Ji returned to the Lab and offered the missing information that seemed to solve the composite issue.
From a researcher perspective, this episode did not seem particularly interesting or important when observed in real time. However, to Ava this episode became critical to her understanding of science in that she repeatedly called upon her memories of this experience to describe her realization that science is replete with puzzles and complexities and to describe the importance of scientists' perseverance through trials and errors. These two realizations were evidenced not only in Ava's RET interviews with the first author and her exchanges with others during the RET, but also when she recalled this event more than 3 months after the RET program during the researcher's visit to her classroom.
"Puzzles" and Complexities as Inherent to Scientific Work
For Ava, the Composite Image Puzzle event helped her recognize that puzzles-or problems to figure out and solve-are an inherent part of doing science. In her reflective interviews post-RET, she thought back on her firsthand experiences with what she called "a puzzle" in light of this event: When we worked on the sample image-you remember?-The computer and all the separate pictures together, Oh my God! It was a puzzle!--and we had so, so, so much trouble, I still can't believe how crazy that was to me -we had to keep trying and try a different way and try another way and it didn't work at all. Then later on when Dr. Ji came, he just did it. [Semi-structured interview, classroom visit day 1] Ava's recollections portray the sense of frustration and vexation that she experienced as she and her team attempted to resolve this "puzzle" and her firsthand experience of the complexity of scientists' work, even with something seemingly as simple as creating an image.
During the RET itself, there was evidence of the emergence of such understandings as Ava referenced this critical event later in the week of its occurrence, noting her surprise at the amount of effort and work needed to create the composite image of the sample. In this reflection, she recognized that even though sample preparation might seem "simple" and procedural, it is important, difficult work: I tell you, I could not believe how hard we had to think to do something so simplewell, it's not simple -we didn't know how to do it! But you have to practice and try and try and try, you have to see it as a challenge and you have to-I don't knoweven though it's just a photo that we're making-it's something that will help Dr. Ji and others understand something new about the [sample material]. I never thought about [science] like that before -so many people have to do work that might look simple, but it's not really and it has to get done for something big to happen or be discovered or something like that. [Semi-structured interview, week 2-day 4] Indeed, in order for scientists to create new knowledge, data must be translated from raw form into something useful for analysis and-despite the potential "simplicity" of this task in some contexts such as this-Ava's reflection highlights that such procedural work is oftentimes both critical and complex for scientists. From this standpoint, Ava and the research team were engaging in this important pre-analysis work in their attempts to create the composite image. Having this firsthand experience allowed Ava to consider the complex and puzzling nature of science in ways that she had "never thought about" before.
Perseverance Through Mistakes and Trial-and-Error
Related to Ava's understanding of "puzzles" and complexities as an everyday aspect of science, another important shift in Ava's disciplinary understandings comes in the form of an emergent recognition of the necessity for perseverance through mistakes and trial-and-error approaches to problem-solving in science. In recalling the "Composite Image Puzzle" event, Ava described what her mentor said about mistakes: that they happen "all the time." As Ava reflected: After he came into the Lab and helped us complete the composite image, [Dr. Ji] said, 'I had to learn and try, too, and make many mistakes.' He said he makes mistakes all the time. And I was like, YOU?! Wow. [Semi-structured interview, classroom visit day 1] This move on the part of Dr. Ji to normalize and persevere through mistakes and trial-and-error was taken up by Ava in an impactful manner. This event shifted not only her understanding of science, but also the ways in which she began discussing science with her students in her own classroom: So now, I tell my students, too, mistakes are okay. We all make mistakes, even teachers, even scientists. That's part of learning, I think. Don't you think? You have to keep trying. [Semi-structured interview, classroom visit day 1] That she felt a new compulsion to share with her students that "mistakes are okay" gives a glimpse of how this shift in Ava's understanding of science influenced her classroom instruction as she became more intentional about normalizing mistakes as part of doing science.
Critical Event 2: Lessons Learned From Dr. LG
Another event identified as critical for Ava's understandings about science occurred later in the RET program (week 5, day 2) when a prominent researcher in the physics community and a lead scientist at the Lab, Dr. LG, gave a talk for visiting undergraduate and graduate researchers at the Lab. In addition to her presentation about her personal work in physics and a brief overview of the history of the development of superconductivity theory, Dr. LG also discussed the importance of diversity in science, how global and local political and social structures can influence scientific research, and how understandings of new ideas-particularly in new fields of study-take a long time within the scientific community. The RET participants were not expected to attend this talk; however, Ava had heard about the talk from a visiting undergraduate and decided of her own volition to attend. When asked about "a favorite memory" from her RET experience during the first author's classroom visit, this event immediately came to mind for Ava. The resulting shifts in disciplinary understandings that she described as tied to this critical event include her realization that science occurs within complex global and sociopolitical contexts and a recognition of the importance of diversity of both people and perspectives in scientific endeavors, as we discuss below.
Science Occurs Within Global and Sociopolitical Contexts
As part of her talk, Dr. LG described how the production and supply chain of liquid helium-a substance critical to maintaining the extremely low temperatures necessary for superconductive states in some materials-had been interrupted in some Middle Eastern countries because of political disputes in the region. The helium shortage had serious consequences for nuclear magnetic resonance and superconductor research worldwide. Referencing this part of Dr. LG's talk, Ava pointed out that she had new understandings of how science and geopolitics are related. As she shared that many of her new students had only recently come to the USA from Puerto Rico because of the extreme devastation caused by Hurricane Maria in 2017, Ava noted: While the connections between hurricanes, climate change, and liquid helium production may seem tenuous, Ava's reflection on the related nature of scientific work and the global and political structures that influence such work comes through soundly as she considers both Dr. LG's commentary on stalled progress in superconducting science due to political strife and the circumstances of her displaced students from Puerto Rico who have experienced a catastrophic weather event. Importantly, Ava's reflection points to a shift in understanding as she notes seeing science "differently than before" in that it is "connected" within global, political, and societal structures. Moreover, Ava recognized how these contexts can influence the livelihoods of people at the individual level, whether scientists attempting to conduct research or students whose families have been affected by politicians as they decide whether to "believe" or accept evidence for controversial issues such as climate change.
Importance of Diversity of People and Perspectives in Science
Another shift in Ava's disciplinary understandings of science resulting from Dr. LG's talk was her emergent understanding that diversity is essential in science-not only in terms of diversity of people and their cultural backgrounds, but also in terms of the diversity of perspectives that serve to strengthen the construction of explanations through critique and argumentation in science. In reflecting on Dr. LG's statements about diversity in science, Ava stated: In this comment, Ava reflects on the importance of diversity in science, not only from an equity lens, but also from an epistemological standpoint of strengthening knowledge construction through diverse ideas and approaches. She also notes that differences in "personality" or perspective can shape the scope and interpretations of research in science. This same sentiment came up for Ava in an interview during the RET where she expanded on her lessons learned from Dr. LG's talk in connection to her students' experiences in the science classroom: At the beginning [of RET] in orientation, [the RET director] said, 'At this Lab, there are people from all over the world working here' and to me, that's fascinating!! Because that's science. Like Dr. LG was saying, science is global and you have to have an open mind to others' ideas. Even in your classroom, you have to have an open mind. My classroom is always diverse-every year-Spanish speakers, kids from Haitian communities, a lot of my students don't have a lot and they don't always know how to share their ideas-but they have good ideas and they learn how to work together. It's like [Dr. LG] was saying, science is diverse. I didn't think about it before in that way, but she's right-People from many parts of the world are here [at the Lab] contributing to many discoveries together. [Semi-structured interview, week 5-day 4] Here, Ava describes the need to have "an open mind" to the ideas of others and draws on Dr. LG's talk to consider the students in her own classroom and their positions as diverse learners. Her shifting understanding about science regarding the importance of many people from "all over the world" working together to further research parallels the ways in which Ava comes to view the learners in her classroom as capable thinkers with "good ideas" who are able to collaboratively work together even though they are quite different from one another.
Critical Event 3: Meeting Scientists at the Lab
One event that Ava continually reflected upon during post-RET interviews was in reality a collection of smaller critical events rather than a singularity: meeting individual scientists and getting to know them. We chose to present this as a composite, singular critical event because of the ways in which Ava consistently referred to these encounters as if they were one event, often referencing multiple people in the same reflective moment to make illustrative points about how important these meetings were to her. Shifts in Ava's disciplinary understandings about science that occurred as a result of meeting scientists at the Lab include a realization that scientists are "real and accessible" people and that scientists were once students in K-12 classrooms.
Scientists as "Real and Accessible" People
When asked during the first author's classroom visit to share her motivations for participating in the RET program, Ava noted: You know me, I love people. So, to me, science is people, right?-trying to figure things out about-about the world, about nature, the physics. At least I think. That was part of my purpose in coming [to RET]-to meet scientists. To meet them as real and accessible people. Because, you know, my days in school-every day is structured and I have to do things a certain way at certain times, but at the Lab […] I got to take every opportunity that appeared to meet people and to know about all the things going on at the Lab and what the scientists do and, you know, who they are. I really wanted to know who they are. [Semi-structured interview, classroom visit day 2] Ava's description of scientists as "real and accessible" demonstrates a shift in her thinking from previous descriptions of scientists early on in the RET program. During the first day of the RET after meeting her mentor, Dr. Ji, and talking to him for the first time about the research work for the summer, Ava shared this aside with the first author: I think Dr. Ji is so sweet, but you can just tell he is a genius and it's all about the science. I don't think he knows how to explain to people that don't already know about the research. But scientists are just like that, no? [Informal conversation, week 1-day 1] In this excerpt, there is a sharp contrast between Ava's initial generalization that "scientists are just" genius-like and do not know how to talk to laypeople about their work and her later views of scientists as "real and accessible." This shift began to take form early on in her RET experience, starting with her developing relationship with Dr. Ji. When asked in one of the interviews during the first week of the RET to describe what it is like to work with her mentor, Ava shared this reflection: He's so helpful and patient, but he's a genius. He's a super smart person -and not just with science. We were talking about politics in my country [Puerto Rico] and he knew everything about the [debt] crisis-I was shocked because unless you are a local-I don't know-there are things people don't know about and he knew about it! He surprised me. He's so smart about his work, but he knows about what's going on in the world, too and that surprised me a lot. I don't know why-I think you just think it's going to be all about the science work only with someone so brilliant.
[Semi-structured interview, week 1-day 4] At first, Ava was caught off guard by Dr. Ji's knowledge of the current events in Puerto Rico because of her expectation that scientists' interests and knowledge might be "about the science work only." But Ava's notion that someone like Dr. Ji would only be interested in his research was challenged by their discussion of the political and economic situations of Puerto Rico-a topic of specific importance to Ava given her strong cultural and familial ties.
In another reflection on her encounters with scientists, Ava describes two younger graduate students she met at the Lab-Gladiola and John: I'm very fascinated by people and their lives. For like, Gladiola-she is a young African woman that-she's a scientist. She's doing her [graduate] studies here at the Lab and she's been to Russia and speaks Russian and she has so many stories and experiences from Russia and Nigeria and now here. And John-he's from Wisconsin I think-he's been showing me how to use the machines and the different polishing with the [fine-grained polishing] paper. He is trying to figure out if he likes science enough to stay in science, but he also plays music and we talk about his dog and he's a person. [Semi-structured interview, week 3-day 5] Here, Ava is expressing her developing understanding that scientists' lived experiences include not only their academic or professional research endeavors but also their personal lives and interests. She describes Gladiola and John in ways that mark their belonging to the scientific community (as a graduate student and scientist, as a technician using the sample polishing machinery) and also highlights her knowledge of their interests and experiences that may be outside of science, such as speaking multiple languages and traveling internationally, playing music, and having a dog.
Echoing a similar sentiment, when asked to describe what she would likely take away from her participation in the program toward the end of her RET experience, Ava noted: The experience [of RET participation] has been a great experience for me. It will last my whole life. I'm not afraid of science or scientific people. I used to be, I think-you know, intimidated or really afraid because they're so, so smart-brilliant. But meeting scientists and getting to know them-like Gladiola and John, Dr. Ji-about their families and pets and kids, their countries, what they like to do-there's so much more than science. I think I used to think, oh scientists are just-you know, like only geniuses. Do you know what I mean? But they're just people, too. [Semi-structured interview, week 6-day 2] Encountering scientists "as people" demystified them for Ava and, in turn, shaped her views of who scientists are and who they can be: that they are more than "just" geniuses, but are also "real and accessible" people with varied knowledge, interests, and experiences.
Scientists Were Once K-12 Students
Related to this shift in understanding that scientists are people with diverse backgrounds, interests, and experiences, Ava had an eye-opening realization relating to her getting to know scientists at the Lab. For Ava, coming to see scientists as "real people" allowed her to understand that all scientists have not always been scientists-rather they were also K-12 students at one time before pursuing their research trajectories: I just loved meeting all the scientific people. Because I love people. Like Gladiola-Remember? And Kevin and-do you remember John? And Charlie-Charlie was the one from Colombia in the lab and, oh my God, I remember he said to me, 'my third-grade teacher made me love science.' I learned something-That really stuck with me because my third-grade teacher in Puerto Rico-she loved teaching us science. And it made me think-everyone I met at the Lab. Every scientist-they had teachers. They had high school, middle school, third grade-and kindergarten too, right? [Semi-structured interview, classroom visit day 2] She continued: You know, other teachers [at this school], they don't really teach science every day. It's not a priority [within the school or the team]. But I teach it every day. Every single day. Because, you know what? Because John and Charlie. Because Gladiola, Kevin, Dr. Ji-they all had kindergarten teachers-you know, they learned to love science somewhere maybe-maybe not kindergarten but still. It's my responsibility to share with my students-to teach science. To help them love it. [My students] could do it-they might be like John or Charlie someday. I take that so seriously. Very seriously. [Semi-structured interview, classroom visit day 2] That Ava describes feeling the "responsibility to teach science" "every single day" in relation to her encounters with scientists at the Lab is powerful. For Ava, her realization that scientists were once K-12 students had a profound impact on her prioritizing science in her teaching and compelled her to view her own students as potential scientists of the future.
Conclusion and Implications
In this research, we set out to explore shifts in one elementary teacher's understandings about science in light of critical events that occurred during her participation in a six-week research-intensive professional development program. We identified three critical events-The Composite Image Puzzle, Lessons Learned from Dr. LG, and Meeting Real and Accessible People-that were particularly salient in shifting Ava's disciplinary understandings about science. While other events-both from her RET experience and otherwise-may have had important roles in influencing Ava's views of science as a discipline, we chose to focus on these particular events because of their enduring importance to Ava even several months after her RET participation and because of the ways in which she carefully and clearly articulated shifts in her understandings about science in light of these events.
To our knowledge, this is the first study to closely examine shifts in disciplinary understandings about science through the lens of critical events in an RET program. Ava herself did much of the reflective work that marked these events as critical as she recalled these experiences and connected them to her understandings of science without overly specific prompting during interviews. By taking Ava's perspective seriously and carefully examining the ways in which she draws upon these critical events, we were able to understand how Ava's RET experiences engendered nuanced understandings about specific aspects of science, scientists, and the work that they do. In what follows, we discuss the noted shifts in Ava's understanding about science as connected to the critical events and their implications for her science instruction. We also discuss the contributions of this work in terms of methodological and design considerations for professional development programs such as RET or those that position teachers as "doers" of science.
Discussion
A primary goal of the RET program at the center of this study is to support teachers to develop more robust understandings of science through immersive and collaborative research participation with the aim of influencing their classroom instruction in productive ways. To this end, we argue that taking a critical event analysis approach allowed us to see how aspects of this goal were met for Ava in ways that perhaps a more traditional methodological approach would not have captured. Related to each of the critical events described in the findings, Ava came to experience shifts in her understandings about science, shifts toward understandings that resonate with those held by the fields of history, philosophy, and sociology of science, as well as science education. While her insights are not new to these fields, they were new to Ava and held important implications for how she came to view aspects of the discipline and how she came to orient to her students as capable thinkers and doers of science as we discuss in this section.
In light of the "Composite Image Puzzle" event, Ava began to understand the prevalence of puzzles and mistakes in scientific research, as well as the need for scientists to develop a stance of perseverance as they navigate uncertainty in their work. Certainly, procedural error, flaws in experimental design, equipment difficulties, and ambiguous or confounding anomalies in data all might create opportunities for scientists to wrestle with "a puzzle" in their research in similar ways that Ava experienced as she and her colleagues attempted and repeatedly failed to create the composite image (Allchin, 2012;García-Carmona & Acevedo-Díaz, 2018). These opportunities to problem-solve in science-when framed in productive ways-might be seen as chances to develop perseverance and tenacity in the work (Pickering, 1995) for scientists in their endeavors, for teachers in their research participation, and in turn, for students in their classroom science learning (Davidson et al., 2020;Manz & Suarez, 2018).
While Ava and her team were unsuccessful in solving the puzzle of the composite image, their efforts were still acknowledged, their mistakes and frustration normalized, and their perseverance praised by Dr. Ji, who shared his own stories of struggle when ambiguities in his work had created puzzling conundrums to push through. Dr. Ji's work to normalize aspects of the "puzzle" event echoes the findings of Hughes and colleagues (Hughes et al., 2012) who note that teachers who work with supportive, hands-on mentors are more likely to come away with nuanced understandings about science.
Learning to normalize ambiguity, frustration, and perseverance as essential aspects of doing science became important to the ways in which Ava oriented to her classroom teaching. After the RET program, Ava described how she wants to help her students see "mistakes" as part of learning something new. Teachers may be concerned about allowing students to experience uncertainty in science or to grapple with "puzzles" that arise in investigations, yet these are part and parcel of the work of scientists. As such, students need opportunities to encounter them in the science classroom.
In light of the "Lessons Learned from Dr.
LG" critical event, Ava came to understand that the scientific community is situated within sociopolitical contexts that influence how scientific research is conducted (Dagher & Erduran, 2016;Longino, 2002;Pickering, 1992) and came to see the importance of diversity for the development of science. Global and sociopolitical contextual factors are important to take into consideration when accounting for how scientists come to decide the lines of research to be pursued and how such research is carried out. That Ava connects these realizations to her students' experiences through the lens of politics and climate change suggests her broadening understandings about science to include the notion that individuals are impacted by the ways in which political structures take up or reject scientific research.
Along with these lessons, Ava also recognized that the scientific community can be-and more importantly, should be-diverse, and such diversity brings with it different perspectives. Ava relates this idea of diversity to her goal of having her students work together and share ideas in the classroom. However, more than this, the diversity of perspectives is a critical aspect of scientific research if scientists are to hold themselves to the regulatory ideal of "strong objectivity" in scientific knowledge production (Harding, 1992;Keller, 1992;Longino, 2002). As scientists are-to quote Ava-"just people," they bring with them their personal backgrounds, experiences, cultures, race, gender, sexual orientations, and any number of other factors associated with one's identity, and these serve to shape the lenses with which they view and interpret their research. Many perspectives on the same data sets may yield different interpretations of said data, which lead to discussion and critique within the community and, in turn, allow for more robust understandings through the social construction of knowledge (Harding, 1992;Longino, 2002). In this way, diversity is not only an important issue of equity and access in science, but also an epistemological imperative. As Ava comes to understand this notion, she begins to orient to her kindergarten students as collaborative and capable thinkers in science who bring a diversity of experiences, knowledge, and cultural backgrounds to the classroom. In the same way that science in a laboratory or field setting is enriched by the diversity of perspectives and experiences, science learning in the classroom is enriched by students' diverse resources.
Related to this notion of diversity in science, Ava's encounters at the Lab with multiple scientists from varied backgrounds allowed her to shift her generalized view of scientists as brilliant geniuses unable to communicate their work to non-scientists toward seeing scientists as relatable people with diverse interests and experiences within and outside of science. Indeed, scientists are people with a full cadre of experiences and viewpoints about the world, and-as Ava observes-scientists were once young students in a science classroom. To her, this has important implications for her own classroom practice as she views her own kindergarten students as potential future scientists, a view that compels her to teach science every day even when others around her do not have this same priority.
Research suggests that opportunities to learn science are often overlooked in early childhood and elementary science classrooms, yet these contexts are also often the first opportunities students will have to cultivate positive attitudes toward science and to have their curiosities about the natural world piqued and affirmed (Banilower et al., 2013;Czerniak & Mentzer, 2013;Gopnik, 2012;Grinell & Rabin, 2017;Mantzicopoulos et al., 2008). From this view, it is no small thing that Ava has taken seriously the responsibility of teaching her students science every day and internalized this responsibility as a result of getting to know scientists at the Lab.
Methodological and Design Implications for Teacher Education Programs
This work is predicated on the notion that an essential aspect of supporting teacher learning is taking teachers' own experiences and meaning-making seriously. It is important to recognize that teachers-as individuals with their own prior experiences, worldviews, and predispositions-enter professional development spaces such as RET with their whole selves in tow and not just their "teaching selves" (Blanchard et al., 2009). From this lens, we acknowledge that Ava's typically positive attitude and upbeat demeanor, her self-identification as someone who "loves people," and her willingness to attend optional activities such as Dr. LG's lecture are a few examples of inclinations that likely had a noteworthy impact in determining what Ava would participate in and do during the RET, and in turn, what she would come to see as salient to her learning about science. For another teacher, it is likely that a critical event analysis would highlight very different critical experiences and, therefore, different lessons learned.
In light of this consideration, we argue that an important contribution of this work is methodological in nature. This study is exploratory in terms of examining what a critical event analysis methodology might afford researchers to understand about teacher experiences in professional development contexts such as RET; taking a critical event analysis approach allowed us to see one teacher's learning in ways that have not been previously captured. We argue that it is necessary to attend to teachers' interpretations of critical experiences and events that have personal relevance to them in order to better understand and support their learning in professional development experiences-particularly when the aims of such experiences are to challenge teachers' assumptions or understandings of the discipline. Accordingly, we examined the shifts in disciplinary understandings that became salient for one elementary teacher through the lens of three critical events to which these understandings were deeply tied. However, while the specific critical events identified as most salient for Ava's disciplinary understandings were individual and personal to her, the kinds of experiences that shaped her understandings have potential to be more universal and accessible to all teachers through RET participation with thoughtful planning on the part of professional development programmers.
For example, including opportunities for teachers in RET to experience "puzzles" in scientific work and to wrestle with uncertainty may be an important step in helping teachers to understand that it is normal for practitioners of science to experience vexation and frustration in research (Davidson et al., 2020). Teachers who understand these aspects of the discipline of science may be in a better position to plan for and leverage moments of uncertainty as they support students' science learning in the classroom. As such, developers of teacher research experiences might explicitly design and plan for opportunities that allow teachers to experience, recognize, and discuss productive struggle and perseverance as aspects of science and, in turn, support teachers in translating these ideas toward their classroom instructional practices (Ford, 2008;García-Carmona & Acevedo-Díaz, 2018;Kelly, 2018;Manz, 2015;Manz & Suarez, 2018). As mentioned before, Ava's feelings of frustration and her "puzzlement" in light of the "puzzle" event were normalized by Dr. Ji which allowed her to internalize those experiences as part and parcel of doing science. It is then important to examine how scientist mentors in research experiences support teachers to normalize struggle in science (Hughes et al., 2012) and to design ways within RET programs to help teachers understand the affective and epistemic learning that takes place when uncertainty arises.
Additionally, it is important for researchers and program developers to plan time and space for teachers to interact with scientists on a regular basis. Indeed, learning to see scientists as more than geniuses but as "real people" may not have been so powerful for Ava had she only been interacting with Dr. Ji throughout her RET research experience; instead, she interacted with a wide range of scientists. Through this, Ava came to understand scientists as "real and accessible" people with diverse experiences and backgrounds that they leverage to strengthen their research because of the opportunities for social and professional interactions with many different people working across the larger Lab setting. These interactions, which became a critical part of Ava's experience and learning, may not have happened as frequently or been held with such importance had she not been granted agency to interact with others in the Lab. The Author(s) blinded for review (2016c) have noted the critical importance of social interaction in RET for teachers in terms of changing their beliefs, practices, and knowledge around science and science teaching. Moreover, the development of relationships with multiple scientists and her choice to attend Dr. LG's lecture afforded Ava opportunities to reflect on her own students. This allowed her to come to view them as potential future scientists with diverse backgrounds and experiences who are capable of thoughtfully reasoning and collaborating with one another in the science classroom.
While not directly related to the programmatic features of the RET, the interviews gave Ava explicit opportunities to reflect upon, connect, and unpack her laboratory research and other experiences at the Lab. From these explicit reflections, we were able to identify emergent and shifting views of Ava's understandings about science that seemed more aligned with a vision of science as outlined in current science education reform efforts (NGSS Lead States, 2013;NRC, 2012). Approaching this work from a critical event lens allowed us to (a) capture aspects of the RET experiences that held particular importance for Ava and (b) examine how these events shaped her understanding of science. This was made possible through the various opportunities that Ava had to share and reflect on her own RET experiences. With this in mind, we argue that it is critical for researchers and program developers to engage teachers in explicit and reflective discussions on their experiences within scientific research and to set aside time during professional development to build trust and rapport with participants and to allow for these kinds of rich discussions. Such opportunities may support teachers to develop nuanced and productive understandings about science as a discipline-in terms of both their understandings related to science epistemology and understandings about the nature of scientists and their practice.
In sum, this work suggests that it is necessary for researchers, program developers, and other stakeholders to take seriously the perspectives of teachers in light of their research experiences. To do so, it is important to provide teachers spaces for reflecting on and discussing their disciplinary understandings, to plan for and leverage opportunities for uncertainty in productive ways, to allow agency and choice over aspects of teachers' program participation, and to provide opportunities for social interaction between teachers and scientists. These considerations might allow teachers to develop refined understandings about science, which in turn could productively translate into their teaching and learning in the science classroom.
Limitations and Future Research
This work focused solely on one teacher's developing views of science through the lens of critical events that occurred during her participation in research. However, there are several limitations to the study. While only three events were identified from the data sets as "critical" (Avraamidou, 2016;Webster & Mertova, 2007), it is likely that other events that influenced Ava's understandings about science were not captured in this study. Additionally, it is worth noting that some activities-Dr. LG's guest lecture, for example-were quite specific in nature and may not be a mainstay of every RET iteration. This means that some experiences and opportunities for teacher learning in programs such as RET may vary from year-to-year. Future research endeavors should consider the potential of consecutive years of participation in research-based professional development such as RET, as well as how experiences and opportunities for learning may differ across years, and the ways in which teachers' disciplinary understandings begin to shift and take shape over time through multiple interactions within the community of science as peripheral novice participants (Davidson & Hughes, 2018).
Relatedly, we recognize that critical event analysis is only one way to approach examining teachers' understandings and learning in a professional development context such as RET, and this approach may not capture all lessons learned or shifts in understandings for participants. Indeed, there may be new or shifting ideas that teachers hold as a result of their RET participation that are less obvious, less salient, or invisible to researchers through the lens of critical events but nonetheless have an impact on teacher attitudes, understandings, and classroom practice. Additionally, it is possible that within cases, participants might describe shifts in their understandings that are not specifically tied to particular events or experiences-that is, for some participants, a critical event analysis would be less revealing than taking another methodological approach. With this said, we maintain that the shifts in Ava's disciplinary understandings of science articulated in the findings resonate with the critical event analysis approach and are clear and cogent within the data. Nonetheless, it is important for the field to continue the work of examining teacher change in professional development such as RET and to approach this examination from a multitude of directions-including that of critical event analysis, which-in Ava's case-allowed for the illumination of novel links between her experiences and her learning. Likewise, while this study takes a single-case study approach to understand one teacher's learning in relation to particular events that were important to her, we see potential for more expansive approaches to the use of critical event analysis for understanding how groups of teachers may think, feel, and learn in relation to collectively experienced events in professional learning settings.
Another consideration of this work recognizes that some of the shifts in Ava's disciplinary understandings may have resulted from the reflective work that she engaged in during interviews as a result of the researcher's questioning and pressing for reflection instead of directly emerging from specific research experiences. However, we see this as less of a limitation and more of an inherent aspect of qualitative research work. In fact, as we note in the implications section, we also consider this aspect of our findings as motivating the need to embed such reflective opportunities within RET and other professional development programs.
Finally, this present study does not explicitly examine the ways in which Ava's disciplinary understandings about science manifested in her classroom instructional practice in action, specifically in terms of her instructional planning and enactment. Instead, we touch upon the ways in which Ava described connections between her disciplinary understandings of science and her orientations toward her students and her classroom, which can be an important first step in shaping instructional practice. It is essential that future research examines how teachers' disciplinary understandings about science as a result of science research experiences may shape classroom practices in action if the field is to more fully understand the role of RETs and critical events as catalysts for teacher learning and, in turn, science teaching.
Author Contribution All authors contributed to the study conception and design. Data collection and initial analysis were performed by Shannon G. Davidson; subsequent rounds of analysis and refinement were performed by all authors. The first draft of the manuscript was written by Shannon G. Davidson, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding Aspects of this study were made possible by the National Science Foundation Division of Materials Research, Grant/Award Numbers DMR 1157490 and 1644779, and the State of Florida.
Availability of Data and Material Not applicable.
Code Availability Not applicable.
Ethics Approval
The authors obtained approval for this study from the Florida State University Office of Human Subjects Protection and the Institutional Review Board.
Consent to Participate
Informed consent was obtained from all participants involved in this study in accordance with the Florida State University Office of Human Subjects Protection and the Institutional Review Board.
Consent for Publication
Pending acceptance of the manuscript, the authors give consent for publication in Science & Education.
Conflict of Interest
The authors declare that they have no conflict of interest.
"year": 2021,
"sha1": "d10cec87876b561ce94a59d451072580fcafc7cf",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11191-021-00276-1.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "d10cec87876b561ce94a59d451072580fcafc7cf",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Finite matrices are complete for (dagger-)hypergraph categories
Hypergraph categories are symmetric monoidal categories where each object is equipped with a special commutative Frobenius algebra (SCFA). Dagger-hypergraph categories are the same, but with dagger-symmetric monoidal categories and dagger-SCFAs. In this paper, we show that finite matrices over a field K of characteristic 0 are complete for hypergraph categories, and that finite matrices where K has a non-trivial involution are complete for dagger-hypergraph categories.
Introduction
Multigraph categories enrich the language of traced symmetric monoidal or compact closed categories by allowing many inputs and outputs of a morphism to be connected together. We can represent this in a minimal, algebraic way by equipping each object in the category with a special commutative Frobenius algebra (SCFA). Intuitively, SCFAs endow an object with the ability to 'split', 'merge', 'initialise', and 'terminate' a wire. Furthermore, they have extremely well-behaved normal forms, in that any two connected diagrams with the same inputs and outputs are equal. Thus, in addition to being able to interpret normal string diagrams, we can naturally interpret their multigraph variations, in which a single 'wire' may connect many inputs and outputs. These structures play an important role in categories whose morphisms can be written as matrices over unital semirings, though they were perhaps overlooked for quite some time due to a certain failure of naturality, which concretely manifests itself as a 'basis dependence'. However, in 2006 Coecke, Pavlovic, and Vicary pointed out that this basis-dependence is no accident, but rather that SCFAs characterise bases in categories of linear maps [3], as well as provide a useful set of building blocks for the types of maps one would define using a basis. This feature has been exploited numerous times, particularly within the program of categorical quantum mechanics [1,2,4].
This paper adds another piece to the story connecting SCFAs and bases. Namely, that the axioms of dagger-multigraph categories are complete for the category of finite-dimensional vector spaces where each object has a chosen basis, i.e. the category of finite matrices.
Preliminaries
Definition 2.1. A symmetric monoidal category C is called a dagger-symmetric monoidal category if there exists an identity-on-objects monoidal functor (−)† : C → C^op such that † ∘ † = 1_C and α† = α⁻¹, λ† = λ⁻¹, γ† = γ⁻¹, for α, λ, γ the associativity, unit, and symmetry natural transformations of C.
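As a standard concrete instance (our illustration; the text does not spell this out here), finite matrices over the complex numbers form a dagger-symmetric monoidal category: the dagger of a matrix is its conjugate transpose, and the coherence maps are permutation matrices, hence unitary. In symbols:
\[
(g \circ f)^\dagger = f^\dagger \circ g^\dagger, \qquad
(f \otimes g)^\dagger = f^\dagger \otimes g^\dagger, \qquad
\alpha^\dagger = \alpha^{-1}, \quad
\lambda^\dagger = \lambda^{-1}, \quad
\gamma^\dagger = \gamma^{-1}.
\]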
Definition 2.2. A special commutative Frobenius algebra (SCFA) in a symmetric monoidal category C consists of a tuple (A, µ : A ⊗ A → A, η : I → A, δ : A → A ⊗ A, ǫ : A → I) such that (A, µ, η) is a commutative monoid, (A, δ, ǫ) is a cocommutative comonoid, and the Frobenius and speciality equations hold:
(1_A ⊗ µ) ∘ (δ ⊗ 1_A) = δ ∘ µ = (µ ⊗ 1_A) ∘ (1_A ⊗ δ),    µ ∘ δ = 1_A.
If C is a dagger-symmetric monoidal category, a dagger-SCFA (dSCFA) additionally satisfies the equations δ = µ† and ǫ = η†. Henceforth, we will always be working in a symmetric monoidal category, so we will use string diagram notation for morphisms; in diagrammatic notation, µ and δ are drawn as the merging and splitting of wires, η and ǫ as starting and terminating a wire, and the axioms of an SCFA become topological moves on diagrams. The following is a well-known folk theorem about SCFAs. Theorem 2.3 (Spider). Suppose f and g can each be written as a connected string diagram from A⊗m to A⊗n consisting just of the morphisms from a single SCFA; then f = g.
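As a concrete example (standard in the literature relating SCFAs to bases, cf. [3]; the rendering is ours), a chosen basis {e_1, ..., e_n} of K^n determines an SCFA in the category of matrices over K:
\[
\mu(e_i \otimes e_j) = \delta_{ij}\, e_i, \qquad
\eta(1) = \sum_i e_i, \qquad
\delta(e_i) = e_i \otimes e_i, \qquad
\epsilon(e_i) = 1,
\]
where \delta_{ij} is the Kronecker delta. Speciality holds since \mu(\delta(e_i)) = \mu(e_i \otimes e_i) = e_i, and when K carries an involution, \delta and \epsilon are the adjoints of \mu and \eta, so this is in fact a dSCFA.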
Proof. The proof proceeds by induction on the size of diagrams, showing that any connected diagram can be rewritten into a canonical diagram consisting of a tree of multiplies followed by an upside-down tree of comultiplies. See e.g. [6], Theorem 3.2.28. For an alternative derivation via distributive laws see [7], §5.4.
We express the fact that there is only one connected diagram with m inputs and n outputs by collapsing connected diagrams into a single dot. Definition 2.4. The canonical tree/co-tree morphisms are called spiders; we depict the spider with m inputs and n outputs as a single dot with m legs in and n legs out. Definition 2.5. A symmetric monoidal category where every object is equipped with an SCFA is called a multigraph category. Similarly, a dagger-symmetric monoidal category where every object is equipped with a dSCFA is called a dagger-multigraph category.
A strong (dagger-)multigraph functor is a strong (dagger-)symmetric monoidal functor that preserves the multigraph structure. A strong monoidal functor is a triple (F, p^F, U^F) consisting of a functor F : C → D and natural isomorphisms p^F_{A,B} : F A ⊗ F B → F(A ⊗ B) and U^F : I → F I, subject to the usual coherence conditions; preserving symmetries can be written using 'functorial box' notation (cf. [8]). A strong dagger-symmetric monoidal functor additionally satisfies F(f†) = (F f)†, and preserving the multigraph structure means that F carries the chosen SCFA on each object A to the chosen SCFA on F A, up to the coherence isomorphisms p^F and U^F. Dagger-multigraph functors are important in particular because they define models of the free dagger-multigraph category into a semantic category. We will make use of the following construction in building the particular models we use to show completeness in Section 6. Let MGCat be the 2-category of multigraph categories, strong monoidal functors preserving the SCFA structure, and monoidal natural transformations. Let MGCat† be the 2-category of dagger-multigraph categories, dagger-strong monoidal functors preserving the dSCFA structure, and monoidal natural transformations.
Note how we only assume a multigraph category is symmetric monoidal, rather than traced symmetric monoidal or compact closed. That is because this extra structure comes for free. Theorem 3.3. A (dagger-)multigraph category is (dagger-)compact closed, with a coherent choice of self-dual compact structure for each object A, where by coherent we mean for all objects A, B, the relevant coherence equations are satisfied (equations omitted here). We define the compact structure on A in terms of its SCFA: the cup is δ ∘ η : I → A ⊗ A and the cap is ε ∘ µ : A ⊗ A → I.
We can use the Frobenius axioms to show this cap and cup satisfy the snake equations.
The coherence equations follow from (co)commutativity of the Frobenius algebras and the definition of the SCFA on A ⊗ B. Remark 3.4. This notion is very close to that of 'compact closed with a coherent self-duality', as introduced by Selinger in [10]. When a category C has a monoidal product that is free on objects, the two notions coincide. We simply let the (non-self) dual of A = B_1 ⊗ ⋯ ⊗ B_n be the 'reversed' object A* := B_n ⊗ ⋯ ⊗ B_1 and define caps, cups, and the self-duality A ≅ A* in the obvious way.
If C is not free on objects, we can pass, by a standard construction, to a new category that is free on objects, and then define the requisite structure.
Definition. A dot-diagram F for a (dagger) signature Σ consists of the following data: • a finite set B_F of boxes, labelled by morphisms of Σ; • a finite set D_F of multi-edges, or dots, labelled by objects of Σ; • wiring surjections θ_in, θ_out attaching the inputs and outputs of boxes (and of the diagram itself) to dots, satisfying compatibility conditions with Σ. Dot-diagrams are much like string diagrams, except that rather than requiring each wire to be associated with precisely one input and output (either to a box or the diagram as a whole), we allow a single 'wire', which we now call a dot, to be connected to many inputs/outputs. Thus, dots serve as multi-edges in dot-diagrams. Normal string diagrams can be seen as a subset of dot-diagrams, where we write dots with one input and one output just as wires. It will simplify the proof to first restrict to the case where diagrams have no (global) inputs/outputs, and no 'free-floating' dots. We call these simple closed dot-diagrams. The only significant difference with the definition from [9] is that the bijections θ_in, θ_out are replaced with surjections. This makes the 'dots' in dot-diagrams serve as multi-edges, rather than single wires.
When considering homomorphisms of simple, closed dot-diagrams, three of the equations above become redundant: the first is forced by surjectivity and the fact that connections between boxes and dots must respect Σ, and the other two are vacuously satisfied for closed diagrams. Furthermore, we only need to require the box function Ψ_b to be surjective in order to obtain a surjective homomorphism of dot-diagrams.
Theorem 4.4. Let (Ψ_b, Ψ_d) : F → G be a homomorphism of dot-diagrams, and let Ψ_b be a surjective function. Then Ψ_d is also surjective.
Proof. Since Ψ_b is a surjection, it induces a surjection Ψ̄_b : Inputs_F → Inputs_G. Then, by the homomorphism conditions, the square relating Ψ̄_b, Ψ_d and the wiring surjections θ^F_in, θ^G_in commutes, so Ψ_d ∘ θ^F_in = θ^G_in ∘ Ψ̄_b. If Ψ_d were not surjective, this composite of surjections would not be surjective, which is a contradiction. Thus Ψ_d is surjective.
Unlike in [9], it is the property of surjectivity, not isomorphism, that lifts from the box function to the whole homomorphism. In fact, there are examples of homomorphisms with bijective box functions whose dot function is merely a surjection, and not a bijection (an example diagram is omitted here).
Definition 4.5. For a (dagger) monoidal signature Σ, the (dagger) multigraph category Dot(Σ) of dot-diagrams is defined as follows: • Objects are words in Obj*. • Morphisms are (isomorphism classes of) dot-diagrams F such that for all i, j, the dot wired to the i-th global input carries the label dom(F)[i] and the dot wired to the j-th global output carries the label cod(F)[j], where dom(F)[i] is the i-th object in the input word of F and cod(F)[j] the j-th object in the output word.
• Composition G • F is defined by pushing out over adjacent dots, where θ^G_in and θ^F_out are (restrictions of) the wiring functions of G and F. The labelling ℓ^{G•F}_b is then induced by the coproduct of boxes, ℓ^{G•F}_d by the pushout of dots, and the wiring functions are defined from those of F and G in the evident way. • The monoidal product is defined as the disjoint union of two dot-diagrams, where I_F + I_G and O_F + O_G are chosen coproducts of the input and output sets. The monoidal unit is given by the empty dot-diagram.
• Swap maps are defined as pairs of dots with swapped outputs (or inputs). • For a fixed object A, we define the SCFA structure on A: each map is the unique dot-diagram with a single dot of type A and the appropriate number of inputs/outputs. • If Σ is a dagger signature, the dagger structure is obtained by changing all of the boxes to their daggered versions and interchanging the role of inputs/outputs, where we change an element 'in_j' to 'out_j' as appropriate, and vice versa. This is very close to the combinatoric string diagram presentation of the free traced symmetric monoidal category given in [5]. The biggest departure is in composition: in order to obtain a new dot-diagram as a composition of dot-diagrams, adjacent dots fuse together. Theorem. The category Dot(Σ) is the free (dagger-)multigraph category over a signature Σ. In other words, for any multigraph category C, strong multigraph functors Dot(Σ) → C correspond, up to an equivalence of categories, to valuations of the signature Σ in C; likewise for Σ a dagger-signature and C a dagger-multigraph category. Proof. Since the free category is characterised up to equivalence, let us show an equivalence (actually an isomorphism) between Dot(Σ) and a more 'obvious' representation of the free category. Let Σ′ be a signature containing Σ and additional maps (µ_A, η_A, δ_A, ε_A) for each A in the objects of Σ. Let Free(Σ′) be the free traced symmetric monoidal category of Σ′ and let Free(Σ′)_≡ be the same, but with morphisms taken modulo the SCFA equations. This becomes a dagger-multigraph category in the obvious way, and by construction, forms the free dagger-multigraph category.
Since Free(Σ′) is the free traced SMC, we can take its morphisms to be string diagrams. Define a functor F : Free(Σ′)_≡ → Dot(Σ) that is identity-on-objects. For morphisms, it sends each box in Σ to itself, each connected component of morphisms in Σ′\Σ to a dot, and each 'blank' wire (i.e. a wire not otherwise touching a morphism in Σ′\Σ) to a dot with one input and one output.
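As an illustrative aside (not part of the original development), the 'fusion of adjacent dots' used in composition can be computed with a union-find structure over the disjoint union of dots, which is exactly how the pushout of finite sets along the shared boundary is calculated. The sketch below, in Python with a deliberately simplified data representation, is our own illustration and not code accompanying the paper.

# Sketch: fusing dots when composing dot-diagrams G • F.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def compose_dots(n_dots_F, n_dots_G, theta_out_F, theta_in_G):
    """theta_out_F[i]: dot of F wired to F's i-th output;
    theta_in_G[i]: dot of G wired to G's i-th input.
    Returns a map from old dots to the fused dots of G • F."""
    uf = UnionFind(n_dots_F + n_dots_G)
    for i in range(len(theta_out_F)):
        # dots of G are shifted by n_dots_F in the disjoint union
        uf.union(theta_out_F[i], n_dots_F + theta_in_G[i])
    reps = {uf.find(d) for d in range(n_dots_F + n_dots_G)}
    relabel = {r: k for k, r in enumerate(sorted(reps))}
    return [relabel[uf.find(d)] for d in range(n_dots_F + n_dots_G)]

# Example: F has 2 dots wired to its two outputs; G has 1 dot
# receiving both of its inputs.  All three dots fuse into one.
print(compose_dots(2, 1, [0, 1], [0, 0]))   # -> [0, 0, 0]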
The category Mat(R)
While it is more typical to define a category of matrices whose objects are natural numbers and whose matrices are indexed by sets of the form {1, . . . , m}, it will be more convenient for our purposes to use an equivalent category with arbitrary finite sets as indices. It will also be convenient for the proofs to define Mat(R) for all unital semirings, not just fields. Thus, to fix notation, we will now define this version of Mat(R) and give its (dagger-)multigraph structure.
Let R be a unital semiring with a (possibly trivial) involution operation x ↦ x̄. Let Mat(R) be the category whose objects are finite sets I, J, . . . and whose morphisms ψ : I → J are |J| × |I| matrices, i.e. matrices whose rows are indexed by j ∈ J and whose columns are indexed by i ∈ I, with composition and identities defined as usual.
Composition is given by matrix multiplication, (g ∘ f)^l_i = Σ_k f^k_i g^l_k, where the juxtaposition f^k_i g^l_k means multiplication in R. Note we have adopted tensor notation where inputs/columns appear as lower indices and outputs/rows appear as upper indices. We define a monoidal product on objects as I ⊗ J := I × J and on morphisms as the Kronecker product of matrices, (f ⊗ g)^{jl}_{ik} = f^j_i g^l_k. Note that we typically drop brackets and commas when there can be no confusion. Mat(R) is a dagger-monoidal category, letting (f†)^i_j := f̄^j_i, the conjugate-transpose. The multigraph structure is given by 'generalised Kronecker delta' matrices.
Clearly any connected diagram of these matrices just becomes a bigger Kronecker delta, with the general case being

(S^n_m)^{j_1...j_n}_{i_1...i_m} = 1 if for some k and all α, i_α = j_α = k, and 0 otherwise.

From this and the observation that S^1_1 is the identity matrix, all the SCFA identities follow. It is possible to characterise functors out of the free multigraph category in terms of matrix (i.e. tensor) contraction. For a dot-diagram F and a multigraph functor M : Dot(Σ) → Mat(R), let Idx(F, M) be the set of indexing functions for F. The elements of Idx(F, M) can be seen as tuples, but for our purposes, it will be more convenient to write them using (dependent) function notation. That is, each φ ∈ Idx(F, M) assigns an element φ(d) ∈ M(ℓ^F_d(d)) to each dot in F. Theorem 5.1. Let F : I → I be a morphism in Dot(Σ) represented by a closed, simple dot-diagram (up to isomorphism), and let M be a multigraph functor. Then

M(F) = Σ_{φ ∈ Idx(F,M)} Π_{b ∈ B_F} M(ℓ^F_b(b))^{φ(θ^F_out(b,1)) ... φ(θ^F_out(b,n))}_{φ(θ^F_in(1,b)) ... φ(θ^F_in(m,b))},

where Σ and Π are sums and products in R, respectively.
Proof. The only difference between this interpretation and the usual interpretation of a string diagram as a tensor contraction is that a single index can be repeated any number of times, not just as a single pair of upper and lower indices. Rather than summing over dots, we could have instead summed over individual wires. Since no wire is connected to more than one box, it is uniquely identified by being the i-th input or j-th output of box b. Each of these wires is then connected to a map of the form S^n_m.
By commutativity, the order of the indices on the S-maps doesn't matter, so we have abused notation by writing them as the appropriate sets. Since the maps S^n_m are generalised Kronecker deltas, we can simplify this expression by removing redundant indices.
So we are left with one distinct index corresponding to each dot, and the indices which used to be labelled by wires are now labelled according to the dot each wire was connected to. This is precisely the form stated in the theorem.
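As a sanity check of this semantics, the following NumPy sketch (our own illustration; the dimension 2 and the one-box example are arbitrary choices, not from the paper) builds the generalised Kronecker deltas S^n_m, verifies the speciality equation µ ∘ δ = 1, and evaluates a closed one-box diagram by summing over index assignments to dots, recovering the trace exactly as Theorem 5.1 predicts.

import numpy as np
from itertools import product

d = 2  # size of the index set interpreting a single object

def S(m, n):
    """Generalised Kronecker delta with m inputs and n outputs:
    the entry is 1 iff all n + m indices are equal."""
    T = np.zeros((d,) * (n + m))
    for k in range(d):
        T[(k,) * (n + m)] = 1.0
    return T

# speciality: mu . delta = identity, with mu = S(2,1), delta = S(1,2)
mu, delta = S(2, 1), S(1, 2)          # axes: outputs first, then inputs
assert np.allclose(np.einsum('oab,abi->oi', mu, delta), np.eye(d))

def eval_closed(boxes, wiring, n_dots):
    """boxes: matrices M(f) with one input and one output (for brevity);
    wiring[b] = (dot_in, dot_out).  Sums over assignments phi of index
    values to dots, as in Theorem 5.1."""
    total = 0.0
    for phi in product(range(d), repeat=n_dots):
        p = 1.0
        for M, (di, do) in zip(boxes, wiring):
            p *= M[phi[do], phi[di]]
        total += p
    return total

f = np.array([[1.0, 2.0], [3.0, 4.0]])
# one box whose input and output attach to the same dot: a closed loop
print(eval_closed([f], [(0, 0)], n_dots=1))   # 5.0 = trace(f)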
Completeness of multigraph categories
In this section, we will prove the completeness theorem for multigraph categories, without the dagger structure. In the next section, we will show the dagger case by tweaking the proof a little bit. Throughout this and the next section, let K be a field of characteristic 0. We begin by proving the simple, closed case, then generalising via corollaries. Theorem 6.1. Two simple, closed dot-diagrams F and G are isomorphic iff ⟦F⟧ = ⟦G⟧ for all multigraph functors ⟦−⟧ : Dot(Σ) → Mat(K). Proof. Suppose for simple, closed dot-diagrams F and G that F ≇ G. Then, we need to show that there exists some functor ⟦−⟧ : Dot(Σ) → Mat(K) such that ⟦F⟧ ≠ ⟦G⟧. Suppose firstly that they have a different number of dots. Then, we can distinguish them by the 'dot-counting' functor ⟦−⟧_d. This sends every object in Σ to a two-element set, and every morphism to the all-ones matrix: (⟦f⟧_d)^{y_1,...,y_n}_{x_1,...,x_m} = 1. Then, using the form of evaluation given by Theorem 5.1, we can compute ⟦F⟧_d = 2^{|D_F|}, which differs from ⟦G⟧_d = 2^{|D_G|} whenever the numbers of dots differ. Therefore, assume F and G have the same number of dots. We will now construct a functor ⟦−⟧_{FG} that distinguishes them. We do this in two phases. First, let X := {X_b | b ∈ B_F} be a set of variables indexed by the boxes in F and let Z[X] be the polynomial ring over these variables. Then, we will construct a functor ⟦−⟧_F : Dot(Σ) → Mat(Z[X]). On objects, let ⟦A⟧_F be the (finite) set of all wires (i.e. dots) in F labelled by A. For each box b labelled by f, let Δ_b be the matrix whose entry is 1 when its lower indices are exactly the dots attached to the inputs of b and its upper indices are exactly the dots attached to the outputs of b, and 0 otherwise. The interpretation itself is then defined as a sum over all of the boxes in F labelled f: ⟦f⟧_F := Σ_{b : ℓ^F_b(b) = f} X_b Δ_b. We can then compute ⟦G⟧_F by writing it in the form given by Theorem 5.1.
Applying distributivity yields

⟦G⟧_F = Σ_{ψ : B_G → B_F} Σ_{φ ∈ Idx(G, ⟦−⟧_F)} Π_{b ∈ B_G} c(b),   (2)

where c(b) = X_{ψ(b)} if ℓ^F_b(ψ(b)) = ℓ^G_b(b), φ(θ^G_in(i, b)) = θ^F_in(i, ψ(b)) and φ(θ^G_out(b, j)) = θ^F_out(ψ(b), j) for all i, j, and c(b) = 0 otherwise. We will refer to the coefficient in ⟦G⟧_F of Π_b X_b as the 'magic coefficient'. Clearly if two dot-diagrams G, G′ have different magic coefficients, then the polynomials ⟦G⟧_F and ⟦G′⟧_F will not be equal. Why do we call it the magic coefficient? Since the summation in (2) is over a pair of functions φ, ψ, computing the value of the magic coefficient amounts to identifying those ψ, φ for which the conditions above are satisfied, with each box b′ ∈ B_F occurring as ψ(b) exactly once. The latter two conditions give precisely what it means for (ψ, φ) to be a homomorphism of dot-diagrams. As the product ranges over b ∈ B_G, the fact that these are satisfied exactly once means that ψ is a bijection of boxes. Let BBij(G, F) be the set of homomorphisms (ψ, φ) from G to F such that the box function is a bijection. It follows that the magic coefficient of ⟦G⟧_F is |BBij(G, F)|.
In particular, ψ is a surjection, so by Theorem 4.4, so too is φ. But, since F and G have the same number of dots, a surjection from G to F is actually an isomorphism of dot-diagrams. Since we assumed F ≇ G, the magic coefficient of ⟦G⟧_F must be 0, whereas the magic coefficient of ⟦F⟧_F is |Aut(F)| ≥ 1. From this, we can conclude that ⟦F⟧_F ≠ ⟦G⟧_F. Since the polynomial p := ⟦F⟧_F − ⟦G⟧_F is non-zero and K is infinite, we can choose values in K for the variables X_b at which p does not vanish. Evaluating at these values defines a functor ⟦−⟧_{FG} : Dot(Σ) → Mat(K) with ⟦F⟧_{FG} − ⟦G⟧_{FG} ≠ 0, and hence ⟦F⟧_{FG} ≠ ⟦G⟧_{FG}.
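The dot-counting step is equally easy to check numerically. In the following sketch (our own, self-contained illustration with the evaluation written inline), every box is interpreted as the all-ones 2x2 matrix, so the evaluation of a closed diagram just counts index assignments and returns 2 raised to the number of dots.

import numpy as np
from itertools import product

ones = np.ones((2, 2))

def count_eval(boxes, wiring, n_dots):
    # all-ones interpretation: every summand equals 1, so the sum
    # counts index assignments, i.e. 2**n_dots
    return sum(
        np.prod([M[phi[do], phi[di]] for M, (di, do) in zip(boxes, wiring)])
        for phi in product(range(2), repeat=n_dots)
    )

print(count_eval([ones], [(0, 0)], 1))                 # 2.0 = 2**1
print(count_eval([ones, ones], [(0, 1), (1, 0)], 2))   # 4.0 = 2**2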
Extending to all morphisms in Dot(Σ) is now straightforward. First, we eliminate the 'simple' condition.
Corollary 6.2. Two closed morphisms F and G in Dot(Σ) are equal iff ⟦F⟧ = ⟦G⟧ for all multigraph functors ⟦−⟧ : Dot(Σ) → Mat(K). Proof. Suppose F and G are closed, but not simple. The following functor counts the 'free dots' of each type in F and G, i.e. those not connected to a box. Fix a set of distinct prime numbers {p_A}, one for each object A labelling a dot in F or G.
This functor sends every simple closed diagram to 1 and a closed diagram consisting of a single free dot of type A to the number p_A. Writing F = F′ ⊗ d_F and G = G′ ⊗ d_G, where F′, G′ are simple and d_F, d_G consist only of free dots, unique prime factorisation shows that this functor separates F and G whenever d_F ≠ d_G. Otherwise, let ⟦−⟧_{F′G′} be the functor defined in the proof of Theorem 6.1; it is then easy to check that it separates F and G whenever F′ ≇ G′. For diagrams with inputs and outputs, one closes them off with fresh boxes i_A on each input of type A and o_B on each output of type B, obtaining closed diagrams E(F) and E(G); it follows by functoriality that ⟦F⟧ = ⟦G⟧ in every model forces ⟦E(F)⟧ = ⟦E(G)⟧, and hence E(F) = E(G) and F = G.
One might be tempted to short-circuit this step using the SCFA structure (i.e. units and counits) to provide i_A and o_B, but this does not work, as a simple counterexample shows (the example diagram is omitted here).
Completeness of dagger-multigraph categories
We now turn to proving the completeness theorem of dagger-multigraph categories for Mat(K), where K is a field with non-trivial involution. Note that it is necessary to take a non-trivial involution, otherwise new equations become true in all models. For example, scalars s : I → I would automatically satisfy s† = s in all models, which is not provable by the dagger-multigraph axioms. Also, the 'transposition' defined in terms of the Frobenius structure would become equal to the dagger, which again is not provable by the dagger-multigraph axioms. It is natural to consider these two expressions not to be equal, because the LHS computes the conjugate-transpose whereas the RHS computes the transpose. Theorem 7.1. Two simple, closed dot-diagrams F and G over a dagger signature are isomorphic iff ⟦F⟧ = ⟦G⟧ for all dagger-multigraph functors ⟦−⟧ : Dot(Σ) → Mat(K). Proof. The proof is almost identical to that of Theorem 6.1. The main difference is that we define the functor ⟦−⟧_F in terms of a different ring. Let Z[X ∪ X̄] be the polynomial ring taking as variables X as before, along with a second copy X̄ := {X̄_b | b ∈ B_F}. This becomes a ring with involution by taking the involution to be the ring homomorphism that interchanges X_b ↔ X̄_b and leaves the other ring elements fixed. We can then define ⟦−⟧_F as before, now attaching the conjugated variable X̄_b to daggered occurrences of boxes. Then, by a similar calculation to Theorem 6.1 (and in fact, a nearly identical calculation to that in [9]) we get the following value for the closed, simple dot-diagram G: ⟦G⟧_F = Σ_{ψ, φ} Π_{b ∈ B_G} c(b), where c(b) = X_{ψ(b)} if ℓ^F_b(ψ(b)) = ℓ^G_b(b) and φ(θ^G_in(i, b)) = θ^F_in(i, ψ(b)) and φ(θ^G_out(b, j)) = θ^F_out(ψ(b), j) for all i, j; c(b) = X̄_{ψ(b)} if ℓ^F_b(ψ(b)) = ℓ^G_b(b)† and φ(θ^G_out(b, i)) = θ^F_in(i, ψ(b)) and φ(θ^G_in(j, b)) = θ^F_out(ψ(b), j) for all i, j; and c(b) = 0 otherwise. When F and G are simple and closed, we can still identify the 'magic coefficient' of Π_b X_b, whose value counts the cardinality of BBij(G, F). Thus, if we first use the functor ⟦−⟧_d to check whether F and G have the same number of dots, then ⟦F⟧_F = ⟦G⟧_F iff F ≅ G. An involution-preserving semiring homomorphism will define a dagger-multigraph functor from Mat(Z[X ∪ X̄]) to Mat(K). Again, we define the functor ev : Mat(Z[X ∪ X̄]) → Mat(K) in terms of an evaluation homomorphism, but this time one that sends X_b to some element k ∈ K, which in turn fixes the value of X̄_b to be k̄. Thus, composing ⟦−⟧_F with ev yields the required dagger-multigraph functor. | 2015-08-19T08:31:21.000Z | 2014-06-23T00:00:00.000 | {
"year": 2014,
"sha1": "388eda28d3f9610112a4a6a67824bc5e1ed46398",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "388eda28d3f9610112a4a6a67824bc5e1ed46398",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
119234250 | pes2o/s2orc | v3-fos-license | Particle diffusion and localized acceleration in inhomogeneous AGN jets - Part I: Steady-state spectra
We study the acceleration, transport, and emission of particles in relativistic jets. Localized stochastic particle acceleration, spatial diffusion, and synchrotron as well as synchrotron self-Compton emission are considered in a leptonic model. To account for inhomogeneity, we use a 2D axisymmetric cylindrical geometry for both relativistic electrons and magnetic field. In this first phase of our work, we focus on steady-state spectra that develop from a time-dependent model. We demonstrate that small isolated acceleration regions in a much larger emission volume are sufficient to accelerate particles to high energy. Diffusive escape from these small regions provides a natural explanation for the spectral form of the jet emission. The location of the acceleration regions within the jet is found to affect the cooling break of the spectrum in this diffusive model. Diffusion-caused energy-dependent inhomogeneity in the jets predicts that the SSC spectrum is harder than the synchrotron spectrum. There can also be a spectral hardening towards the high-energy section of the synchrotron spectrum, if particle escape is relatively slow. These two spectral hardening effects indicate that the jet inhomogeneity might be a natural explanation for the unexpectedly hard γ-ray spectra observed in some blazars.
INTRODUCTION
The inner parts of relativistic jets in Active Galactic Nuclei (AGNs) are known to emit radiation in every energy band we can observe. The actual size and location of the emission region, e.g. those in blazars, are still under debate (Ghisellini & Tavecchio 2009; Marscher 2013). Their size and distance make them challenging to resolve with our current imaging capability, except maybe in a few cases where mm-VLBI observations are paving the way to resolve the base of the jet (Lu et al. 2013; Doeleman et al. 2012). For this reason, many theoretical efforts concerning AGN jets assume a homogeneous emission region as the source of the multiwavelength emission (e.g. Dermer et al. 2009).
However, increasing temporal coverage of multiwavelength data and modeling results begin to suggest that single-zone homogeneous models are not sufficient to describe the complex phenomena. The observation that blazars exhibit variability as fast as 3~5 minutes (Albert et al. 2007; Aharonian et al. 2007), and the detection of γ-rays above 100 GeV from several flat-spectrum radio quasars (FSRQs) without signature of γ-γ absorption by soft photons in the broad-line region (MAGIC Collaboration et al. 2008; H.E.S.S. Collaboration et al. 2013; Aleksić et al. 2011), indicate that the γ-ray emission region is extremely small, and at the same time located parsecs away from the central AGN engine. This would require an unusually small angle of collimation, if the emission region covers the entire cross-section of the jet. One resolution to this conflict is the hypothesis that the larger jet contains small high-energy regions, presumably resulting from turbulence that is generated locally, far away from the central black hole. Apparently, single-zone homogeneous models are not adequate to describe these scenarios. (See Marscher 2014 for an example of such a turbulent blazar emission model.) In the picture considering small-scale structures, fast escape of particles means that the highest-energy particles could have already cooled before they can travel far, while particles with lower energy still survive and occupy significantly larger regions. This consideration suggests that to account for the multiwaveband radiation signals of AGN jets, one must consider inhomogeneous models spanning certain scale ranges to cover both the acceleration region and the region with the escaped particles. Various efforts have been made to model inhomogeneous jets (e.g. Ghisellini et al. 1985; Sokolov et al. 2004; Sokolov & Marscher 2005; Graff et al. 2008), although usually the details of the particle acceleration were not considered. Simplified approaches have been adopted to treat the acceleration region and the emission region separately (Kirk et al. 1998), although the emission from the acceleration region is not considered in their case. Recently, Richter & Spanier (2015) built a one-dimensional spatially-resolved model that accounts for the particle acceleration process, but the important light-travel-time effects (LTTEs) are not considered. Their geometry is suited for a laterally homogeneous shock structure, but not for the study of 2D/3D small-scale structures such as turbulent acceleration regions.
Figure 1. Sketch of the 3D geometry of the particle diffusion and localized acceleration in the axisymmetric cylinder (axes r and z; grid cells indexed nr = 1-15, nz = 1-20). The red region represents the acceleration region, where the acceleration causes the particles to have the highest energy density. The spatial diffusion causes its surrounding regions to be still relatively energetic (orange zone), while in the yellow zone the particles have already cooled significantly. The actual particle distribution is shown in more detail in Fig. 5 and other figures as 2D maps.
The modeling of blazar SEDs usually requires very fast particle escape, considering the electron spectrum form required to match the observation. The required escape time scale is usually not much longer than the light-crossing time of the blazar emission region (Katarzyński et al. 2006; Chen et al. 2014). For this fast escape to be physically feasible, this escape should refer to escape from the accelerator, probably some smaller-scale structures (Giannios 2013) within the emission region. Particles experience cooling and diffusion, but no acceleration outside of these regions. Throughout this paper, we will call the entire region of the jet contributing to the blazar radiation the 'emission region'. The smaller subregion where particle acceleration takes place is referred to as the 'acceleration region', while the rest of the 'emission region' takes the name 'diffusion region'.
Figure 2. A sketch of the relationship between various time scales related to particle acceleration, cooling, and escape.
Particle acceleration mechanisms that predict localized acceleration confined in small regions include magnetic reconnection (Guo et al. 2014; Sironi & Spitkovsky 2014), which can be triggered through turbulence (Zhang & Yan 2011), as well as various acceleration mechanisms at the shock front (Blandford & Eichler 1987; Sironi & Spitkovsky 2011). The thin but extended structure of the shocks (e.g., internal, external, or standing shocks; Spada et al. 2001; Kirk et al. 1998) means that they can facilitate fast particle escape, but in order to explain the fast variability in blazar emission, internal shocks produced very close to the jet base (Rachen et al. 2010), or shocks associated with mini-jets (Giannios et al. 2009) or, again, small-scale turbulent structures (Marscher 2014), would be required.
Besides the consideration of emission region structure and particle acceleration, another major focus of relativistic jet models has been the radiative mechanism. The SEDs of blazars usually consist of two components, with the first peaking at infrared to X-ray frequencies, and the second at X-ray to γ-ray energies (Ulrich et al. 1997; Fossati et al. 1998). Both hadronic and leptonic models have been frequently discussed, and have been successfully applied to blazars in most cases (Böttcher et al. 2013). The two kinds of models agree in explaining the low-frequency (below ultra-violet or X-ray) component of the blazar emission as electron synchrotron emission, but differ in their interpretation of the origin of the high-energy (above X-ray) component. In the hadronic models protons are responsible for the high-energy radiation through processes such as proton synchrotron emission (Aharonian 2000; Mücke & Protheroe 2001), pp pion production (Pohl & Schlickeiser 2000), or p-γ pion production (Mannheim & Biermann 1992) with subsequent synchrotron emission of pion decay products (Mannheim 1993; Rachen 2000; Mücke et al. 2003). The leptonic models on the other hand assume that the electrons, and possibly also positrons, in addition to providing the low-frequency emission through synchrotron, are also responsible for the high-energy emission through inverse Compton (IC) scattering (e.g. Maraschi et al. 1992). Depending on whether the seed photons of this scattering are the synchrotron photons the leptons themselves produced, or photons with origin external to the jet, the leptonic models can be further classified into synchrotron self-Compton (SSC) models and external Compton (EC) models. The EC models can then differ from each other based on the various possible sources of external seed photons, such as the accretion disc (Dermer et al. 1992), the broad line region (Ghisellini & Madau 1996), or the dusty torus. The complexity of the SSC models comes from the mathematical treatment of the nonlinear cooling of electrons in the SSC process (Zacharias & Schlickeiser 2013; Zacharias 2014), which is further complicated by the light retardation of the synchrotron photons (part of the LTTEs, see the discussion by Sokolov et al. 2004). Traditionally the SSC models are usually associated with BL Lac objects while the EC models are usually associated with FSRQs. This is because, by definition, external emission lines are readily seen in FSRQs, but not in BL Lacs. But whether this distinction in radiation mechanism is real or not remains an open question (Chen et al. 2012).
In order to study inhomogeneous jets, Chen et al. (2011) have built a 2D leptonic model that takes into account all the LTTEs, including the external ones that cause delayed observation of further-away cells, and the internal ones that cause delayed arrivals of synchrotron photons in the SSC scattering. The model has been applied to cases where the inhomogeneity is caused by plasma crossing a standing perturbation. In those cases the inhomogeneity is mostly along the longitude of the jet, while the radial structure remains largely homogeneous. Direct particle exchange between cells is also neglected, based on the fact that the Larmor radius of the electrons is sufficiently small, and the assumption that the magnetic field is highly tangled. However, the nature of particle diffusion also depends on the turbulence properties of the magnetic field, which are poorly known. Under certain circumstances, the diffusion between cells can be very important.
In this work we extend the model of Chen et al. (2011) by implementing particle diffusion between cells, as one mechanism for realistic particle escape. Combined with our direct handling of the particle acceleration using the Fokker-Planck equation, we investigate both spatial and momentum diffusion of particles at the same time. For the first time, our modeling of the particle evolution and emission encompasses both particles inside the accelerator and those already escaped from the accelerator. A sketch of the acceleration and emission regions is shown in Fig. 1. Although our model is a time-dependent one, in this paper we focus on what kind of steady-state spectra emerge from the time-dependent solution, and how. The flare-related variations will be the topic of discussion in a forthcoming paper.
As a simplification that permits understanding some, but not all, of the physics that is captured in the 2D model, we will first introduce a semi-analytical two-zone model in §2. The methods used in the 2D model will be described in §3, followed by the simulation results in §4. Discussion and conclusions can be found in §5 and §6.
Throughout this paper, we will use non-primed notations for the quantities in the jet frame, and primed ones for those in the observer's frame. Subscripts 'em', 'acc' and 'dif' are used to denote parameters for the emission, acceleration and diffusion regions, respectively.
A TWO-ZONE MODEL
We first discuss the particle and emission spectra resulting from a semi-analytical two-zone model, which treats the acceleration and diffusion regions as two separate model zones. In this two-zone model it is assumed that particles are injected and accelerated in a small spherical acceleration zone. Those particles escape, and are subsequently injected into a much larger diffusion zone that surrounds the acceleration zone. There is no particle acceleration in the diffusion zone, but radiative cooling and further particle escape do play a role. In the two-zone model we only account for the synchrotron cooling, while IC cooling is not considered. We calculate analytically, with the help of numerical integrations, the electron spectrum of particles in both the acceleration and the diffusion zones. Then we estimate the synchrotron and SSC emission from both zones. We take into account the synchrotron seed photons from both zones when calculating the SSC emission, under the spherical geometry where the acceleration zone sits in the center of the diffusion zone. The diffusion zone approximately generates a synchrotron photon energy density of $3L_{s,\rm dif}(\epsilon)/(4\pi R_{\rm dif}^2 c)$ in both the acceleration and diffusion zones, with $L_s(\epsilon)$ denoting the synchrotron luminosity as a function of photon energy in units of electron rest energy. The same energy density caused by the acceleration zone is more inhomogeneous, and approximated as $3L_{s,\rm acc}(\epsilon)/(4\pi R_{\rm acc}^2 c)$ in the acceleration zone, and $3L_{s,\rm acc}(\epsilon)/(4\pi R_{\rm dif}^2 c)$ in the diffusion zone. To match the cases we study in the 2D model, we choose $R_{\rm dif} = 8.25\,R_{\rm acc}$.
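For orientation, these seed-photon energy densities follow directly from the stated geometry; the short Python sketch below uses illustrative numbers only (the radii and luminosities are placeholders, not fitted model parameters).

from math import pi

c = 3.0e10                           # speed of light, cm/s
R_acc = 1.0e15                       # assumed accelerator radius, cm
R_dif = 8.25 * R_acc                 # diffusion-zone radius, as in the text
L_s_dif, L_s_acc = 1.0e42, 1.0e41    # illustrative luminosities, erg/s

u_from_dif = 3 * L_s_dif / (4 * pi * R_dif**2 * c)          # in both zones
u_from_acc_inside = 3 * L_s_acc / (4 * pi * R_acc**2 * c)   # in acc. zone
u_from_acc_outside = 3 * L_s_acc / (4 * pi * R_dif**2 * c)  # in dif. zone
print(u_from_dif, u_from_acc_inside, u_from_acc_outside)    # erg/cm^3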
The calculation of the steady-state electron spectrum is described in Appendix A. Four time scales, namely the acceleration time scale $t_{\rm acc}$, the cooling time scale $t_{\rm cool}$, and the escape time scales from the acceleration region, $t_{\rm esc,acc}$, and from the emission region, $t_{\rm esc,em}$, are important in determining the total electron energy distribution (EED). As illustrated in Fig. 2, the Lorentz factor $\gamma_{\rm max}$, at which the high-energy cut-off starts, is determined by a balance between $t_{\rm acc}$ and $t_{\rm cool}$; the Lorentz factor $\gamma_b$, at which there is a spectral break, is determined by the relationship between $t_{\rm esc,em}$ and $t_{\rm cool}$; the spectral index of the EED above the spectral break, $p$, is determined by the ratio between $t_{\rm acc}$ and $t_{\rm esc,acc}$.
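These balance conditions are easy to evaluate once the cooling law is fixed; the sketch below assumes synchrotron-dominated cooling with $t_{\rm cool}(\gamma) = 6\pi m_e c/(\sigma_T B^2 \gamma)$ and placeholder values for $B$, $t_{\rm acc}$ and $t_{\rm esc,em}$ (not fitted model parameters).

from math import pi

m_e, c, sigma_T = 9.109e-28, 2.998e10, 6.652e-25  # cgs units
B = 0.1            # G, assumed jet-frame magnetic field
t_acc = 1.0e5      # s, assumed acceleration time scale
t_esc_em = 1.0e6   # s, assumed escape time from the emission region

def t_cool(gamma):
    """Synchrotron cooling time, 6*pi*m_e*c / (sigma_T * B^2 * gamma)."""
    return 6 * pi * m_e * c / (sigma_T * B**2 * gamma)

# balance conditions: t_acc = t_cool(gamma_max), t_esc_em = t_cool(gamma_b)
gamma_max = 6 * pi * m_e * c / (sigma_T * B**2 * t_acc)
gamma_b = 6 * pi * m_e * c / (sigma_T * B**2 * t_esc_em)
assert abs(t_cool(gamma_max) - t_acc) < 1e-6 * t_acc
print(f"gamma_max ~ {gamma_max:.2e}, gamma_b ~ {gamma_b:.2e}")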
The synchrotron power and synchrotron spectrum are calculated in the same way as we will do in the 2D model (Chen et al. 2011). We follow Graff et al. (2008) in using the δ-function approximation to get the IC emission through a simple integration, i.e. $\epsilon_{\rm IC} = \frac{4}{3}\gamma^2\epsilon_0$, and using a step-function approximation for the Klein-Nishina effect (Thomson scattering for $\gamma\epsilon_0 < 3/4$; no scattering for $\gamma\epsilon_0 \geq 3/4$). With this approach, we integrate over the seed photon distribution to obtain the SSC emission, with the integration range limited because of the Klein-Nishina effect. The resulting EEDs have a sharp cutoff at the highest energy. This is caused by the simplification of not considering radiative cooling in computing the electron spectrum in the acceleration zone. A direct high-energy cutoff on the particle spectrum in the acceleration zone is implemented based on a posterior consideration of the cooling. Since this cutoff also affects the particle number, especially when the spectrum is hard, we make a correction to the particle number density afterwards. This ensures that with the particle escape and particle injection considered, the total particle number is conserved ($n_{\rm e,acc}/t_{\rm esc,acc} = Q$).
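This δ-function SSC approximation is simple enough to write out explicitly; the following sketch (our own, with toy power-law inputs rather than the model spectra) maps each seed photon of energy $\epsilon_0$ to $\epsilon_{\rm IC} = \frac{4}{3}\gamma^2\epsilon_0$ and discards scatterings with $\gamma\epsilon_0 \geq 3/4$, implementing the step-function Klein-Nishina cut.

import numpy as np

# Toy electron and seed-photon spectra (illustrative shapes only)
gammas = np.logspace(1, 6, 200)          # electron Lorentz factors
N_e = gammas ** -2.5
eps0 = np.logspace(-9, -4, 200)          # seed energies, units of m_e c^2
n_ph = eps0 ** -1.5

bins = np.logspace(-8, 5, 401)           # output photon-energy grid
flux = np.zeros(len(bins) - 1)

for g, ne in zip(gammas, N_e):
    thomson = g * eps0 < 0.75            # step-function Klein-Nishina cut
    e_out = (4.0 / 3.0) * g**2 * eps0[thomson]   # delta approximation
    w = ne * n_ph[thomson]               # relative scattering rate
    idx = np.clip(np.digitize(e_out, bins) - 1, 0, len(flux) - 1)
    np.add.at(flux, idx, w)
# 'flux' now holds a crude SSC photon-count histogram versus energy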
We guide our modeling using the SEDs of Mrk 421. But we refrain from matching the SEDs in detail, to avoid excessive time spent on fine-tuning of parameters. We also intend to keep our results generally applicable to different objects.
In a benchmark case for the two-zone model (Fig. 3, parameters listed in Table 1), the EED forms a typical broken power-law distribution, with a spectral break of ~1 at $\gamma \approx 3\times10^3$. Because there is a concentration of higher-energy synchrotron photons in the acceleration zone, emitted by the higher-energy electrons in that same zone, the seed-photon field for the SSC is disproportionately strong for the highest-energy electrons. This preference of SSC scattering between the high-energy electrons and the high-energy photons causes the SSC spectrum to be harder than the synchrotron spectrum, especially at frequencies below the SED peaks, above which the Klein-Nishina effect begins to play a role. This effect is clearly visible in Fig. 3 right, where the spectral indices are measured to be -0.68 at 10 eV and -0.52 at 1 GeV.
In another case (Fig. 4) the particle escape time is three times longer. This results in a harder electron spectrum, which leads to a dominance of the spectrum at the highest energy by electrons in the acceleration zone. Looking at the EED from low energy to high energy, this shift of dominance causes a spectral hardening at the highest energy, because the un-cooled electron spectrum in the acceleration zone is harder than the cooled electron spectrum in the diffusion zone. This feature is clearly visible in Fig. 4 left. But it is less apparent in Fig. 4 right, because the SED is similar to a $\gamma^3 N$ versus $\gamma$ representation, instead of the EED shown on the left, which is a $\gamma^2 N$ versus $\gamma$ representation. A careful examination of the synchrotron spectral index reveals that a slight hardening of the spectrum by 0.01 is still present in the synchrotron SED. A combination of this EED hardening and the above-mentioned hardening of the SSC spectrum results in a very hard GeV spectrum with a spectral index of about -0.3 (equivalent to a photon index of -1.3).
THE 2D MODEL
The semi-analytic two-zone model already shows some unique spectral features that are not captured in one-zone models. However, there are some significant simplifications in the analytic approach that limit the accuracy of the model, e.g. the neglect of radiative cooling in the acceleration zone, the return-flux of lower-energy particles from the diffusion zone to the acceleration zone, and the inhomogeneity within the acceleration and diffusion zones. Furthermore, the applicability of the analytic model is limited because it does not account for the LTTEs, which are important especially in studies of variability.
Taking one step beyond the two-zone analytic model, in this section we will describe the time-dependent 2D numerical model we built to study the particle acceleration and spatial diffusion in inhomogeneous jets. We consider a 2-dimensional axisymmetric jet model that is built on the Monte-Carlo/Fokker-Planck (MCFP) code developed by Chen et al. (2011). This model employs an approach combining the Monte-Carlo (MC) method for photon tracking and scattering, and the Fokker-Planck (FP) equation for the electron momentum evolution (hence the name MCFP). The full transport equation takes the form

$$\frac{\partial n(\gamma, \mathbf{r}, t)}{\partial t} = -\frac{\partial}{\partial\gamma}\left[\dot\gamma\, n(\gamma, \mathbf{r}, t)\right] + \frac{\partial^2}{\partial\gamma^2}\left[D(\gamma)\, n(\gamma, \mathbf{r}, t)\right] + Q(\gamma, \mathbf{r}, t) + \nabla\cdot\left[D_x(\gamma)\,\nabla n(\gamma, \mathbf{r}, t)\right], \quad (3)$$

where $n(\gamma, \mathbf{r}, t)$ is the differential number density of particles. The first term on the right-hand side, $\dot\gamma$, includes both radiative cooling $\dot\gamma_{\rm cool}(\gamma, \mathbf{r}, t)$ and stochastic acceleration $\dot\gamma_D(\gamma, \mathbf{r}, t) = \gamma/t_{\rm acc}$ in the acceleration region, caused by momentum diffusion of particles. The dispersion effect of the diffusion is described by the second term, also applicable in the acceleration region only, where the diffusion coefficient is $D(\gamma) = \gamma^2/(2\,t_{\rm acc})$. The third term represents the injection of particles. The fourth term is the spatial diffusion of particles, with $D_x(\gamma)$ the spatial diffusion coefficient. $D_x(\gamma)$ could easily be energy-dependent in our calculation, but in this work we restrict our discussion to the energy-independent situation to reduce the number of free parameters. This also implies that the momentum diffusion coefficient, which is associated with the spatial diffusion, should be proportional to $\gamma^2$, i.e., $t_{\rm acc}$ should be energy-independent (Shalchi 2012). Also, only under this assumption is the analytical solution used in the two-zone model available (see Appendix A). Restricting the 2D model to this assumption makes the comparison of the two models much easier. More discussion of energy-dependent $t_{\rm acc}$ can be found in Tramacere et al. (2011). We use operator splitting to treat the momentum terms and spatial terms separately. Without the spatial terms, the equation is reduced to the FP equation. The finite-difference method used to solve the FP equation is described in detail in Chen et al. (2011). The spatial terms of the transport equation are handled using the finite-element method, where we calculate the flux at each spatial boundary, and use those fluxes to update the density in each cell.
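A stripped-down, first-order version of this operator splitting is sketched below (our own explicit scheme for illustration; the actual MCFP code uses the implicit finite-difference and finite-element methods described above). It alternates a Fokker-Planck update in $\gamma$ with a finite-volume spatial-diffusion update, with reflecting (zero) boundary fluxes for a closed boundary and zero outside density for an open one.

import numpy as np

def fp_step(n, gamma, dt, t_acc, a_cool, accelerating):
    """Explicit update of dn/dt = -d/dgamma[(gamma/t_acc - a*gamma^2) n]
    + d^2/dgamma^2[gamma^2/(2 t_acc) n], very rough, for illustration."""
    drift = (gamma / t_acc if accelerating else 0.0) - a_cool * gamma**2
    dn = -np.gradient(drift * n, gamma)
    if accelerating:  # momentum-diffusion (dispersion) term
        dn += np.gradient(np.gradient(gamma**2 / (2 * t_acc) * n, gamma),
                          gamma)
    return n + dt * dn

def diffuse_step(n_cells, dx, dt, D_x, closed=True):
    """1D finite-volume spatial diffusion; n_cells is density per cell."""
    f = D_x * np.diff(n_cells) / dx                 # interior fluxes
    if closed:                                      # reflecting boundaries
        f = np.concatenate(([0.0], f, [0.0]))
    else:                                           # open: outside density 0
        f = np.concatenate(([D_x * n_cells[0] / dx], f,
                            [-D_x * n_cells[-1] / dx]))
    return n_cells + dt * np.diff(f) / dx

In the full 2D code these two updates would be applied in every cell at every time step dt = dz/c, with the acceleration terms switched on only inside the acceleration region.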
The diffusion causes propagation of particles to neighboring cells. With spatial diffusion considered, we focus on the effects of localized particle acceleration, i.e., the acceleration region is a small accelerator. This acceleration region can occupy either one or multiple cells in the 2D model, but is conceptually equivalent to the acceleration zone in the two-zone model. This acceleration can represent either second-order Fermi acceleration or acceleration by magnetic reconnection, in both of which the acceleration could be restricted to small turbulent regions.
In the 2D model we consider scenarios with a reflecting (closed) boundary condition (zero flux between the surface cells r = r_max, z = 1, z = z_max and the imaginary cells outside of the emission region) and an escape (open) boundary condition (N(γ, r, t) = 0 at the imaginary cells; the reflecting boundary condition is always used for the innermost boundary). In the former case the particle number is conserved, so the system will reach a steady state after a while. In the latter case, the particles keep escaping from the emission region, with an implicit assumption that any particle outside of the emission region has negligible contribution to the emission. This is true for synchrotron radiation, if the magnetic field in the outer regions is much weaker. It is also valid for SSC emission because of the lower synchrotron radiation density resulting from the weaker magnetic field. However, for EC emission, this assumption needs more careful examination. With particles continuously escaping from the emission region, a steady state only exists when there is an additional source of particle pick-up. This may happen at the same locations as the particle acceleration, because the turbulent magnetic field there may trap and isotropize particles from the intergalactic medium. In those cases, the trapped particles may have a Lorentz factor similar to the bulk Lorentz factor of the jet. This is how we choose γ_inj in our models. Effectively, this particle Lorentz factor determines the minimum Lorentz factor of the steady-state EED in the open boundary scenario. In both the closed and open boundary scenarios, initially the emission regions have homogeneous particle distributions that form power-law EEDs with spectral index -1.1 between Lorentz factors 1 and 33. This choice of initial particle distribution only affects the early evolution of the EED; it hardly has any effect on the final steady-state spectra.
In this paper, our discussion focuses on SSC scenarios, even though some results may be generalized to EC scenarios as well, especially the results with closed boundary conditions, or if synchrotron emission is the primary subject of concern. Synchrotron self-absorption is also included in the 2D model, although it turns out to be not very important above 2 GHz in the cases discussed in this paper. All the simulations shown in this work use 20 layers in the longitudinal direction and 15 layers in the radial direction (nz=20 and nr=15). The length/radius ratio (Z/R) is 4/3, so the cell sizes in the longitudinal and radial directions (dz and dr) are the same. The simulation time step is chosen to be the same as the light-crossing time of one cell (dz/c).
2D RESULTS
With this time-dependent 2D model, we study the acceleration and diffusion of particles, as well as their synchrotron and IC emissions. The different cases we study, along with the associated section and figure numbers, are listed in Table 2.
Confined Particle Diffusion
In this section we discuss a closed emission volume, in which the particles diffuse spatially within a confined cylindrical region, while there is no particle exchange/escape at the outermost boundary, i.e. the total particle number is conserved.
The cases discussed in this section cannot be studied with our two-zone model of §2, because in the two-zone model we did not include the backflow of low-energy particles from the diffusion region to the acceleration region. This backflow is important in the closed boundary case, because only with it can a particle number balance be maintained between the acceleration region and the diffusion region.
Localized acceleration in the center
In this case, in a central region of 2x2 cells, particles are continuously accelerated through momentum diffusion. Subsequently, the spatial diffusion in both the z and r directions transports the high-energy particles through the emission region. The time-dependent evolution of this process is shown in the electron energy density maps of Fig. 5 (the energy density is equivalent to the area covered by an EED plot like those in Fig. 3 left).
Figure 6. Density maps of the confined particle diffusion case at simulation time step 1700. They are maps of the differential density N_γ for electrons with Lorentz factors 10³, 10⁴ and 10⁵. The left halves of the images are mirror images of the right halves to illustrate the cylindrical geometry.
The cyan line in the EED of Fig. 7 shows the distribution close to the steady state, after a long simulation time (8500 time steps). However, to save computer time, in most of the other cases in this work (except §4.1.2) we only run the simulation to the point of the black line (1700 time steps). This is enough for our purpose of showing the difference between cases. For §4.2, that time is already more than enough for the simulation to reach a steady state.
The semi-steady total EED forms a broken power-law distribution where the slope before the break is close to 0. At early stages, the total EED shows two peaks, because it contains electrons from different regions, in some of which the particles are accelerated, while in others the particles remain close to their initial distribution. We also show the SED that comes from the entire volume at late stages.
Single-cell EEDs from three sample cells are also shown (Fig. 7 bottom). The inner cell shows that the particles are accelerated to a power-law distribution with a very hard spectrum. The broken power-law EED in the middle cell (cyan) is a result of the subsequent transport and cooling of those particles. In the outer cell, the particles had even more time to cool, and therefore peak at lower energy compared to the middle cell.
This case is used as the benchmark case for the closed boundary scenario. Main parameters are shown in Table 1.
Because the particles are exposed to radiative cooling without further acceleration after they leave the central acceleration region, the highest-energy particles can only survive in a small central region. This region gets smaller with increasing particle energy (Fig. 6). This energy-dependent jet morphology means that, by making observations at different frequencies, we may effectively be observing emission regions of different sizes. The variability at different energies may still be correlated, but there might be significant differences in the light curves. More details of the variability pattern of the jet with localized acceleration will be discussed in a separate publication. The energy dependence also affects the SSC scattering. As we have already discussed in §2, the concentration of the most energetic photons and electrons in the center causes the SSC spectrum to be harder than the synchrotron spectrum. This feature is clearly seen in both the two-zone (Fig. 4 right) and the 2D (Fig. 11 upper-right) models, even though the confined diffusion scenario is quite different from the two-zone model. But it would not be expected in a one-zone model. This energy dependence also implies that even though the highest-energy photons are produced in a very compact region, the lower-energy photons, which may cause pair creation with the high-energy photons, are less concentrated, thus alleviating the compactness constraint (Boutelier et al. 2008).
Slow diffusion
In this case, the particle diffusion is less efficient compared to §4.1.1. This results in a slower rate of particle escape from the acceleration region, and therefore a harder EED (lower-left of Fig. 8).
Similar to the two-zone model with slow particle escape (Fig. 4), the particles in the acceleration region have excess energy that provides an additional bump at the high-energy end of the total EED before the cut-off. The γ-ray spectrum is also extremely hard, with a spectral index of about -0.4 (equivalent to a photon index of -1.4) at 10 GeV. This kind of spectrum generally applies to localized acceleration with slow particle escape ($t_{\rm esc,acc} \gg t_{\rm acc}$). The spectral hardening is another consequence of the energy-dependent inhomogeneity we show in Fig. 6. For slow particle diffusion, in which the high-energy bump is apparent, the power-law slope before the bump should always be close to 2, because it is the radiatively-cooled version of the $p \sim 1$ spectrum resulting from inefficient particle escape.
Diffusive Particle Escape
In this section the boundaries of the emission region are assumed to be open, i.e. the particles diffuse out of the simulation box as if the density outside were zero. This implies a constant escape from the emission region, which is assumed to have a magnetic field stronger than its surroundings, so that the emission from the surrounding region is negligible. We also assume that the acceleration region picks up particles from the intergalactic medium at a constant rate through the turbulent magnetic field. We have chosen the parameters in the two-zone model (§2) so that they are directly comparable to the two cases in this section (§4.2.1 & §4.2.2). However, the total EEDs and SEDs are still slightly different, as can be seen in the comparisons in Figs. 10 & 11. One of the reasons for the difference, for example, is that in the 2D model, the acceleration region contains more than one cell. The particle escape time for the central-most cell is longer than the escape time for the entire acceleration region, therefore some particles at the highest energy can have a harder spectrum in the 2D model. Another example is the posterior consideration of cooling in the acceleration zone of the two-zone model. This simplification leads to the sharp cut-off of the EED and SED at the highest energy in the two-zone model, in contrast to the gradual cut-off in the 2D model.
Localized acceleration in the center
The acceleration region is placed in the center of the emission region, similar to §4.1.1. In addition to the energy density map, we also show the particle number density map in Fig. 9. This illustrates how particles are picked up in the central region and then escape from the outer regions. The total EEDs (Fig. 10 upper-left) at later times overlap with each other, meaning that they have already reached a steady state. The steady EED shows a classic broken power-law distribution, with a cooling break of about 1 at $\gamma = 10^3$–$10^4$. This is consistent with what we saw in the two-zone model (Fig. 3). The SSC spectrum is also observed to be harder than the synchrotron spectrum, a feature already established in the two-zone model and the closed boundary scenario. In Fig. 10 bottom we plot the EEDs for three different cells (in the center, mid-way between the center and the outer boundary, and at the outer boundary, all at mid-way height in the z direction). This case is used as the benchmark case for the open boundary scenario. Main parameters are shown in Table 1.
Slow diffusion
We study the case with less efficient diffusion, similar to the case in §4.1.2, with the open boundary condition. In this scenario, we also observe the development of the high-energy bump in the EED, although it is not obvious in the SED (Fig. 11).
Localized acceleration away from the center
In this section we study the cases where the acceleration region is not located at the center of the emission region. In the first case (Fig. 12), the acceleration region occupies the 3rd and 4th grid cells from the bottom. Except for the location of the accelerator, the other parameters are identical to those in §4.2.1.
Because of the off-center position of the acceleration region, the whole emission region loses particles in different directions at different rates. The particle escape happens on several time scales, and can no longer be described by a single escape time. One consequence of this non-uniform escape is that the spectral break in the EED, which is a result of the competition between cooling and escape, no longer occurs at one specific energy. Instead, the spectrum gradually changes over a large range of electron energy that likely extends to the cut-off energy. If one were to measure the change in spectral index at the break, it would be less than 1, the number expected for radiative cooling. Because of the proximity of the accelerator to the boundary, we also lose particles faster in general. This leads to a softer 'uncooled' spectrum (the one before the break). The EED with all these effects is shown in Fig. 12 bottom, with a comparison to the EED with the acceleration region in the center. An exemplary attempt to measure the spectral change between $\gamma = 2\times10^2$ and $\gamma = 2\times10^4$ gives a break of 0.77. Compared to §4.2.1 the average electron density is adjusted to achieve similar SED and SSC cooling.
In order to test how the proximity of the accelerator to the boundary affects the total EED, we move the acceleration region closer to the boundary, occupying the 2nd and 3rd grid cells from the bottom (Fig. 13). The EED is shown in comparison with the case above. The measured spectral break becomes even smaller (0.71). Therefore we predict that, if the acceleration region is located further away from the center than our model's spatial resolution allows, the measured spectral break in the total EED may be significantly smaller than 1.
Another question we address is whether the location of the particle injection affects the spectral break. To answer this question we conceive a case where the particles are injected in the central region of 2x2 cells, while the acceleration region is located $10^{15}$ cm (2 cells) away from the bottom boundary (Fig. 14). The resulting total EED shows a prominent bump at the injection energy γ = 33. But otherwise the EED is almost identical to that of Fig. 12 (the case with the off-center acceleration region). We conclude that the spectral break is not affected by the location of the particle injection, but only by the location of the particle acceleration. However, this case is unlikely to represent the real picture of what happens in blazar jets, because it predicts a flux excess at radio frequencies, which is not consistent with observations.
Figure 11. Total electron distribution and SED for §4.2.2 (open boundary, slow diffusion). The spectral indices of the SED are -0.57 at 10 eV, -0.56 at 50 eV and -0.48 at 10 GeV (due to the higher noise in the SED of this case, these estimates are made in a slightly broader frequency range compared to the other cases). The thick yellow lines are the EED and SED from the two-zone model with slow particle escape (Fig. 4), plotted here for comparison.
Elongated acceleration region
Besides the location of the accelerator, we also explore the effect of a different geometry of the accelerator. In this case we construct an elongated accelerator of 8x1 cells (Fig. 15 left). The total volume of the accelerator is the same as in §4.2.1. The other parameters are kept unchanged. The resulting EED (Fig. 15 right) has a slightly softer spectral index both below and above the spectral break, while the break remains close to 1. This is caused by the more efficient escape from the accelerator under the current geometry with an unchanged diffusion coefficient. However, without a reference spectrum, it is difficult to distinguish the spectrum of this case from those in §4.2.1. This result indicates that the geometry of the acceleration region has little impact on the EED. The choice of geometry does not affect our findings regarding the spectral breaks, or the spectral hardening with increasing energy, as discussed in previous sections.
Spectral hardening at high energy
In both the closed and open boundary scenarios, we notice the spectral hardening of the EED at high energy if the particle diffusion is sufficiently slow (§4.1.2 & §4.2.2). This is a result of accounting for the acceleration region and the emission region at the same time. The acceleration region, which is small but dominates both the EED and the SED at high energy, has a harder spectrum compared to the emission region because of radiative cooling. This dominance in the synchrotron SED will be even stronger if we consider a stronger magnetic field in the acceleration region. An exception is that, if the magnetic field is so strong that the emission from the acceleration region dominates at all energies, the spectral hardening will no longer be present. If observations can measure the spectral index accurately enough, at a frequency close to but below the high-energy cut-off, we could search for this spectral hardening. Its existence would be evidence for localized particle acceleration and moderate particle escape being at play in AGN jets. A similar spectral hardening is not clearly visible in the SSC spectra. This might be related to the broadness of the seed photon spectra in the SSC scenario. Whether the spectral hardening for γ-rays can be more apparent in an EC scenario will be assessed in our future work. Interestingly, at very high energy (VHE, above 100 GeV) γ-rays, several blazars are already observed to show hardening of the spectra towards higher energy after the correction for extragalactic background light (EBL) absorption (MAGIC Collaboration et al. 2008; Archambault et al. 2014).
Hard SSC spectrum
In all our results, we observe the SSC spectra in the SED to be significantly harder than the synchrotron spectra at corresponding wavelengths. This is caused by the preference of IC scattering between high-energy synchrotron photons and high-energy electrons, because both of them are concentrated close to the accelerator in inhomogeneous jet models. This effect, combined with the hard EED in the cases with slow diffusion, provides a mechanism to produce very hard spectra (photon index harder than -1.5) at GeV energies at least (see §2 and §4.1.2). Since our choice of parameters is based on the SEDs of Mrk 421, a different parameter set might shift those hard spectra to even higher energy. Considering these effects, the inhomogeneous jet model might provide a very important explanation for some of the unexpectedly hard VHE γ-ray spectra measured in several 'high'-redshift VHE blazars after correction for the EBL absorption, and loosen the constraints these observations placed on the EBL (Aharonian et al. 2006).
Electron spectral break
Radiative cooling normally softens the electron spectrum by 1. A spectral break is expected to exist at the electron energy where $t_{\rm cool} = t_{\rm esc,em}$. Below this energy particles do not have enough time to cool before escaping from the emission region, while above this energy the particle spectrum becomes softer because of the cooling effect.
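Under the assumption of synchrotron-dominated cooling, with $t_{\rm cool}(\gamma) = 6\pi m_e c/(\sigma_T B^2 \gamma)$, this condition pins the break at

$$\gamma_b = \frac{6\pi m_e c}{\sigma_T B^2\, t_{\rm esc,em}},$$

so longer escape times (larger emission regions) push the break to lower energies.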
In the SED this break is expected to be 0.5 (Sari et al. 1998). However, the transition between uncooled and cooled spectra does not necessarily present itself as a clean-cut broken power-law. In the open boundary scenario, we found (§4.2.3) that if the acceleration region is not located in the center of the emission region, the EED changes gradually over an energy range, and if measured as a broken power-law, the break may appear less than 1 (or 0.5 in the SED). If the observed power-law break in the SED is much larger than 0.5, it cannot be explained by the cooling/escape break.
Limitation of the simulation
Our simulation volume is divided into 20x15 cells. Higher-resolution simulations are possible but not practical because of the computational cost. The acceleration region in our model is therefore set to be of approximately 1/10 the length scale of the emission region, allowing it to occupy 2x2 cells. This limit on the acceleration region is only a numerical one, not a physical constraint, i.e. the acceleration region can be even smaller, or located closer to the outer part of the emission region in AGN jets. Some of the phenomena observed in the current modeling work can be more significant if a larger size ratio is considered.
CONCLUSIONS
We used our inhomogeneous time-dependent emission models to investigate the localized particle acceleration and spatial diffusion in AGN jets. This work focuses on the steady-state spectrum, and we summarize our findings as follows:
(i) With the acceleration region much smaller than the emission region, the electrons form power-law/broken power-law distributions that adequately reproduce blazar SEDs with reasonable rates of particle escape;
(ii) The inhomogeneity developed in the jet is energy-dependent, with higher-energy particles concentrated in smaller regions;
(iii) The inclusion of particles both inside and outside of the acceleration region causes the EED/SED to show spectral hardening at high energy, if particle diffusion is slow;
(iv) The energy-dependent inhomogeneity causes the SSC spectrum to be harder than the synchrotron spectrum, and this might help to explain the very hard VHE spectra in several blazars;
(v) If the acceleration region is not located at the center of the emission region in an open-boundary scenario, the resulting EED forms an atypical broken power-law distribution with a spectral break less than 1;
(vi) The EED formed is weakly dependent on the geometry of the acceleration region.
APPENDIX A
Let N_d(γ, t) denote the total number spectrum of particles in the diffusion (outer) zone, where escape is possible on time scale τ_esc,d and energy loss on time scale τ_loss = 1/(aγ), as for a dominance of synchrotron losses. The electron spectrum must satisfy the continuity equation

∂N_d(γ, t)/∂t − ∂/∂γ [ γ N_d(γ, t)/τ_loss ] + N_d(γ, t)/τ_esc,d = Q = N_a(γ, t)/τ_esc,a .
Here, we have already indicated that the source term Q is given by the rate of particle leakage out of the acceleration zone, for which N_a(γ, t) denotes the particle spectrum and τ_esc,a the escape time.
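For orientation, the steady-state limit of this equation can be solved in closed form above the cooling break; the following standard manipulation (our own sketch, not reproduced from the source) makes the softening by one power explicit:

```latex
% Steady state (\partial_t N_d = 0), using \gamma/\tau_{\rm loss} = a\gamma^2:
\frac{d}{d\gamma}\left[a\gamma^{2}N_d(\gamma)\right]
   = \frac{N_d(\gamma)}{\tau_{\rm esc,d}} - Q(\gamma).
% Integrating from \gamma to \infty, with a\gamma^2 N_d \to 0 there, and
% neglecting escape where t_{\rm cool} \ll \tau_{\rm esc,d}:
N_d(\gamma) \simeq \frac{1}{a\gamma^{2}}\int_{\gamma}^{\infty} Q(\gamma')\,d\gamma' ,
% so a power-law source Q \propto \gamma^{-p} yields N_d \propto \gamma^{-p-1},
% one power softer than the uncooled spectrum.
```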
"year": 2015,
"sha1": "0c19d761a7f31cd994321aac29e3005877b2d5ed",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/447/1/530/4927486/stu2438.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "0c19d761a7f31cd994321aac29e3005877b2d5ed",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Frame potentials and the geometry of frames
This paper concerns the geometric structure of optimizers for frame potentials. We consider finite, real or complex frames and rotation or unitarily invariant potentials, and mostly specialize to Parseval frames, meaning the frame potential to be optimized is a function on the manifold of Gram matrices belonging to finite Parseval frames. Next to the known classes of equal-norm and equiangular Parseval frames, we introduce equidistributed Parseval frames, which are more general than the equiangular type but have more structure than equal-norm ones. We also provide examples where this class coincides with that of Grassmannian frames, the minimizers for the maximal magnitude among inner products between frame vectors. These different types of frames are characterized in relation to the optimization of frame potentials. Based on results by Łojasiewicz, we show that the gradient descent for a real analytic frame potential on the manifold of Gram matrices belonging to Parseval frames always converges to a critical point. We then derive geometric structures associated with the critical points of different choices of frame potentials. The optimal frames for families of such potentials are thus shown to be equal-norm, or additionally equipartitioned, or even equidistributed.
INTRODUCTION
The frame-based expansion of vectors in Hilbert spaces has become an increasingly popular tool in many areas of mathematics [7], science and engineering [20]. Frames have many properties comparable to orthonormal bases, but are not required to form linearly independent sets, and therefore offer more flexibility to accommodate specific design requirements. Since the early days of frame theory [10], structured frames have been given special consideration. The structure can be of an algebraic nature, for example when the frame is constructed with the help of group representations [10,15,8,16], or it can be present in the form of geometric conditions on the frame vectors. Such geometric conditions often result from frame design problems, for example when frames are used as analog codes for erasures [6,17,4,19], or for beam forming in electrical engineering [23,27], or for quantum state tomography [33,26]. Equiangular tight frames are optimal for many of these applications. Such frames are most similar in character to orthonormal bases in that they provide simple expansions for vectors, the norms of all the frame vectors are identical and the inner products between any two frame vectors have the same magnitude. The optimality can be expressed conveniently in terms of a so-called frame potential.
It is appealing that optimization principles have such a simple geometric consequence. The characterization of equiangular tight frames was the motivation to implement numerical searches for equiangular Parseval frames via the minimization of frame potentials. However, it is known that equiangular tight frames do not exist for all numbers of frame vectors and dimensions of the Hilbert space [31,32]. Thus, one is left with unanswered questions: Is there a more general structure for Parseval frames that includes equiangular ones as a special case and characterizes optimizers for certain frame potentials? Moreover, is there a program for the optimization of these frame potentials which is guaranteed to converge? The application-oriented optimality principles lead from the special case of equiangular tight frames to the class of Grassmannian frames. These minimize the maximum inner product among frame vectors subject to certain constraints [30]. However, there was no clear indication whether these frames exhibit special structures that might help with their design [17].
We refer to the largest such A and the smallest such B as the lower and upper frame bounds, respectively. In the case that A = B, we call F a tight frame, and whenever A = B = 1, then F is a Parseval frame. If ‖f_j‖ = ‖f_l‖ for all j, l ∈ J, then F is an equal-norm frame. If F is an equal-norm frame and there exists a C ≥ 0 such that |⟨f_j, f_l⟩| = C for all j, l ∈ J with j ≠ l, then we say F is equiangular. The analysis operator of the frame is the map V : H → ℓ²(J) given by (Vx)_j = ⟨x, f_j⟩. Its adjoint, V*, is the synthesis operator, which maps a ∈ ℓ²(J) to V*(a) = Σ_{j∈J} a_j f_j.
The frame operator is the positive, self-adjoint invertible operator S = V * V on H and the Gramian is the operator G = V V * on ℓ 2 (J).
In this paper, we focus on the case that H = F K , where F = C or R, K is a positive integer, and always choose the canonical sesquilinear inner product. Thus, K always denotes the dimension of H over the field F. Furthermore, we restrict ourselves to finite frames indexed by J = Z N , where N ≥ K, and reserve the letter N to refer to the number of frame vectors in the frame(s) under consideration. When the group structure of Z N is not important, we also number the frame vectors with {1, 2, . . . , N }, with the tacit understanding that N ≡ 0 (mod N ). Since this paper is mostly concerned with finite Parseval frames, we call a Parseval frame for F K consisting of N vectors an (N, K)-frame.
If F is an (N, K)-frame, then V is an isometry, since ‖Vx‖²₂ = Σ_{j=1}^N |⟨x, f_j⟩|² = ‖x‖² holds for all x ∈ F^K. Hence, we obtain the reconstruction identity x = Σ_{j=1}^N ⟨x, f_j⟩ f_j, or in terms of the analysis and synthesis operators, x = V*Vx. In this case, we also have that the Gramian G = VV* is a rank-K orthogonal projection, because G*G = VV*VV* = VV* = G and the rank of G equals the trace, tr(G) = K.
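These identities are easy to confirm numerically with a standard example, the so-called Mercedes-Benz (3, 2)-frame rescaled to be Parseval (a minimal sketch of our own; this particular frame is not discussed in the text):

```python
import numpy as np

N, K = 3, 2
angles = 2 * np.pi * np.arange(N) / N
F = np.sqrt(K / N) * np.column_stack([np.cos(angles), np.sin(angles)])  # rows f_j

V = F                  # analysis operator as an N x K matrix, (Vx)_j = <x, f_j>
S = V.T @ V            # frame operator V*V on R^K
G = V @ V.T            # Gramian V V* on R^N

assert np.allclose(S, np.eye(K))       # Parseval: V is an isometry
assert np.allclose(G @ G, G)           # the Gramian is an orthogonal projection
assert np.isclose(np.trace(G), K)      # its rank equals its trace, K
print(np.round(G, 3))                  # diagonal 2/3, off-diagonal -1/3
```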
Many geometric properties of frames discussed in this paper only depend on the inner products between frame vectors and on their norms, which are collected in the Gramian. This means that most of the results presented hereafter refer to equivalence classes of frames. Proposition. Two frames F and F′ indexed by J have identical Gramians if and only if they are unitarily equivalent. Proof. Assuming G is the Gramian for the frame F as well as for the frame F′, then G = VV* = V′(V′)*, where V and V′ are the analysis operators belonging to F and F′, respectively. By the polar decomposition, V = (VV*)^{1/2}U = G^{1/2}U and V′ = (V′(V′)*)^{1/2}U′ = G^{1/2}U′ with isometries U and U′ from H to ℓ²(J); thus V* = U*U′(V′)*. By the frame property, the range of U is identical to that of U′ and that of G, so Q = U*U′ is unitary, which shows that V*e_j = Q(V′)*e_j for each canonical basis vector e_j in ℓ²(J), or equivalently, f_j = Qf′_j for all j ∈ J. Conversely, if F and F′ are unitarily equivalent, then it follows directly that the Gramians of both frames are identical.
Special emphasis is given to the Gram matrices of Parseval frames. By the spectral theorem and the condition G² = G, the set of Gram matrices of Parseval frames for F^K is precisely the set of rank-K orthogonal projections in F^{N×N}.
2.4. Definition. We define for F = R or C the set M_{N,K} = {G ∈ F^{N×N} : G* = G, G² = G, tr(G) = K}, the Gram matrices of Parseval frames for F^K. This subset of the N × N Hermitians is, in fact, a real analytic submanifold (see Appendix A for a proof of this statement).
Equidistributed frames.
A type of frame that emerged in our study of frame potentials is what we call equidistributed. These frames include many structured frames that have already appeared in the literature: equiangular Parseval frames, mutually unbiased bases, and group frames.
2.5. Definition. Let F = {f_j}_{j∈Z_N} be an (N, K)-frame and let G be its Gramian. The frame F is called equidistributed if for each pair p, q ∈ Z_N there exists a permutation π on Z_N such that |G_{j,p}| = |G_{π(j),q}| for all j ∈ Z_N. In this case, we also say that G is equidistributed.
In other words, F is equidistributed if and only if the magnitudes in any column of the Gram matrix repeat in any other column, up to a permutation of their position. For Parseval frames, equidistribution implies that all frame vectors have the same norm.
2.6. Proposition. Every equidistributed (N, K)-frame is equal-norm, with ‖f_j‖² = K/N for all j ∈ Z_N. Proof. By assumption, for each p ∈ Z_N there exists π such that |G_{j,p}| = |G_{π(j),1}| holds for the entries of the associated Gram matrix G for all j ∈ Z_N, and thus by the Parseval identity G_{p,p} = Σ_{j∈Z_N} |G_{j,p}|² = Σ_{j∈Z_N} |G_{j,1}|² = G_{1,1} for every p. The trace condition Σ_{j=1}^N G_{j,j} = Σ_{j=1}^N ‖f_j‖² = K for the Gram matrices of Parseval frames then implies that each vector has the claimed norm.
Below are a few examples to illustrate our definition. To begin with, any equiangular Parseval frame is equidistributed.
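The defining permutation condition is also easy to test numerically: it amounts to every column of |G| carrying the same multiset of magnitudes. A minimal sketch (the helper name is our own):

```python
import numpy as np

def is_equidistributed(G, tol=1e-10):
    mags = np.sort(np.abs(G), axis=0)                 # sort magnitudes per column
    return bool(np.all(np.ptp(mags, axis=1) < tol))   # all columns share one multiset

# Circulant Gram matrices are equidistributed; e.g. the (3,2) example above:
j, l = np.meshgrid(np.arange(3), np.arange(3), indexing="ij")
G = (2 / 3) * np.cos(2 * np.pi * (j - l) / 3)
print(is_equidistributed(G))   # True
```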
2.7. Example. Equiangular Parseval frames. Let G be the Gram matrix of an equiangular (N, K)-frame. Since the magnitudes of the entries of any column of G consist of N − 1 instances of C_{N,K} and one instance of K/N, G is equidistributed. A class of frames with close similarities to equiangular Parseval frames is called Mutually Unbiased Bases [28,18]. We show that a slightly more general class, Mutually Unbiased Basic Sequences, is equidistributed. In this case, the frame vectors are a collection of orthonormal sequences that are mutually unbiased. To include Parseval frames, we allow for an overall rescaling of the norms.
2.8. Example. Mutually Unbiased Basic Sequences. Let N = ML and let G be such that the matrix Q whose entries are Q_{j,l} = |G_{j,l}| is the sum of Kronecker products of the form Q = b I_M ⊗ I_L + c (J_M − I_M) ⊗ J_L, where b > 0, c ≥ 0, the matrices I_M and I_L are the M × M and L × L identity matrices, and J_M and J_L are the matrices of corresponding size whose entries are all 1. Each row of G has one entry of magnitude b, L − 1 vanishing entries and (M − 1)L entries of magnitude c, so G is equidistributed. We also provide a concrete nontrivial example of such a (6, 4)-frame with M = 3 and L = 2.
Let ω = e^{2πi/8}, a primitive 8th root of unity, let λ = 1/√18, and let G be the corresponding 6 × 6 Gram matrix. One can verify that G = G* = G² and clearly tr(G) = 4. Thus, G ∈ M_{6,4}. Since the magnitudes of the entries of every column consist of one instance of 0, one instance of 2/3, and four instances of λ, it follows that G is equidistributed.
2.9. Example. Group frames. Let Γ be a finite group of size N = |Γ| and π : Γ → B(H) be an orthogonal or unitary representation of Γ on the real or complex K-dimensional Hilbert space H, respectively. Consider the orbit F = {f_g = π(g)f_e}_{g∈Γ} generated by a vector f_e of norm ‖f_e‖ = √(K/N), indexed by the unit e of the group. If F is a Parseval frame F = {f_g}_{g∈Γ}, then F is equidistributed, because ⟨f_g, f_h⟩ = ⟨π(h⁻¹g)f_e, f_e⟩ and left multiplication by h⁻¹ acts as a permutation on the group elements.
To have the Parseval property, it is sufficient if the representation is irreducible, but there are also examples where this is not the case, such as the harmonic frames [14] which are obtained with the representation of the abelian group (Z N , +) on H.
2.10. Example. Tensor Products of Equidistributed Frames. Let 1 ≤ K₁ < N₁ and 1 ≤ K₂ < N₂ be integers, let G₁ ∈ M_{N₁,K₁} and G₂ ∈ M_{N₂,K₂} be equidistributed, and consider the Kronecker product G = G₁ ⊗ G₂. Index the rows of G by p = p₁N₂ + p₂ and q = q₁N₂ + q₂, and let Q, Q₁, and Q₂ denote the matrices whose entries are the absolute values of the entries of G, G₁, and G₂, respectively. Since G₁ and G₂ are equidistributed, row p of Q is of the form ρ_p = r_{p₁} ⊗ X, where r_{p₁} is row p₁ of Q₁ and X is row p₂ of Q₂, and row q of Q is of the form ρ_q = r_{q₁} ⊗ Y, where Y is row q₂ of Q₂. Since G₁ and G₂ are equidistributed, there is a permutation π₁ such that |(Q₁)_{p₁,j}| = |(Q₁)_{q₁,π₁(j)}| for each j ∈ Z_{N₁}, and similarly the magnitudes of the entries in Y are obtained from those in X by applying a permutation π₂ to the indices. Thus, the magnitudes of the entries of ρ_q are a permutation of those of ρ_p, so G is equidistributed.
Grassmannian Parseval frames and equidistribution.
It is known that equiangular Parseval frames do not exist for all choices of K and N ≥ K. In the absence of such frames, perhaps the best alternative is known as Grassmannian frames [30]. These minimize the maximal magnitude of the inner products between any two frame vectors, subject to certain constraints, for example among equal-norm frames. Here, we consider such minimizers among the family of Parseval frames. As before, we express this property of frames in terms of the corresponding Gram matrices.
2.11. Definition. Let G be the Gram matrix for any frame consisting of N vectors over F^K and let µ(G) = max_{j≠l} |G_{j,l}|. A frame F is called a Grassmannian Parseval frame if it is an (N, K)-frame and if its Gram matrix G satisfies µ(G) = min{µ(G′) : G′ ∈ M_{N,K}}. Since the space of rank-K orthogonal projections in F^{N×N} is compact, the minimum on the right-hand side exists by the continuity of µ, and thus Grassmannian Parseval frames exist for any N and K.
In the usual topology of F^{N×N}, the Gram matrices belonging to equal-norm frames whose vectors have norm √(K/N) form a paracompact set. Moreover, the subset of Gram matrices belonging to equal-norm (N, K)-frames is compact and non-empty for each K and N ≥ K. By the continuity of µ and the compactness, minimizers for µ always exist over this restricted space. We call such minimizers Grassmannian equal-norm Parseval frames.
2.12. Definition. Let Ω_{N,K} denote the set of Gram matrices corresponding to equal-norm frames of N vectors for F^K whose vectors have norm √(K/N). A frame F is called a Grassmannian equal-norm Parseval frame if it is an equal-norm (N, K)-frame and if its Gram matrix G satisfies µ(G) = min{µ(G′) : G′ ∈ M_{N,K} ∩ Ω_{N,K}}. By the set inclusion M_{N,K} ∩ Ω_{N,K} ⊂ M_{N,K}, a Grassmannian Parseval frame which is equal-norm is a Grassmannian equal-norm Parseval frame. Similarly, by M_{N,K} ∩ Ω_{N,K} ⊂ Ω_{N,K}, a Grassmannian equal-norm frame which is Parseval is also a Grassmannian equal-norm Parseval frame.
In [17], Grassmannian equal-norm Parseval frames are shown to be the optimal frames when frames are used as analog codes and up to two frame coefficients are erased in the course of a transmission. Based on the numerical construction of optimal frames for R 3 , they did not seem to have a simple geometric structure, apart from the case of equiangular Parseval frames. Nevertheless, it is intriguing that there are other dimensions for which we can find equal-norm Parseval frames that are not equiangular, but equidistributed. We provide examples for the case where F = R.
2.13. Example. Let K = 2 and N > 3. For j ∈ Z_N, let f_j = √(2/N) (cos(πj/N), sin(πj/N)); then F = {f_j : j ∈ Z_N} is easily verified to be a Parseval and equidistributed frame, but it is not equiangular. Furthermore, as shown in [2], this frame is a minimizer of µ over the space of equal-norm frames, so it must also be a minimizer of µ over the intersection of the equal-norm and Parseval frames. Therefore, F is a Grassmannian equal-norm Parseval frame which is equidistributed.
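The claimed properties are easy to confirm numerically. The sketch below uses the explicit vectors written above, which are our reading of the garbled formula in the extracted text; the qualitative checks do not depend on the phase convention:

```python
import numpy as np

N, K = 7, 2
t = np.pi * np.arange(N) / N
F = np.sqrt(2 / N) * np.column_stack([np.cos(t), np.sin(t)])

G = F @ F.T
off = np.abs(G[~np.eye(N, dtype=bool)])
print("Parseval:   ", np.allclose(F.T @ F, np.eye(K)))          # True
print("equal-norm: ", np.allclose(np.diag(G), K / N))           # True
print("equiangular:", bool(np.isclose(off.max(), off.min())))   # False for N > 3
```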
2.14. Example. Let K = 4 and N = 12. Consider the (12, 4)-frame F with analysis operator V whose vectors are given by the columns of the corresponding synthesis matrix, where a = 1/12. This is an equal-norm sequence of vectors which can be grouped into 3 sets of 4 orthogonal vectors, thus it is straightforward to verify that this is a Parseval frame for R⁴. In addition, inspecting inner products between the vectors shows that they form, up to an overall scaling of the norms, mutually unbiased bases. Thus F is equidistributed (and equal-norm). To see that this is a Grassmannian equal-norm Parseval frame, we note that showing that a frame is Grassmannian equal-norm is equivalent to showing that it corresponds to an optimal sphere packing; that is, we desire the absolute value of the smallest angle to be as large as possible. With this in mind, we compute that the absolute values of the sines of all possible angles between frame vectors belong to the set {1, √3/2}. The orthoplex bound (see [9] for details) shows us that √3/2 is indeed the largest possible value that the sine of the smallest angle in such a frame can take, thereby verifying that this is a Grassmannian equal-norm Parseval frame.
BOUNDS FOR FRAME POTENTIALS AND STRUCTURED FRAMES
Special classes of frames are characterized with the help of inequalities for frame potentials, which relate to Frobenius norms. This is the case for equal-norm Parseval frames and for equiangular Parseval frames.
3.1. Definition. The p-th frame potential of a frame F = {f_j}_{j=1}^N for a real or complex Hilbert space H is given by Φ_p(F) = Σ_{j,l=1}^N |⟨f_j, f_l⟩|^{2p}. Benedetto and Fickus showed that among frames {f_j}_{j=1}^N whose vectors all have unit norm, the tight frames are minimizers for Φ₁ [1]. We adjust the norms to obtain a characterization of equal-norm Parseval frames.
3.2. Theorem. Let F = {f_j}_{j=1}^N be a frame for F^K with ‖f_j‖² = K/N for all j ∈ Z_N; then Φ₁(F) ≥ K, and equality holds if and only if F is Parseval.
Proof. The assumption on the norms is equivalent to the condition on the diagonal entries, G_{j,j} = K/N, of the Gram matrix G = VV* of the frame F. By the Cauchy-Schwarz inequality with respect to the Hilbert-Schmidt inner product, Φ₁(F) = tr(G²) ≥ (tr(GP))²/tr(P²), where P is the orthogonal projection onto the range of G in ℓ²(Z_N). However, tr(GP) = tr(G) = K = tr(P) = tr(P²), thus the claimed lower bound follows. Equality holds if and only if G and P are collinear, which is the case precisely when F is Parseval.
In analogy with the characterization of tight frames, if equiangular tight frames exist among unit-norm frames, then they are minimizers for Φ p if p > 1 [26,25], see also [32]. Again, we present this result with rescaled norms to replace tight frames by Parseval frames.
3.3. Theorem. Let F = {f_j}_{j=1}^N be a frame for F^K, with F = R or C, and ‖f_j‖² = K/N for all j ∈ Z_N, and let p > 1; then Φ_p(F) ≥ N (K/N)^{2p} + N(N−1) C_{N,K}^{2p}, where C_{N,K} = √(K(N−K)/(N²(N−1))), and equality holds if and only if F is an equiangular Parseval frame.
Proof. With the elementary properties of equal-norm frames and Jensen's inequality, we obtain the bound Σ_{j≠l} |G_{j,l}|^{2p} ≥ N(N−1) ( Σ_{j≠l} |G_{j,l}|² / (N(N−1)) )^p. Expressing this in terms of Φ₁ and using the preceding theorem then gives the claimed lower bound. Moreover, equality holds in the Cauchy-Schwarz and Jensen inequalities if and only if F is Parseval and there is a constant C ≥ 0 with |G_{j,l}| = C for all j ≠ l. If equality holds, then inspecting the proof shows that the magnitude of the off-diagonal entries of the Gram matrix is the constant C_{N,K}, see also [13], [17], and [30], which we record for further use. By definition, the value of Φ_p(F) only depends on the entries of the corresponding Gram matrix. Thus, the characterizations of equal-norm (N, K)-frames and of equiangular (N, K)-frames are implicitly statements about equivalence classes of frames.
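A short numerical check of this bound, with the formula for C_{N,K} as reconstructed above (values are illustrative):

```python
import numpy as np

def frame_potential(G, p):
    return np.sum(np.abs(G) ** (2 * p))

N, K, p = 7, 2, 2
C = np.sqrt(K * (N - K) / (N**2 * (N - 1)))
bound = N * (K / N) ** (2 * p) + N * (N - 1) * C ** (2 * p)

# equal-norm (7,2)-frame from Example 2.13; it is not equiangular,
# so the inequality should be strict
t = np.pi * np.arange(N) / N
F = np.sqrt(2 / N) * np.column_stack([np.cos(t), np.sin(t)])
print(frame_potential(F @ F.T, p), ">", bound)
```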
Corollary. If two frames have identical Gram matrices, then their frame potentials Φ_p agree for every p ≥ 1.
For this reason, we consider instead of Φ p the corresponding function of Gram matrices. Moreover, it is for our purposes advantageous to consider the compact manifold M N,K consisting of Parseval frames instead of the open manifold of equal-norm frames. In this setting, we have analogous theorems which characterize the equal-norm case and the equiangular case.
Theorem. Let G ∈ M_{N,K}; then Σ_{j=1}^N G_{j,j}² ≥ K²/N, and equality holds if and only if G_{j,j} = K/N for each j ∈ Z_N.
Proof. We know that Σ_{j=1}^N G_{j,j} = K, so the Cauchy-Schwarz inequality gives Σ_{j=1}^N G_{j,j}² ≥ (Σ_{j=1}^N G_{j,j})²/N = K²/N, and equality is achieved if and only if G_{j,j} = G_{l,l} for all j, l ∈ Z_N. By summing the diagonal entries of G, we then obtain N G_{j,j} = K for each j ∈ Z_N.
In terms of (N, K)-frames {f_j}_{j=1}^N, the function estimated here is Σ_{j=1}^N ‖f_j‖⁴. Bodmann and Casazza called this a frame energy. They showed that if a Parseval frame has a sufficiently small energy, then under certain additional conditions an equal-norm Parseval frame can be found in its vicinity [3].
Next, we state the characterization of equiangular Parseval frames. Since this was only published in a thesis, we are grateful for the opportunity to present the proof here.

Theorem. [Elwood [12]] Let G ∈ M_{N,K}; then Φ₂(G) = Σ_{j,l=1}^N |G_{j,l}|⁴ ≥ N (K/N)⁴ + N(N−1) C_{N,K}⁴, and equality holds if and only if G_{j,j} = K/N and |G_{j,l}| = C_{N,K} for each j ≠ l.
Proof. We recall that, by the fact that G is an orthogonal rank-K projection, one has Σ_{j,l=1}^N |G_{j,l}|² = tr(G²) = K and Σ_{j=1}^N G_{j,j} = tr(G) = K. With the help of these identities, we express the difference between the two sides of the inequality as a sum of quadratic expressions. In this form it is manifest that this quantity is non-negative and that it vanishes if and only if G is a rank-K orthogonal projection with G_{j,j} = K/N for all j and with |G_{j,l}| = C_{N,K} for all j ≠ l.
It is natural to ask whether a characterization of Grassmannian Parseval frames in terms of frame potentials exists. In order to formulate this in a convenient manner, we first introduce another type of frame potential.
Definition. For η > 0 and the Gram matrix G of a frame with N vectors, let Φ^η_od(G) = Σ_{j,l=1}^N (1 − δ_{j,l}) e^{η|G_{j,l}|²}, where the Kronecker symbol δ_{j,l} vanishes if j ≠ l and contributes δ_{j,j} = 1 otherwise.
Although for a fixed value of η a Grassmannian Parseval frame may fail to be a minimizer for Φ^η_od, the family of frame potentials {Φ^η_od}_{η>0} characterizes them. 3.9. Proposition. Let G ∈ M_{N,K}; then lim_{η→∞} (1/η) ln Φ^η_od(G) = µ(G)². Moreover, if G′ belongs to a Grassmannian Parseval frame and G″ ∈ M_{N,K} does not, then there exists η₀ > 0 such that Φ^η_od(G′) < Φ^η_od(G″) for all η ≥ η₀. Proof. The bounds e^{ηµ(G)²} ≤ Φ^η_od(G) ≤ N(N−1) e^{ηµ(G)²} imply the stated limit. Moreover, if G′ is the Gram matrix of a Grassmannian Parseval frame and G″ is not, then µ(G″)² = µ(G′)² + ǫ for some ǫ > 0, and if η > ln(N(N−1))/ǫ, then Φ^η_od(G′) ≤ N(N−1) e^{ηµ(G′)²} < e^{ηµ(G″)²} ≤ Φ^η_od(G″). Although µ is continuous on M_{N,K}, it is not globally differentiable. Thus, locating even local minima is difficult. Fortunately, we can reduce the minimization problem for µ to finding minimizers for a sequence of frame potentials.
3.10. Proposition. Let (η_m)_{m∈N} be a sequence in (0, ∞) with η_m → ∞, for each m let G(m) be a minimizer of Φ^{η_m}_od on M_{N,K}, and suppose that G(m) converges to G ∈ M_{N,K}; then µ(G) = lim_{m→∞} µ(G(m)) = min{µ(G′) : G′ ∈ M_{N,K}}, where the first equality follows from continuity of the max function. This shows that G belongs to a Grassmannian Parseval frame.
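The limit in Proposition 3.9 is visible numerically even for moderate η. The sketch below uses the exponential off-diagonal potential in the form assumed above (our reconstruction of the garbled definition):

```python
import numpy as np

def mu(G):
    N = G.shape[0]
    return np.abs(G[~np.eye(N, dtype=bool)]).max()

def phi_od(G, eta):
    N = G.shape[0]
    off = np.abs(G[~np.eye(N, dtype=bool)])
    return np.sum(np.exp(eta * off**2))

t = np.pi * np.arange(7) / 7
F = np.sqrt(2 / 7) * np.column_stack([np.cos(t), np.sin(t)])
G = F @ F.T
for eta in (10.0, 100.0, 1000.0):
    print(eta, np.log(phi_od(G, eta)) / eta, "->", mu(G) ** 2)
```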
If the off-diagonal sum potential is properly complemented by terms for the diagonal entries of G, then a simple characterization of equiangular Parseval frames can be derived: the combined potential is bounded below on M_{N,K}, and equality holds if and only if G belongs to an equiangular Parseval frame.
Proof. We use Jensen's inequality to obtain a lower bound for the potential. Using the Parseval property, Σ_{j,l=1}^N |G_{j,l}|² = K, then gives the claimed bound. Equality holds in Jensen's inequality if and only if the average is taken over a constant. This implies that the diagonal entries equal G_{j,j} = K/N and the magnitude of the off-diagonal entries equals C_{N,K}.
To conclude this section, we show that equidistributed frames can be characterized in terms of families of frame potentials based on exponential ones. To prepare this, we introduce the notion of a frame being α-equipartitioned: for α > 0, the Gramian G of an (N, K)-frame is called α-equipartitioned if the exponential row sums Σ_{j∈Z_N} e^{α|G_{x,j}|²} have the same value for every x ∈ Z_N.
3.14. Proposition. Let G = (G_{j,l})_{j,l=1}^N be the Gramian of an (N, K)-frame F, and let I ⊆ (0, ∞) be any open interval; then G is equidistributed if and only if G is α-equipartitioned for all α ∈ I.
Proof. If G is equidistributed, the magnitudes of every column are the same as those of any other column, up to permutation. Thus, by the definition of α-equipartitioning, it is trivial to verify that G is α-equipartitioned for all α ∈ I.
Conversely, consider for each x ∈ Z_N the function f_x(α) = Σ_{j∈Z_N} e^{α|G_{x,j}|²}. If f_x(α) = f_y(α) for all α ∈ I, then since f_x and f_y are both analytic functions which agree on an open interval, it follows by the principle of analytic continuation that they must agree on all of (0, ∞). In particular, their strongest growth rates as α → ∞ must coincide. If the maximum magnitude in row x is not equal to the maximum magnitude of row y, then this equation cannot hold. Similarly, if these maximal magnitudes did not occur with the same multiplicity in each row, then again the equation would not be possible. Thus, we can remove the index sets M_x and M_y corresponding to the maximal magnitudes in rows x and y from the sum in the definition of f_x and f_y to obtain the new identity Σ_{j∉M_x} e^{α|G_{x,j}|²} = Σ_{j∉M_y} e^{α|G_{y,j}|²} for all α ∈ (0, ∞). Repeating the procedure of isolating the strongest growth rate shows that every possible magnitude that appears in row x must agree in multiplicity with every possible magnitude that appears in row y. In other words, the magnitudes in row x are just a permutation of those in row y. Since x and y were arbitrary, we conclude that G is equidistributed.
THE GRADIENT DESCENT ON M_{N,K}
In this section, we first show that following a gradient descent associated with a real analytic frame potential always converges to a critical point. This depends heavily on results by Łojasiewicz. To apply these results, we use that when F = C (respectively F = R), the manifold M_{N,K} is embedded in the (linear) manifold of Hermitian (respectively symmetric) N × N matrices equipped with the Hilbert-Schmidt norm, which induces a topology on M_{N,K} generated by the open balls B(X, σ) = {Y ∈ M_{N,K} : ‖Y − X‖ < σ} of radius σ > 0 centered at each X ∈ M_{N,K}. Moreover, the Hilbert-Schmidt norm induces a Riemannian structure on the tangent space TM_{N,K}. Via the embedding, the tangent space T_{G₀}M_{N,K} at G₀ ∈ M_{N,K} is identified with a subspace of the Hermitian (respectively symmetric) matrices, and the Riemannian metric is the real inner product (X, Y) ↦ X · Y ≡ tr(XY) = tr(XY*) restricted to the tangent space.
We also recall that the gradient of a differentiable function F on M_{N,K} is the vector field ∇F which satisfies the identity (d/dt) F(γ(t))|_{t=0} = ∇F(G₀) · X for each G₀ ∈ M_{N,K} and each curve γ ∈ C¹(R, M_{N,K}) with γ(0) = G₀ and dγ(0)/dt = X.
The frame potentials we have defined on M_{N,K} are all given in terms of real analytic functions of the matrix entries. 4.1. Theorem. ([21], [22]; see also [24]) Let Ω be an open subset of R^d and F : Ω → R real analytic. For any x ∈ Ω there exist C, σ > 0 and θ ∈ (0, 1/2] such that for all y ∈ B(x, σ) ∩ Ω, |F(y) − F(x)|^{1−θ} ≤ C ‖∇F(y)‖.
4.2. Corollary. Let M be a d-dimensional real analytic manifold with a Riemannian structure, let G₀ ∈ Ω ⊆ M and let W : Ω → R be real analytic; then there exist an open neighborhood U of G₀ in Ω and constants C > 0 and θ ∈ (0, 1/2] such that for all G ∈ U, |W(G) − W(G₀)|^{1−θ} ≤ C ‖∇W(G)‖. Proof. Since the manifold is real analytic, after choosing a chart Γ, the function W expressed in local coordinates is real analytic, and the Riemannian metric is equivalent to the Euclidean metric on a sufficiently small coordinate neighborhood. The combination of the Łojasiewicz inequality in local coordinates with this norm equivalence gives the claimed bound, valid in the neighborhood U.
Convergence of the gradient descent.
It is well known that the Łojasiewicz inequality can be used to prove convergence of gradient flows induced by analytic cost functions on R^d. Here we provide a proof of convergence in our setting, adapted from [24]. 4.3. Theorem. Let W : M_{N,K} → R be real analytic and let γ : R⁺ → M_{N,K} be a solution of the gradient flow equation γ̇(t) = −∇W(γ(t)); then γ(t) converges to a critical point of W as t → ∞. Proof. First, we observe that W(γ(t)) is a nonincreasing function, since (d/dt) W(γ(t)) = ∇W(γ(t)) · γ̇(t) = −‖∇W(γ(t))‖² ≤ 0. Furthermore, since M_{N,K} is compact, there must be some point G₀ ∈ M_{N,K} along with an increasing sequence t_n in R, t_n → ∞, which satisfies γ(t_n) → G₀. Thus, the continuity of W together with the fact that t ↦ W(γ(t)) is nonincreasing implies that lim_{t→∞} W(γ(t)) = W(G₀).
Since adding a constant to our energy function will not alter the gradient flow, let us assume without loss of generality that W (G 0 ) = 0 and W (γ(t)) ≥ 0 for all t ≥ 0.
Henceforth, we will consider the case where W(γ(t)) > 0 for all t ≥ 0. Due to Corollary 4.2, since W is real analytic in some neighborhood of G₀, there exist C, σ > 0 and θ ∈ (0, 1/2] such that W(γ(t))^{1−θ} ≤ C ‖∇W(γ(t))‖ for all t ≥ 0 with γ(t) ∈ B(G₀; σ) ∩ M_{N,K}. Let ǫ ∈ (0, σ). Then there exists a sufficiently large t₀ ∈ R⁺ with γ(t₀) ∈ B(G₀; ǫ) and (C/θ) W(γ(t₀))^θ < σ − ǫ. Let t₁ = sup{t ≥ t₀ : γ(s) ∈ B(G₀; σ) for all s ∈ [t₀, t]}. For t ∈ [t₀, t₁), the Łojasiewicz inequality yields (d/dt) W(γ(t))^θ = −θ W(γ(t))^{θ−1} ‖∇W(γ(t))‖² ≤ −(θ/C) ‖γ̇(t)‖. Since this inequality holds for any t ∈ [t₀, t₁), it follows by integrating both sides that for any t ∈ [t₀, t₁] we have ∫_{t₀}^t ‖γ̇(s)‖ ds ≤ (C/θ) W(γ(t₀))^θ < σ − ǫ, so γ(t) remains in B(G₀; σ). This shows that t₁ = +∞, so that ∫_{t₀}^∞ ‖γ̇(s)‖ ds < ∞. Thus, we see that ‖γ̇‖ ∈ L¹(R⁺), and conclude that γ(t) → G₀ as t → ∞.
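In practice, the gradient flow can be imitated by a simple discrete descent with a retraction back onto M_{N,K}. The following is a minimal sketch of our own (not code from the paper): it descends the frame energy Σ_j G_{j,j}² of the earlier theorem and typically ends at an equal-norm Parseval frame, i.e., all diagonal entries near K/N:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, step = 6, 3, 0.05

def retract(H):
    # nearest rank-K orthogonal projection, via the spectral theorem
    w, U = np.linalg.eigh((H + H.T) / 2)
    Uk = U[:, np.argsort(w)[-K:]]
    return Uk @ Uk.T

def tangent_project(G, X):
    # orthogonal projection onto T_G M_{N,K}, cf. Lemma 4.4 below
    return G @ X + X @ G - 2 * G @ X @ G

G = retract(rng.standard_normal((N, N)))
for _ in range(2000):
    grad = 2 * np.diag(np.diag(G))        # Euclidean gradient of sum_j G_jj^2
    G = retract(G - step * tangent_project(G, grad))

print(np.round(np.diag(G), 4))            # diagonal entries approach K/N = 0.5
```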
Characterization of fixed points for the gradient flow.
We recall that when F = C (respectively F = R), the embedding of M_{N,K} into the real vector space of Hermitian (respectively symmetric) N × N matrices induces a similar embedding of the tangent space to M_{N,K} at G₀, T_{G₀}M_{N,K} = {γ̇(0) : γ ∈ C¹((−1, 1), M_{N,K}), γ(0) = G₀}, where γ̇ is the (matrix-valued) derivative of γ. We use this embedding to compute gradients and characterize where the gradient vanishes.
4.4. Lemma. Let G₀ ∈ M_{N,K}; then the real linear map P_{G₀} : X ↦ G₀X + XG₀ − 2G₀XG₀, defined on the Hermitian (respectively symmetric) N × N matrices, is the orthogonal projection onto T_{G₀}M_{N,K}.
Proof. As a first step, we observe that because P_{G₀} is idempotent, its range is the real vector space V_{G₀} = {X = X* : G₀XG₀ = 0 and (I − G₀)X(I − G₀) = 0}. We show that this vector space contains each tangent vector at G₀. Let γ : (a, b) → M_{N,K} be a smooth curve such that 0 ∈ (a, b) and γ(0) = G₀. Since γ(t) is an orthogonal projection for all t ∈ (a, b), one has that γ(t)* = γ(t) and γ(t) = γ(t)² = γ(t)³ for all t ∈ (a, b). Therefore, differentiating γ(t)² − γ(t)³ = 0 yields γ(t)γ̇(t)γ(t) = 0. If X = γ̇(0), then at t = 0 this gives G₀XG₀ = 0. Similarly, if ι(t) = I − γ(t), then the analogous equations for the complementary projection give (I − G₀)X(I − G₀) = 0 for X = γ̇(0). This, together with γ̇(0)* = γ̇(0), shows that each tangent vector is in V_{G₀}. Moreover, from Appendix A.1, we know the dimension of M_{N,K} is 2K(N − K) when F = C and K(N − K) when F = R. If U is a unitary (respectively orthogonal) matrix whose columns are eigenvectors of G₀, the first K columns corresponding to eigenvalue one, then for X = X* with G₀XG₀ = 0 = (I − G₀)X(I − G₀), the matrix U*XU vanishes on its diagonal K × K and (N − K) × (N − K) blocks, so V_{G₀} has real dimension 2K(N − K) when F = C and K(N − K) when F = R. This is precisely the dimension of the real manifold M_{N,K}; thus the vector space is the span of all the tangent vectors. Finally, we note that the map P_{G₀} is idempotent and self-adjoint with respect to the (real) Hilbert-Schmidt inner product. Thus, it is an orthogonal projection onto its range, the tangent space of M_{N,K} at G₀.
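The statements in this proof are easy to confirm numerically in the real case; the sketch below (our own check, with hypothetical helper names) verifies idempotence, self-adjointness with respect to the Hilbert-Schmidt inner product, and that commutator directions are fixed by P_{G₀}:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 5, 2

U, _ = np.linalg.qr(rng.standard_normal((N, N)))
G0 = U[:, :K] @ U[:, :K].T                       # a point of M_{N,K}

def P(X):
    return G0 @ X + X @ G0 - 2 * G0 @ X @ G0

X = rng.standard_normal((N, N)); X = X + X.T     # symmetric test matrices
Y = rng.standard_normal((N, N)); Y = Y + Y.T
A = rng.standard_normal((N, N)); A = A - A.T     # antisymmetric direction
T = A @ G0 - G0 @ A                              # a tangent vector at G0

print(np.allclose(P(P(X)), P(X)))                        # idempotent
print(np.allclose(np.sum(P(X) * Y), np.sum(X * P(Y))))   # self-adjoint
print(np.allclose(P(T), T))                              # tangent vectors fixed
```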
Since P_{G₀} is the orthogonal projection onto T_{G₀}M_{N,K}, it can be used to construct Parseval frames for T_{G₀}M_{N,K} from suitable orthonormal sequences. We first discuss the complex case and then the real case. In the following, ∆_{a,b} with a, b ∈ Z_N denotes the matrix unit whose only non-vanishing entry is a 1 in the a-th row and the b-th column. 4.5. Theorem. Suppose F = C and let {S_{a,a} : a ∈ Z_N} ∪ {S_{a,b}, T_{a,b} : a, b ∈ Z_N, a > b} be the orthonormal basis for the real vector space of the anti-Hermitian N × N matrices given by S_{a,a} = i∆_{a,a}, S_{a,b} = (∆_{a,b} − ∆_{b,a})/√2 and T_{a,b} = i(∆_{a,b} + ∆_{b,a})/√2 for a > b; then the images of these basis elements under the map X ↦ G₀X − XG₀ form a Parseval frame for T_{G₀}M_{N,K}. Proof. We first note that because S_{a,b} and T_{a,b} are anti-Hermitian, G₀S_{a,b}G₀ + G₀S*_{a,b}G₀ = 0 and G₀T_{a,b}G₀ + G₀T*_{a,b}G₀ = 0, which shows the simplified expressions for the projections onto the tangent space. Next, we show the Parseval property. Since {S_{a,b}, T_{a,b}} is an orthonormal basis, the orthogonal projection P_{G₀} maps it to a Parseval frame for its span. This means we only need to show that the span of the projected vectors is the space of all tangent vectors at G₀.
Conjugating the orthonormal basis vectors {S_{a,a} : a ∈ Z_N} ∪ {S_{a,b}, T_{a,b} : a, b ∈ Z_N, a > b} with a unitary U does not change the span. We choose U so that it diagonalizes G₀, with the first K columns of U belonging to eigenvectors of G₀ of eigenvalue one. Thus U*G₀U = diag(I_K, 0), where I_K and I_{N−K} are identity matrices of size K × K and (N − K) × (N − K). Inserting the definition of S_{a,b} shows that the projected image is zero unless a > K and b ≤ K. In that case, the image is supported on the off-diagonal blocks. Similarly, if a > K and b ≤ K, then the image of T_{a,b} is of the same form. The set {iUT_{a,b}U*, −iUS_{a,b}U*}_{a>K, b≤K} is by inspection the orthonormal basis of a 2K(N − K)-dimensional real vector space of Hermitian matrices. Since this is in the range of P_{G₀}, it is a subspace of the tangent space. Its dimension then shows that the set {iUT_{a,b}U*, −iUS_{a,b}U*}_{a>K, b≤K} spans the entire tangent space. Consequently, the projected basis is a Parseval frame for T_{G₀}M_{N,K}. An analogous statement holds in the real case. Proof. The proof follows verbatim the proof of the complex case, with {S_{a,b}}_{a≥b} omitted from the basis of the anti-Hermitian matrices. We note that after conjugating with a suitable orthogonal matrix U, the resulting projection of T_{a,b}, with a > K and b ≤ K, onto the tangent space is of the same form as above, which is indeed a real symmetric matrix. Dimension counting then gives that the image of {T_{a,b}}_{a>b} is a basis for the K(N − K)-dimensional space of tangent vectors at G₀.
The appearance of anti-Hermitian (respectively anti-symmetric) matrices is natural if one considers that selecting G 0 ∈ M N,K and a differentiable function u ∈ C 1 (R, U (N )) with values in U (N ) (respectively O(N )), the manifold of N × N unitary (respectively orthogonal) matrices, induces curves in M N,K of the form γ(t) = u(t)G 0 u * (t) .
We recall that the embedding of the tangent spaces T I U (N ) (respectively T I O(N )) and of T G 0 M N,K in F N ×N induces via the Hilbert-Schmidt inner product a Riemannian structure on the tangent spaces.
Corollary. The tangent map DΠ_{G₀} of the map Π_{G₀} : u ↦ uG₀u*, taken at the identity and viewed as a map from T_I U(N) (respectively T_I O(N)) to T_{G₀}M_{N,K}, A ↦ AG₀ − G₀A, is a surjective partial isometry. Proof. The preceding theorems show that the map DΠ_{G₀} is the synthesis operator of a Parseval frame, so it is a surjective partial isometry.
In order to characterize fixed points of the gradient flows associated with each potential, we lift frame potentials and gradients to the manifold of unitary (respectively orthogonal) matrices.
Given a function Φ : M_{N,K} → R and G₀ ∈ M_{N,K}, we consider the lifted function Φ̃_{G₀}(U) = Φ(UG₀U*). 4.8. Corollary. The gradient ∇Φ vanishes at G₀ if and only if the gradient ∇Φ̃_{G₀} vanishes at the identity I. Proof. We first cover the complex case. Letting A = −A* range over the anti-Hermitian matrices, the chain rule applied to t ↦ Φ̃_{G₀}(e^{tA}) identifies the directional derivatives of Φ̃_{G₀} at I with the directional derivatives of Φ at G₀ along DΠ_{G₀}(A); since DΠ_{G₀} is a surjective partial isometry onto the tangent space, one gradient vanishes exactly when the other does. In the real case, A* is simply the transpose of A, so A = −A* means that A is skew-symmetric rather than anti-Hermitian. The same argument as in the complex case applies.
Frame potentials and properties of their critical points.
From the last part of this section we have learnt that the gradient descent for any real-analytic frame potential always approaches a critical point of the frame potential. Next, we direct our attention to the geometric character of the critical points corresponding to several choices of frame potentials. An essential tool for the characterization of critical points is that by the last corollary, ∇Φ vanishes at G 0 if and only if ∇Φ G 0 vanishes at I.
We start with ∇(Ẽ^α_{x,y})_G(I), where E^α_{x,y}(G) = e^{α|G_{x,y}|²} and Ẽ^α_{x,y} denotes the lifted function. From here on, when computing the gradient ∇Φ̃_G(I) corresponding to any frame potential Φ, we suppress the subscript G and the argument I and simply write ∇Φ̃.
4.9. Lemma. Let F = C or F = R. Let G ∈ M_{N,K}, α ∈ (0, ∞) and x, y ∈ {1, 2, ..., N}, and let Ẽ^α_{x,y}(U) = E^α_{x,y}(UGU*); then the (a, b) entries of the gradient of Ẽ^α_{x,y} at I are obtained as follows, with a < b as before. Proof. We first compute the entries of the matrices S_{a,b}G − GS_{a,b} and T_{a,b}G − GT_{a,b} in terms of the Kronecker symbols δ_{x,a}, δ_{x,b}, δ_{a,y} and δ_{b,y}. Thus, when F = C, summing the components of the gradient gives the stated entries. Using the fact that G_{j,l} = G_{l,j} for all j, l ∈ Z_N when F = R, we obtain again the same expression. Because the expression for ∇Ẽ^α_{x,y} does not depend on whether F = C or F = R, we do not distinguish between the two cases for the remaining gradient computations.
4.3.1. The sum potential and the absence of orthogonal frame vectors. Next, we investigate the sum potential Φ^η_sum(G) = Σ_{x,y∈Z_N} E^η_{x,y}(G) = Σ_{x,y∈Z_N} e^{η|G_{x,y}|²}. 4.10. Proposition. Let G ∈ M_{N,K}, let a, b ∈ Z_N, and let Φ̃^η_sum(U) = Φ^η_sum(UGU*); then the (a, b) entry of the gradient of Φ̃^η_sum is obtained by summing the corresponding entries from Lemma 4.9 over x, y ∈ Z_N. Proof. By linearity of the gradient operator, we have ∇Φ̃^η_sum = Σ_{x,y∈Z_N} ∇Ẽ^η_{x,y}. By applying Lemma 4.9, the claim follows.
In the investigation of gradient descent for equal-norm frames, nontrivially orthodecomposable frames presented undesirable critical points [11]. We show that this class of frames does not pose problems for our optimization strategy when an initial condition is met. In terms of its Gram matrix G, a frame F is nontrivially orthodecomposable if there is some permutation matrix P which makes G block diagonal, PGP* = diag(G′_{1,1}, G′_{2,2}), where G′_{1,1} ≢ 0 and G′_{2,2} ≢ 0. A sufficiently small initial value of the sum potential rules out that the gradient descent on M_{N,K} encounters such orthodecomposable frames. 4.13. Lemma. Let G ∈ M_{N,K} and η > 0. If Φ^η_sum(G) < 2 + (N² − 2) e^{ηK/(N²−2)}, then G contains no zero entries.
Proof. We prove the contrapositive. Let G_{j,l} = 0. Without loss of generality, we can assume that j ≠ l, because if a diagonal entry of G vanishes, then so do all entries in the corresponding row. Now we can apply Jensen's inequality to the entries other than G_{j,l} and G_{l,j} and obtain Φ^η_sum(G) ≥ 2 + (N² − 2) e^{η Σ′|G_{x,y}|²/(N²−2)}, where Σ′ extends over all pairs (x, y) other than (j, l) and (l, j). Inserting the value for C_{N,K} and using the Parseval property, Σ_{x,y} |G_{x,y}|² = K²/N + N(N−1)C²_{N,K} = K, gives the claimed bound.
4.3.2. The diagonal potential and equal-norm Parseval frames. The diagonal potential is Φ^δ_diag(G) = Σ_{x∈Z_N} E^δ_{x,x}(G) = Σ_{x∈Z_N} e^{δ|G_{x,x}|²}.
4.14. Proposition. Let G ∈ M_{N,K}, let a, b ∈ Z_N, and let Φ̃^δ_diag(U) = Φ^δ_diag(UGU*). Then the (a, b) entry of the gradient of Φ̃^δ_diag is obtained by summing the entries from Lemma 4.9 over the diagonal index pairs. Proof. Observe that by linearity of the gradient operator, we have ∇Φ̃^δ_diag = Σ_{x∈Z_N} ∇Ẽ^δ_{x,x}. By summing over the different cases for x and applying Lemma 4.9, the claim follows. 4.15. Proposition. Let I ⊆ (0, ∞) be an open interval, let Ψ : M_{N,K} → R be differentiable and independent of δ, and suppose that G ∈ M_{N,K} contains no zero entries and is a critical point of Φ^δ_diag + Ψ for every δ ∈ I; then G is equal-norm. Proof. Recall from Proposition 4.14 that each (a, b) entry of ∇Φ̃^δ_diag contains the term 2G_{a,b}(G_{a,a} e^{δ|G_{a,a}|²} − G_{b,b} e^{δ|G_{b,b}|²}).
Thus, by hypothesis and Corollary 4.8, we have [∇Φ̃^δ_diag]_{a,b} = −[∇Ψ̃]_{a,b} for all δ ∈ I. Since [∇Φ̃^δ_diag]_{a,b} is constant for all δ ∈ I and since Ψ does not depend on δ, taking the derivative of this expression with respect to δ yields 2G_{a,b}(G_{a,a}³ e^{δG_{a,a}²} − G_{b,b}³ e^{δG_{b,b}²}) = 0 for all a, b ∈ Z_N and for all δ ∈ I. Since G contains no zero entries, we can cancel the factor 2G_{a,b} in these equations to obtain G_{a,a}³ e^{δG_{a,a}²} = G_{b,b}³ e^{δG_{b,b}²} for all a, b ∈ Z_N and all δ ∈ I. By the strict monotonicity of the function x ↦ x³ e^{δx²} on R⁺, this implies G_{a,a} = G_{b,b} for all a, b ∈ Z_N. This is only possible if G is equal-norm, so we are done.
4.3.3. The chain potential and equipartitioning.
4.16. Definition. Let G = (G_{a,b})_{a,b=1}^N ∈ M_{N,K}. Given x ∈ Z_N and α, β ∈ (0, ∞), we define the exponential row sum potential R^{α,β}_x : M_{N,K} → R by R^{α,β}_x(G) = e^{β|G_{x,x}|²} + Σ_{j≠x} e^{α|G_{x,j}|²}. We define the link potential L^{α,β}_x : M_{N,K} → R by L^{α,β}_x(G) = (R^{α,β}_x(G) − R^{α,β}_{x+1}(G))², and the chain potential Φ^{α,β}_ch : M_{N,K} → R by Φ^{α,β}_ch(G) = Σ_{x∈Z_N} L^{α,β}_x(G). Next, we compute the gradient of R^{α,β}_x at I.
4.17. Lemma. Let G ∈ M_{N,K}, α, β ∈ (0, ∞) and x ∈ {1, 2, ..., N}, and let R̃^{α,β}_x(U) = R^{α,β}_x(UGU*); then the (a, b) entry of the gradient of R̃^{α,β}_x at I is obtained by summing the corresponding entries from Lemma 4.9. Proof. This computation follows immediately from Lemma 4.9 by observing that, because of the linearity of the gradient operator, we have ∇R̃^{α,β}_x = ∇Ẽ^β_{x,x} + Σ_{j≠x} ∇Ẽ^α_{x,j}. 4.18. Lemma. Let G ∈ M_{N,K} and α, β ∈ (0, ∞). Furthermore, let x, a ∈ Z_N and set b = a + 1. Let L̃^{α,β}_x(U) = L^{α,β}_x(UGU*); then the (a, b) entry (i.e., along the superdiagonal) of the gradient of L̃^{α,β}_x at I is given by the chain rule. Proof. If we let h(t) = t², then we see that L̃^{α,β}_x = h(R̃^{α,β}_x − R̃^{α,β}_{x+1}). Therefore, by applying the chain rule and the linearity of the gradient operator, we see that ∇L̃^{α,β}_x = 2 (R^{α,β}_x(G) − R^{α,β}_{x+1}(G)) (∇R̃^{α,β}_x − ∇R̃^{α,β}_{x+1}). The rest follows by Lemma 4.17.
4.19. Proposition. Let G ∈ M_{N,K} and α, β ∈ (0, ∞). Let a ∈ Z_N and set b = a + 1, so that the (a, b) entry of G falls on the superdiagonal, and let Φ̃^{α,β}_ch(U) = Φ^{α,β}_ch(UGU*); then the (a, b) entry of the gradient of Φ̃^{α,β}_ch is obtained by summing the entries from Lemma 4.18 over the links. Proof. Observe that ∇Φ̃^{α,β}_ch = Σ_{j∈Z_N} ∇L̃^{α,β}_j. By summing over the different cases for j and using Lemma 4.18, the claim follows, where we have isolated the nonzero terms which are multiplied by the parameter β (i.e., corresponding to j = a − 1, j = a, and j = a + 1).

4.20. Proposition. Let J ⊆ (0, ∞) be an open interval, let α ∈ (0, ∞), let Ψ : M_{N,K} → R be differentiable and independent of β, and suppose that G ∈ M_{N,K} is equal-norm, contains no zero entries, and is a critical point of Φ^{α,β}_ch + Ψ for all β ∈ J; then G is α-equipartitioned.

Proof. Since Ψ does not depend on β and since ∇(Φ̃^{α,β}_ch + Ψ̃) = 0 for all β ∈ J, using Corollary 4.8 and taking the partial derivative with respect to β of an (a, b) entry of ∇Φ̃^{α,β}_ch + ∇Ψ̃ gives d/dβ ([∇Φ̃^{α,β}_ch]_{a,b} + [∇Ψ̃]_{a,b}) = 0 for all β ∈ J. In particular, we have d/dβ [∇Φ̃^{α,β}_ch]_{a,b} = 0 for all a, b ∈ Z_N and for all β ∈ J. Next, we compute d/dβ [∇Φ̃^{α,β}_ch]_{a,b} for the case where b = a + 1 (i.e., along the superdiagonal), thereby inducing a set of equations which will lead to the desired result. So, from here on, we suppose that b = a + 1.
First, we observe a simplification that results from the assumption that G is equal-norm. Referring back to Proposition 4.19, we note that every additive term of [∇Φ̃_ch]_{a,b} has a factor of the form (R^{α,β}_j(G) − R^{α,β}_{j+1}(G)). However, since G is equal-norm, we can replace each of these factors with (R^α_j(G) − R^α_{j+1}(G)) to denote the fact that the β terms corresponding to the diagonal entries have canceled because of the equal-norm property. After doing this, we see that there are only three terms of [∇Φ̃^{α,β}_ch]_{a,b} which still depend on β. Now the desired partial derivative is easy to compute. Once again, because G is equal-norm, it follows that G_{a,a} = G_{b,b} = K/N, and since G contains no zero entries, we can cancel the common nonzero factors to arrive at the homogeneous system Ax = 0 for the vector x with entries x_j = R^α_j(G). The circulant matrix A is the polynomial A = −3I + 3S − S² + S^{N−1} of the cyclic shift matrix S. Therefore, its eigenvectors coincide with those of S, and by the spectral theorem the eigenvalues of A are then given by λ_j = −3 + 3ω_j − ω_j² + ω_j^{N−1}, where ω_j = e^{2πij/N}, the N-th roots of unity, are the corresponding eigenvalues of S. The system Ax = 0 is homogeneous, so we would like to obtain the zero eigenspace of A. By letting j ∈ Z_N, setting λ_j = 0, and then factoring, we obtain ω_j λ_j = −(ω_j − 1)³. Inspecting both factors, we see that λ_j = 0 iff ω_j = 1 iff j ≡ 0 mod N. Thus, the zero eigenspace is 1-dimensional and spanned by the vector of all ones. In particular, this shows that R^α_j(G) = R^α_{j+1}(G) for all j ∈ Z_N. In other words, G is α-equipartitioned.

Corollary. Let I, J ⊆ (0, ∞) be open intervals and let G ∈ M_{N,K} satisfy the hypotheses of Proposition 4.20 for every α ∈ I and β ∈ J; then G is equidistributed. Proof. Since J is an open interval, it follows by Proposition 4.20 that G is α-equipartitioned for every α ∈ I. Therefore, by Proposition 3.14, G is equidistributed.
4.3.4. A characterization of equidistributed frames. Finally, we combine these definitions to obtain the family of potential functions which will yield our main theorem, Φ^{α,β,δ,η} = Φ^{α,β}_ch + Φ^δ_diag + Φ^η_sum.
The assumption in the preceding proposition is met when the frame is generated with a group representation as specified below. 4.27. Proposition. Suppose Γ is a finite group of size N = |Γ| with a unitary representation π : Γ → B(H) on the complex K-dimensional Hilbert space H and {f_g = π(g)f_e}_{g∈Γ} is an (N, K)-frame. If the Gram matrix G satisfies G_{g,h} = G_{h⁻¹,g⁻¹} for all g, h ∈ Γ, then ∇Φ̃^η_sum(G) = 0 for all η ∈ (0, ∞).
Proof. Fix η ∈ (0, ∞). Since G is equidistributed (see Example 2.9), it is equal-norm, so the last two additive terms from the (a, b) gradient entry in Proposition 4.10 cancel. Therefore, to show that the gradient vanishes, it is sufficient to show that the remaining sum over the group vanishes for all a, b ∈ Γ. As a first step, we note that the group representation gives G_{x,y} = ⟨f_y, f_x⟩ = ⟨π(x⁻¹y)f_e, f_e⟩ ≡ H(x⁻¹y). Thus, we can change the summation index and rewrite the sum in terms of the function H. We also note that |H(g)| = |H(g⁻¹)|, so in combination with changing the summation index we obtain an equivalent expression in which the roles of a and b are interchanged up to inversion of the group elements. Finally, using the fact that the Gram matrix has the assumed structure gives that the sum vanishes. This completes the proof, since η was arbitrary.
The claimed property of the Gramian is true if Γ is abelian. There is an abundance of equidistributed Parseval frames obtained with representations of abelian groups, in particular the harmonic frames that exist for any combination of the number of frame vectors N and dimension K ≤ N. The gradient of the sum energy also vanishes for any Gramian corresponding to a mutually unbiased basic sequence which has been rescaled to admit Parsevality. In order to make the block structure apparent in the notation, we write the matrix G as G = (G^{(p,q)}_{x,y})_{p,q∈Z_M, x,y∈Z_L}, where the doubly-indexed superscript indicates in which block the entry is and the subscript indicates the position within the block. The absolute value of any entry then depends only on whether the entry is diagonal, off-diagonal within a diagonal block, or in an off-diagonal block. To see the claim, we verify that every entry of ∇Φ̃^η_sum vanishes. Since this is automatically true for the diagonal entries, let (p, x), (q, y) ∈ Z_M × Z_L with (p, x) ≠ (q, y). One has that either p = q or p ≠ q. If p = q, then re-expressing the identity in Proposition 4.10 in terms of block notation and noting that the last two terms on the right-hand side cancel due to the equal-norm property leaves a sum of two series over the blocks. The first series on the right-hand side is zero because the within-block off-diagonal entries G^{(p,p)}_{x,t} vanish, and the second one vanishes because the factors e^{η|G^{(p,s)}_{x,t}|²} − e^{η|G^{(q,s)}_{y,t}|²} cancel whenever the magnitudes agree. If p ≠ q, the same splitting applies: the first series vanishes because G^{(p,p)}_{x,t} = 0 for x ≠ t, the second one because |G^{(p,s)}_{x,t}| = |G^{(q,s)}_{y,t}| = C_{M,L,K}. This confirms that ∇Φ̃^η_sum = 0 and, since η was arbitrary, the claim is proven.
As a consequence of this Proposition and of Proposition 4.26, we know that Examples 2.13 and 2.14 provide us with family-wise critical points.
Constructing equidistributed Grassmannian Parseval frames.
We conclude the discussion of the relation between frame potentials and the structure of optimizers by showing how an equidistributed Grassmannian equal-norm Parseval frame can be obtained as the limit of minimizers to the sequence {Φ^{η_n}_sum}_{n=1}^∞, where η_n → ∞. Suppose that each minimizer G(m) of Φ^{η_m}_sum is equal-norm and that the sequence (G(m))_{m∈N} converges to G ∈ M_{N,K}. Furthermore, by Parsevality, since each G(m) is equal-norm, there must always exist an off-diagonal entry G(m)_{a,b} such that |G(m)_{a,b}|² ≥ C²_{N,K}. Hence, µ(G(m)) = max_{a,b∈Z_N} |G(m)_{a,b}| for every m, which allows us to replace Φ^{η_m}_od with Φ^{η_m}_sum in the proof strategy from Proposition 3.10, which shows that G corresponds to a Grassmannian Parseval frame. Finally, since each G(m) is equal-norm, it follows that G must also be equal-norm. Therefore, G is a Grassmannian equal-norm Parseval frame.
If the sequence of minimizing Parseval frames has the stronger property of being equidistributed, which implies that it is equal-norm, then the limit of the corresponding subsequence is equidistributed as well: passing to the limit in the equipartitioning identities gives R^α_a(G) = R^α_b(G) for all a, b ∈ Z_N. Since this is true for every α ∈ (0, ∞), G is equidistributed by Proposition 3.14.
Additionally, being the limit of a sequence of minimizers for {Φ ηm sum } m∈N , G is also a Grassmannian Parseval frame.
If we know that each G(m) is a family-wise critical point without vanishing entries, then we can characterize this limit in terms of the gradient of frame potentials. The existence of equiangular Parseval frames for certain pairs of N and K provides an abundance of examples for which this theorem holds; however, due to our current inability to verify when a non-equiangular critical point of Φ^{α,β,δ,η} is at an absolute minimum, we are unable to state outright that non-equiangular, equidistributed frames exist which satisfy the conditions of Theorem 4.34. Based on numerical experiments, it is our conjecture that Example 2.8 is an absolute minimizer of Φ^{α,β,δ,η} for all η ∈ (0, ∞) and therefore corresponds to a Grassmannian equal-norm Parseval frame which is equidistributed.
In addition, we know that the conclusion of the preceding theorem can hold even if G contains vanishing entries, as shown by the examples of family-wise critical points given by the equidistributed Grassmannian equal-norm Parseval frames in Examples 2.13 and 2.14.
APPENDIX A

Noting that φ̃_J(G) contains a K × K identity submatrix, we define the chart φ_J(G) to be the K × (N − K) matrix determined by φ̃_J(G) = (I_K  φ_J(G)), thereby defining what will be our local coordinates in F^{K×(N−K)}. Then φ̃_J is analytic, since the inverse of G_{J,J} is rational in its entries; hence, φ_J is also analytic, since there is no loss of analyticity in the removal of entries.
To see that φ_J has an analytic inverse, we show that we can reconstruct G from φ_J(G) in an analytic fashion. First, we reinsert the K × K identity block in a way that corresponds to J so that we have recovered the K × N matrix A := φ̃_J(G) = (G_{J,J})⁻¹ G_{J,N}, as above. Next, we form the K × K Gram matrix Q = AA* = (G_{J,J})⁻¹ G_{J,N} (G_{J,N})* ((G_{J,J})⁻¹)*. Since G_{J,N} was extracted from an orthogonal projection, G_{J,N}(G_{J,N})* = G_{J,J}, so that Q = (G_{J,J})⁻¹ is analytic in the coordinates. Next, we orthogonalize the rows of A to obtain B := Q^{−1/2} A = (G_{J,J})^{1/2} A. The negative square root of Q is seen to be analytic in Q via a convergent power series expansion of (cI − (cI − Q))^{−1/2} in terms of the powers of cI − Q, where c > ‖Q‖. The rows of B then provide an orthonormal basis with the same span as the rows of A, and BB* = I. Thus, B is the synthesis operator of a Parseval frame with the Gram matrix G = B*B. We see that the entries of G are analytic in the coordinates if there is c > 0 such that the power series expansion of (cI − (cI − Q))^{−1/2} converges, so φ_J⁻¹ is analytic on the range φ_J(B(G₀; ǫ)). Combining the analyticity of the charts and of their inverses, we conclude that M_{N,K} is a real analytic manifold, because φ_J ∘ φ_L⁻¹ is analytic on the image of the intersection of the domains of φ_J and φ_L for any subsets J and L of size |J| = |L| = K. The dimension of M_{N,K} is the real dimension of F^{K×(N−K)}, which is K(N − K) if F = R and 2K(N − K) if F = C.
"year": 2015,
"sha1": "f1a59092891017b45dc90e0ea93ea4eea7efa705",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1407.1663.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f1a59092891017b45dc90e0ea93ea4eea7efa705",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
When is S = A/4?
Black hole entropy and its relation to the horizon area are considered. More precisely, the conditions and specifications that are expected to be required for the assignment of entropy, and the consequences that these expectations have when applied to a black hole are explored. In particular, the following questions are addressed: When do we expect to assign an entropy?; when are entropy and area proportional? and, what is the nature of the horizon? It is concluded that our present understanding of black hole entropy is somewhat incomplete, and some of the relevant issues that should be addressed in pursuing these questions are pointed out.
I. INTRODUCTION
In the past 25 years there has been a great deal of activity around the nature and origin of black hole entropy, since the pioneering work of Bekenstein and Hawking, who found a close relation between the entropy and area of black holes [1][2][3]. It is generally regarded that the identification of the fundamental degrees of freedom and the computation of the entropy for a black hole is one of the major challenges for any candidate quantum theory of gravity. Of particular importance are the recent attempts to recover, from basic approaches to quantum gravity, the "standard expression" for the entropy of a black hole [4,5]. For a recent review see [6]. Our aim in this article is to review and discuss the foundations of those attempts by examining in detail under what conditions this standard answer can be expected to be obtained.
The question of when entropy is equal to A/4 for a black hole is in fact made of at least 3 questions: 1. To what exactly and under what conditions do we expect, in general, to assign an entropy?
2. When is the entropy assigned to a black hole equal to 1/4 of its area?
3. To which area exactly do we refer in answering the above question?
The first (and probably the most controversial) of these questions is in itself a collection of questions: 1.a) Under what physical circumstances do we assign an entropy? 1.b) To what do we assign entropy? To a history? To an "instantaneous" state of a system? To our description of such a state of the system? 1.c) What needs to be specified in order to assign an entropy?
More concretely, question 1.a), namely under what circumstances we assign entropy, refers to the issue of assigning entropy to: i) all situations, ii) stationary situations, iii) quasi-stationary situations, or iv) some other set of situations, larger than ii) but smaller than i).
In answering this question, we are also led to the issue of what type of entropy we are talking about: thermodynamic entropy or statistical mechanical entropy. Within this latter case we should also decide whether we refer to the Boltzmann or the Gibbs entropy.
Once one has decided when to assign entropy, one should also specify to what "object" it should be associated. For instance, one should say if the entropy is to be associated with the interior, the exterior or the horizon of a black hole. Finally, the question of what needs to be specified in order to assign entropy [(1.c)], refers to standard specifications that one provides in treating the statistical mechanics of any system, for instance, the distinction between system and observer, coarse graining, etc.
On a closer look, one can see that all the questions that make up the question of when to assign entropy are, in a sense, an indication that we need to put the concept of entropy on a firmer ground than it is at the moment. It might seem that the fact that statistical mechanics can deal successfully with all practical applications of the subject is an indication that all is well and clear. However, to understand the limitations, let us focus for a moment on another concept where the situation is fully clarified, but was not so when the concept was first used: the concept of energy. For a start, one realizes that for free particles the kinetic energy is conserved, and then, that for certain forces one can introduce a potential energy, where now it is the sum of the former and the latter that constitutes the true conserved energy. The story can be continued with the realization that one needs to incorporate more forms of energy in order to maintain the validity of the conservation law in more general circumstances. It is only when one regards energy as the "conserved quantity in the presence of time-invariance" that the mystery behind all its apparent manifestations disappears, and one recognizes that one is truly talking in all cases about one single quantity: energy. Similarly, it seems that we would need a 'unified' definition of entropy similar to the one available for energy, in order to clarify the situation and in particular the discussion around question 1, namely the question of when to assign entropy to a system. In other words, in the case of energy one knows that one of the implied aspects of having a single energy conservation law, instead of, say, one for kinetic energy and one for potential energy, is the tacit acknowledgment that there exist physical processes that convert one type of energy into another. Moreover, given two types of energies, one can in principle (in almost all cases) find one physical process taking the first type into the second. The exceptions are in fact supposed to be codified through the concept of entropy and by the second law of thermodynamics (besides the usual limitations arising from other conservation laws). In the same way, if such a 'unified recipe' to deal with entropy were at hand, one would hope to understand not only the second law (in its generalized form including, in particular, the entropy of black holes), but also the limitations, if any, to converting one type of entropy into another.
This last point is particularly interesting because, as far as the authors know, no such restriction has been advanced to date, and thus the second law is the only limitation for the conversion of energy from one type to another. In fact, one can give strong arguments that such restrictions must exist, in particular, associated with the issue of locality: no one would believe that one is allowed to devise a machine that, say, transfers heat from a cold reservoir to a hot one without any additional local effects, and which avoids violating the second law simply by having a second part of the machine that creates sufficient entropy in a distant galaxy. Such considerations clearly indicate that one must face the issue of localization of entropy in general, and, by having a black hole as a part of such a contraption, the localization of black hole entropy in particular. One would like to have a general definition, analogous say to the definition of the energy momentum tensor, of an entropy current S^a (see [6]) satisfying an equation of the form ∇_a S^a ≥ 0, such that the entropy associated with a hypersurface Σ, with unit normal n^a and volume element dV, given by S(Σ) = ∫_Σ S^a n_a dV, is greater than or equal to that associated with a hypersurface to the past of the first one.
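A familiar special case, standard in relativistic hydrodynamics rather than taken from the references above, illustrates what such an entropy current looks like:

```latex
% Entropy current of a perfect fluid with entropy density s and
% four-velocity u^a:
S^a = s\,u^a , \qquad
\nabla_a S^a = u^a \nabla_a s + s\,\nabla_a u^a \;\ge\; 0 ,
% with equality for dissipationless (isentropic) flow. The entropy of a
% hypersurface \Sigma is then
S(\Sigma) = \int_{\Sigma} S^a n_a \, dV ,
% which is nondecreasing toward the future whenever \nabla_a S^a \ge 0.
```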
Such a general and precise notion of entropy would seem to be required before one can hope to have a complete understanding of the reasons behind the validity of a generalized second law (including the contribution to entropy associated with black holes). There are in fact proposals for the derivation of this generalized second law [7], which however fail to clarify the underlying reason for its validity, in the sense that, for example, the conservation of energy is understood as a result of a time invariance of the underlying theory. Needless to say, these issues cannot be treated with the current understanding, because we lack, among other things, the notion of localization of the entropy.
Assuming that one is willing to consider entropy as assigned to all situations, one is then confronted with question 2: is the entropy associated to a black hole always proportional to its area? It is clear that in (quasi)stationary situations, the existence of the first law leads us to the conclusion that entropy is proportional to area. Then, the question narrows to: should we consider area as a measure of entropy, also for the dynamical case?
The third question then refers to the identity of the horizon area one wishes to consider in order to identify it with entropy. There are situations in which all known notions of horizon agree (if defined). In particular, this is true when the spacetime under consideration is stationary.
However, already in the quasi-stationary case there are differences between, say, the event horizon and an isolated horizon. In the dynamical case, none of the definitions agree. It is then of vital importance to "make up our mind" about the nature of the relevant horizon.
It is important to stress that the question of when entropy and area are proportional (question 2) and the question of the nature of the horizon (question 3) have to be considered only after one has tried to answer question 1 in detail. In the hypothetical case that one has a "universal" definition of entropy, one might hope that question 3 would be settled 'ab initio' and that question 2 could be answered by a direct application of the relevant formalism (assuming one has a quantum theory of gravity).
The purpose of this work is to critically review our current understanding of the foundations of black hole entropy, to point out the interrelations (not always fully appreciated) between the positions one takes in answering the various questions posed here, and to stress that the positions one adopts in facing each of these issues ought to be mutually consistent. It is important to note that this article does not intend to give a global answer to the issues that are addressed, but rather to point out the unresolved issues.
In this work we restrict our attention to the general theory of relativity (with, in principle, arbitrary matter couplings), and do not consider higher derivative theories, for which the relation between entropy and area does not seem to hold even at the classical level (i.e., in the generalized first law) [8].
This paper is organized as follows: In Sec. II we discuss the question of when entropy is defined. Section III is devoted to the study of when entropy and horizon area are proportional. The question of the nature of the horizon is the subject of Sec. IV. Finally, we end with a discussion in Section V.
II. WHEN IS ENTROPY DEFINED?
First, let us elaborate on the question of what kind of entropy we should focus on, namely thermodynamical vs statistical.
We know that the thermodynamic entropy is associated with stationary and quasi-stationary situations. In the case of black holes, this is normally reflected in the existence of the ordinary first law, and the thermodynamic entropy would be the quantity appearing there. Nevertheless, we are in fact interested in the statistical mechanical entropy because, as one can argue, it is the most general kind of entropy: for every situation in which the thermodynamic entropy is defined, so is the statistical mechanical entropy. In those cases the two essentially agree, but there are situations in which the (standard) thermodynamic entropy is not even defined. Furthermore, it is the statistical mechanical entropy that can, in principle, be calculated from the basic microscopic theory, which in the case of a black hole's entropy contribution would be the quantum theory of gravity. Now, the two types of statistical entropy, namely Boltzmann and Gibbs, are in principle conceptually different. The first depends on the exact microstate of the system under consideration and is defined as the logarithm of the number of microstates that are "macroscopically indistinguishable" from the given one. This set is said to represent the mesostate. That is, if N_i denotes the number of microstates making up the i-th mesostate, then the Boltzmann entropy is

S_B = ln N_i

whenever the microstate finds itself within the i-th mesostate. The second is an "ensemble functional", rather than a function of the actual physical state of the system. Associated with the ensemble there is a probability density ρ on the space of microstates, and the Gibbs entropy is

S_G = −∫ ρ ln ρ.

However, in practice we use an essentially identical coarse-graining prescription to define the level of uncertainty that allows us to construct either the notion of mesostate for the Boltzmann entropy or the ensemble for the Gibbs entropy. Since the statistical mechanical entropy is more general than the thermodynamic entropy, the former must be defined at least in stationary and quasi-stationary situations [ii) and iii) above]. We know of no criteria that would be appropriate to specify a more general situation [case iv) above], so our options for when the statistical entropy is defined seem to be restricted to just the set i) (i.e., always) or the same as the thermodynamic entropy (i.e., stationary and quasi-stationary cases).
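To make the distinction between the two statistical entropies concrete, the following is a minimal sketch (not from the original paper): a toy system of n two-state spins, coarse-grained by the number of "up" spins; the choice of system, mesostate, and ensemble are purely illustrative assumptions.

```python
import numpy as np
from math import comb, log

n = 10  # toy system: n independent two-state spins

# Boltzmann entropy: coarse-grain microstates by the number of "up" spins k.
# The mesostate containing a microstate with k up-spins has C(n, k) members.
def boltzmann_entropy(k, n):
    return log(comb(n, k))  # S_B = ln N_i (units with k_B = 1)

# Gibbs entropy: an ensemble functional of a probability density rho over
# all 2^n microstates, S_G = -sum rho ln rho.
def gibbs_entropy(rho):
    rho = rho[rho > 0]
    return float(-np.sum(rho * np.log(rho)))

# A microstate with k = 5 up-spins:
print(boltzmann_entropy(5, n))            # ln C(10,5) ~ 5.53

# The ensemble uniform over that same mesostate gives the same value,
# illustrating how the two notions agree once the coarse graining is fixed.
rho = np.zeros(2 ** n)
members = [i for i in range(2 ** n) if bin(i).count("1") == 5]
rho[members] = 1.0 / len(members)
print(gibbs_entropy(rho))                 # ~ 5.53, equal to S_B
```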
We would like to further argue against the choice of having the statistical entropy defined only in the stationary and quasi-stationary cases. Making this choice would render the second law practically useless for preventing, for instance, the construction of perpetual motion machines (i.e., consider a contraption that moves in such a way as to avoid ever passing through a situation for which entropy is defined).
Moreover, this option would seem to destroy the Markovian nature of a physical theory, namely the property that the predictive power of a physical theory is not increased by considering the past together with the present, as compared to the consideration of the present alone. That is, if the present corresponds to a situation in which entropy is not defined, we could not preclude a future situation with a given entropy S_1 in terms of our knowledge of the present, but could, for example, preclude that situation if we use (together with the second law) the fact that in the past the (closed) system was in a situation characterized by an entropy S_0 > S_1. One could argue that the Markovian aspect should be associated with the whole physical theory and not with each physical law separately. However, given that the second law is the only law incorporating a particular arrow of time, it seems difficult to imagine that the other known physical laws could restore the Markovian nature of the situation at hand. All these considerations seem to take us to the position of accepting 1.a.i), i.e., the statistical entropy should be defined for all situations (see also [9]). Now, in providing this answer we are in fact also providing a partial answer to the question of which object one should assign an entropy to [1.b) above], in the sense that we are assigning an entropy to a physical situation with a notion of localization in time. The alternative of assigning an entropy to a complete spacetime or a history (as indicated for example in [10]) would seem to make the concept of entropy completely useless in the sense of adding predictive power, and in particular as a tool for ruling out perpetual motion machines. In a sense we are puzzled as to what would be the current view of the authors of [10] about the area increase theorem in classical general relativity and its resulting connection with the generalized second law.
Another, more moderate view would be to assign an entropy to a situation localized in time but within the context of a spacetime or history. Here again there are potential problems if we require the history to be known to a larger extent than is possible from a description in terms of initial data (a situation that could occur if we let quantum events play a decisive role in the selection of the possible histories, as in the examples discussed in [11]), since we could again render the concept of entropy useless for ruling out perpetual motion machines. These problems would be avoided if we accept that there be no supplementary information in the history.
Thus, the previous discussion leads us to the conclusion that, in order for the concept of entropy to be useful in the ways we expect it to be, we must assign entropy under all circumstances and to instantaneous physical situations.
The remaining aspect of question 1.b), namely to which object to assign entropy, is in fact incorporated within the question of what needs to be specified in order to define entropy [question 1.c) above], which we address next. It is obvious that we must specify at least the physical system to which one is about to associate an entropy, and the coarse graining needed to define the macrostate (in the Boltzmann scheme) or the ensemble (in the Gibbs scheme). This leads to the conclusion that in fact we do not assign entropy to a physical situation, but to our description of the physical situation. This view is consistent with the assignment of entropy to, say, de Sitter and Rindler horizons as in [12]. It is a consequence of the fact that there is, in principle, no natural specification of the coarse graining if no specification is given of the experimental procedure used to prepare the system. Thus, entropy would seem to be a relative concept, with different physicists disagreeing about the value of the entropy of a specific system at a given "time". This in itself is not so worrisome; after all, other concepts like energy or length suffer from the same relativism and are nevertheless very useful. However, it indicates that we must specify the observer with respect to whom entropy is to be assigned. It is even possible that the two specifications, observer and coarse graining, become intertwined, as for example in the case of a black hole, for which one can think that the prescription to disregard whatever occurs inside is in fact part of the coarse graining [11], or part of the specification of the observer (who we may think of as restricted to move outside, thus having no access to any information about the inside). This must not be interpreted as saying that the entropy of a black hole is associated with the internal degrees of freedom, since that would lead to various problems [7,13], but only as the statement that there is entropy to be assigned in this case because we are disregarding the inside. That is, we leave open the possibility that, for instance, the entropy might have to do with those degrees of freedom of the inside which can affect the outside through boundary conditions, or correlations, which would lead to the view that the relevant degrees of freedom are those associated with the horizon [13]. Returning to the issue raised in 1.c), in particular to the details of the specification of the observer and its observational capacity, we must, in accordance with the limitations imposed by the relativistic nature of physical reality, agree from the outset that the observer must be replaced by a collection of observers. We are faced then with the problem of having to specify the extent of this set of observers in order to be able to decide the extent of the observable quantities, a specification that one could hope will take into account, for example, the existence of horizons of various sorts. In particular, should one wish to consider event horizons in the previous discussion, one would immediately run again into the problems derived from the teleological nature of such objects, i.e., the fact that the present location of the horizon depends strongly on events in the future that might, as in the example considered in [11], make it impossible to predict its location based on full knowledge of the system at present.
III. STATIONARY VS. DYNAMIC
Now we turn to question 2. That is, when is the entropy of a black hole proportional to its area? Here again we seem to face various alternatives: 2.a) In stationary and quasistationary situations; 2.b) Always; and 2.c) Some other restricted set of circumstances.
If we take the option that entropy and area are proportional only in stationary and quasi-stationary situations, we immediately face two questions. First, what are we going to take as the expression for entropy in other situations? One can try to answer this question both at the classical level and in the quantum domain. Classically, one would have to define a geometrical quantity to be associated with a dynamical entropy. Several such attempts are available in the literature [8,14,15]. Note that the particular prescription becomes intertwined with our question 3, that is, with the nature of the horizon one wishes to consider.
A second possibility is that the answer will be available only when we have a full theory of quantum gravity. After all, if asked to compute the entropy of a given non-equilibrium (macroscopic) configuration of a mass of gas, we need to go to the microscopic theory to count microstates, etc., and we cannot expect the answer to be a simple function of a single macroscopic parameter (which might not even be defined). Furthermore, within a full quantum theory of gravity, a generic configuration might not even have a macroscopic description in terms of a spacetime with a horizon in it (as in some of the D-brane calculations [4]).
An important issue in both cases is, in a sense, the other side of the same coin: what are we going to make of the area theorem? If the dynamical entropy is not to be identified with the area of the event horizon, the existence of this theorem will lead to a strange situation, because we will have, on the one hand, the second law for the true entropy (which would not be a simple expression) and, on the other, a second unrelated non-decreasing quantity: the area of the event horizon.
Furthermore, we note that the first law of black hole mechanics,

δM = (κ/8π) δA + δW,

where M is the ADM mass, κ the surface gravity, A the area of the event horizon, and δW stands for work terms, is known to be valid for arbitrary variations of stationary black holes [16], even if these configurations are unstable. Thus, the point of phase space to which the variation has taken the original stationary black hole is not only not stationary, but it cannot be said to be a configuration that would remain close to a stationary one, i.e., it cannot be said to be quasi-stationary. Nevertheless, the identification of this law with the first law of thermodynamics clearly indicates that we are assigning an entropy S ∼ A to these black holes.
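As an illustrative numerical check (not part of the original paper), the Schwarzschild case in geometric units G = c = 1, where A = 16πM² and κ = 1/(4M), satisfies the first law with δW = 0; the sketch below verifies δM ≈ (κ/8π) δA.

```python
import math

# Schwarzschild black hole in geometric units (G = c = 1), no work terms.
def area(M):
    return 16.0 * math.pi * M ** 2      # horizon area A = 16*pi*M^2

def surface_gravity(M):
    return 1.0 / (4.0 * M)              # kappa = 1/(4M)

M, dM = 1.0, 1e-6                        # baseline mass and a small variation
dA = area(M + dM) - area(M)              # corresponding area change

lhs = dM
rhs = surface_gravity(M) / (8.0 * math.pi) * dA   # (kappa/8pi) * dA

print(lhs, rhs)   # agree to O(dM^2), consistent with dM = (kappa/8pi) dA
```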
Assuming that we take option 2.b), namely that the entropy is always proportional to the horizon area, we are led to a problematic situation because we need to know the area of which horizon we are talking about. The event horizon seems not to be adequate, since we need to know the complete spacetime in order to locate the horizon, and, as we concluded previously, the entropy needs to be assigned at an instant of time, which in general relativistic settings corresponds to Cauchy hypersurfaces. We could take the view that the prescription is, then, to take the data associated with the hypersurface, including the geometry and the matter fields, evolve it according to Einstein's equations, and proceed to locate the horizon in the corresponding hypersurface. Unfortunately this is also not a viable option in general, as demonstrated by the example discussed in [11], in which the initial data, although complete, are not enough to locate the horizon on account of a decisive role played by a quantum measurement that is to be performed to the future of the given hypersurface, leading to fluctuations of the area of the event horizon.

The previous discussion seems to lead us to option 2.c), that is, to some other restricted set of circumstances, which still needs to be specified. In this regard, again the analysis of the example in [11] points to the following generalization of the previous prescription: take the data associated with the hypersurface, including the geometry and the matter fields; consider the possible evolutions, taking into account quantum alternatives; proceed to locate the horizon on the initial hypersurface for each of the corresponding spacetimes; and finally add the corresponding values of the areas with the appropriate probabilistic weights. So far we have centered our discussion of this question on the assumption that the entropy would be associated with the area of the event horizon; in fact, alternative 2.c) (some other restricted set of circumstances) is also an opening for the considerations of the next section.
IV. AREA OF WHAT?
In the past sections we have argued that the event horizon, even though it has a clear spacetime definition and is in a sense the obvious choice one might make, presents several problems for a satisfactory definition of entropy. Again, the main problem with choosing the event horizon is its teleological nature, which makes the situation different (as explained in Ref. [11]) from the case of an ordinary thermodynamical system put in a quantum superposition of states. As explained in [11], if one wishes to adopt the event horizon, then one needs to give up a canonical theory and/or modify the existing quantum theory. On the other hand, if one is not willing to give up a canonical quantum theory, then one cannot consistently insist on the event horizon as the relevant quantity. One might conclude that, in this case, the event horizon should be replaced by another geometrical quantity in dynamical situations, and one is then faced with the problem of finding a suitable alternative. The purpose of this section is to review the available possibilities, which to our knowledge are: 1) the apparent horizon [17], 2) the isolated horizon [18,19], and 3) the trapping horizon [15].
The apparent horizon has very serious problems, since it is known to be discontinuous for dynamical situations like a collapsing star [20], [11]. Furthermore, it is known that even the Schwarzschild spacetime contains Cauchy hypersurfaces with no apparent horizons.
The second alternative, namely isolated horizons, is particularly interesting for several reasons. First, it has been shown that for quasi-stationary processes, the (quasi-local) horizon mass satisfies a first law in which the entropy is proportional to the horizon area [18]. Secondly, there exists a calculation of the statistical mechanical entropy that recovers the "standard result" S = A/4 for various types of black holes [5]. This formalism is in fact a generalization of the standard stationary scenario to more physically realistic situations, because the exterior region need not be in equilibrium. Nevertheless, the whole approach is based on the assumption that the horizon itself is in internal equilibrium. In particular, its area has to be constant, and nothing can "fall into the horizon". In this regard, isolated horizons as presently understood are not fully satisfactory, since the formalism is not defined, and does not work, in general dynamical situations.
Moreover, there are situations in which one is faced with the occurrence of several isolated horizons intersecting a single hypersurface, one within the other, and one must decide how to single out the one to which entropy is to be assigned. We can take the view that this should be the outermost horizon, but this seems to be just an ad hoc choice, unless it is argued that the selection is the natural one, associated with the fact that we are specifying the "exterior" observers as the ones with respect to whom entropy is assigned. This view would be natural if we take the position that the assignment of entropy is related to the coarse graining, which is partially specified by pointing out the region from which information is available to the observer. However, this point of view would conflict with the fact that isolated horizons are not good indicators of such regions, basically because their definition is purely local and thus not fully based on causal relations.
Isolated horizons are well defined for equilibrium situations. If some matter or radiation falls into the horizon, the previously isolated horizon Δ0 will cease to be isolated, and (one intuitively expects) there will be a new isolated horizon Δ1 in the future, once the radiation has left and the system has reached equilibrium again. One would like to have a definition of horizon that interpolates between these two isolated horizons Δ0 and Δ1, such that the physical situation can be described by a generalized horizon that "grows" whenever matter falls in. A natural direction for such a notion of horizon leads us to the third possibility, namely trapping horizons [15].
In a series of papers, Hayward has shown that there exists (at least in the spherically symmetric case) a dynamical (as opposed to quasi-stationary) first law for a (quasi-local) energy that, however, does not coincide with the horizon energy of the isolated horizon formalism (in the static limit). There exists also a second law for the area of the trapping horizon, when a particular foliation of the space-like horizon is chosen. However, we face the problem that, by definition, these horizons can be specified only when the full spacetime is available: given a point in spacetime, the issue of whether or not it lies on a marginally trapped 2-surface cannot, in general, be fully ascertained until the whole spacetime (where the rest of the 2-surface is to be located) is given. This option is also problematic because trapping horizons are in general space-like, and thus there is no guarantee that a given hypersurface would not intersect the horizon in several components, leading to the same ambiguity mentioned in connection with option 2). Moreover, in this case the horizon can even be tangent to the hypersurface, which is an extreme version of the previous problem. The fact that all these objections can be raised against this option has its origin in the fact that the trapping horizon is not a surface defined on the grounds of causality alone.
It would be interesting to develop a coherent formalism that incorporates both the isolated and trapping horizon formalisms and allows for a dynamical description of "black hole horizons".
V. DISCUSSION AND CONCLUSIONS
In this paper, we have critically reviewed our current understanding of black hole entropy, focusing on the conceptual foundations that underlie the assignment of entropy to a black hole. In particular, we have argued that for the full content of the second law to be useful one needs to take the view that entropy should be assigned to all physical situations. Moreover, we also argued that entropy should be assigned to situations localized in time, which in the generally covariant setting implies that it should be assigned to Cauchy hypersurfaces; these can be viewed as immersed in spacetimes, but only to the extent that the spacetimes themselves can be obtained from the data on the corresponding hypersurfaces.
The conclusions above seem to be very robust, in the sense that taking an alternative viewpoint would seem to force us to deprive the second law of its predictive power and therefore of much of its meaning. This has led us to a rather paradoxical situation when dealing with the entropy of a black hole, because the alternatives available to play the role of entropy in the general dynamical case all suffer from serious disadvantages. The event horizon cannot in general be localized on a given hypersurface, and the best that can be expected is to have a probability for its various possible locations associated with the various possible spacetime developments of the given initial data. The apparent horizon would lead to the assignment of zero entropy even to some Cauchy hypersurfaces of the Schwarzschild spacetime. Trapping horizons are also in general not localisable in the absence of the full spacetime and, moreover, are in general space-like, a feature that can lead to multiple crossings of the horizon with the given space-like hypersurface, unless a preferred foliation of the horizon, and therefore a natural definition of "time evolution" along the horizon, is chosen to begin with.
One possible conclusion is that the general expression for the entropy of a dynamical black hole that would arise from a complete quantum gravity theory would be rather complicated and dependent on the details of the theory, a situation which would put such an entropy on a similar footing with the entropies of other systems which, in the dynamical situation, are not expressible as simple functions of a few macroscopic parameters. The puzzling aspect of this view is the meaning we would ascribe to the classical area increase theorem, since we would have to abandon its interpretation as being just an expression of the second law in the case of classical black holes.
Another possibility is that, in the semi-classical limit of the full theory, the states that yield spacetimes with a horizon in them would have the property that the horizons are quasi-stationary (or even isolated). 'Dynamical states' would be present, but they might not be interpretable as a classical spacetime with a dynamical horizon. This drastic conclusion would of course have deep implications for black hole evaporation and information loss. It would imply that there are no quantum counterparts to classical dynamical black holes in certain regimes.
An alternative (and more moderate) conclusion would be to take the entropy as the probability-weighted average of the event horizon areas associated with the possible future developments of the appropriate Cauchy data. This was, for instance, the view taken in the analysis of [11], where an argument in favor of this proposal was obtained from a sum-over-histories formulation of quantum mechanics, together with the hypothesis that taking S = −tr(ρ ln ρ), with ρ the density matrix for the exterior black hole region, yields the correct result A/4 in the standard situations.
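As a small illustrative aside (not in the original paper), the entropy functional S = −tr(ρ ln ρ) mentioned above can be evaluated numerically from the eigenvalues of the density matrix; the two-level example below is purely hypothetical.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]         # 0*ln(0) contributes nothing
    return float(-np.sum(evals * np.log(evals)))

# A pure state has zero entropy; a maximally mixed qubit has ln 2.
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
mixed = np.eye(2) / 2.0
print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # ln 2 ~ 0.693
```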
This discussion leads us to conclude that, quite aside from the issue of finding a correct theory of quantum gravity, the status and interpretation of the thermodynamics of black holes is rather incomplete. Furthermore, the basic issues raised in connection with it could prove fundamental as guidance in the search for the quantum theory of gravity. | 2014-10-01T00:00:00.000Z | 2000-10-23T00:00:00.000 | {
"year": 2000,
"sha1": "7c391185f6b6fc98fdad9255226596189724769b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/gr-qc/0010086",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7c391185f6b6fc98fdad9255226596189724769b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
164667179 | pes2o/s2orc | v3-fos-license | Modeling of acetylene detonation in a shock tube by the large particle method with TVD correction
A one-dimensional TVD correction scheme is described, which can be successfully applied not only in 1D calculations but also in 2D and 3D modeling [1-2]. The advantage of the proposed scheme is that it does not require solving matrix equations. The implementation of the scheme in conjunction with the large particle method was tested on the problems of arbitrary discontinuity breakup and strong point explosion. Modeling of acetylene combustion in a shock tube and its transition to detonation was performed for different conditions imposed on the tube surface and different parameters of the problem.
Introduction
The scheme of the large particle method [3] consists of several stages corresponding to a splitting into physical processes. We use a TVD scheme as the last stage of this method, to correct the solution obtained after the execution of the previous stages; hence the name of the method includes «TVD correction». The large particle method modified in this way raises the accuracy of the approximation to second order and eliminates the need for artificial viscosity.
Verification of the joint operation of the large particle scheme and the TVD correction was performed on the problem of the decay of an arbitrary discontinuity and the problem of a strong point explosion. Although these test problems are one-dimensional in nature, they were simulated using a two-dimensional axisymmetric formulation, which allowed validating the applicability of the one-dimensional TVD scheme in 2D modeling.
Numerical simulation in a shock tube was performed for a stoichiometric mixture of enriched air and acetylene represented by four components: air (2.5 O2 + 5.2 N2), acetylene (C2H2), nitrogen (N2), and combustion products (H2O + 2 CO2). The proposed physical-chemical model uses tabular descriptions of the equilibrium thermodynamic state of each component of the gas mixture separately; the components do not have to be in chemical equilibrium with respect to each other and can participate in mutual chemical conversion.
Mathematical model
The numerical method is based on the solution of the system of Euler equations written in the form of the mass, momentum, and energy conservation laws. In cylindrical coordinates (z, r), the two-dimensional Euler equations can be represented as the system of conservation laws

∂U/∂t + ∂F/∂z + ∂G/∂r = S,   (1)

where U is the column vector of the conserved quantities, F and G are the column vectors of fluxes along the coordinates z and r, respectively, and S is the source vector. The number of components in the vectors is determined by the number of equations in system (1); this number, in turn, depends on the physical model used. When describing the processes of chemical conversion in combustible mixtures, conservation equations for the corresponding chemical components have to be added; the number of equations in (1) then becomes 4 + N, where N is the number of components of the mixture. With ρ the density of the substance, u and v the components of the velocity vector along the z and r coordinates, E the total specific energy, p the pressure, Y the vector of mass fractions of the chemical components, Ω the vector of chemical reaction rates, and q the heat-conduction flux, the vector quantities in system (1) take the standard form

U = (ρ, ρu, ρv, ρE, ρY)ᵀ,  F = (ρu, ρu² + p, ρuv, (ρE + p)u + q_z, ρuY)ᵀ,  G = (ρv, ρuv, ρv² + p, (ρE + p)v + q_r, ρvY)ᵀ,   (2)

with the source vector S collecting the axisymmetric geometric terms (proportional to 1/r) and the chemical production terms ρΩ. To close the system of equations (1)-(2), one should specify an equation of state, or a system of equations, from which the pressure in the form p = p(ρ, T, Y) and its derivatives

(∂p/∂ρ)_T,  (∂p/∂T)_ρ   (3)

can be calculated. The pressure derivatives determined in (3) are needed for calculating the TVD dissipation, in particular the local speed of sound c. The temperature used to calculate the partial pressures and energies of the mixture components is determined from the energy conservation equation for the mixture in a given computational cell. The numerical simulation used thermodynamic functions of (T, ρ) obtained in the form of tables, calculated on a given grid of temperature and density: a uniform temperature scale from 200 K to 6000 K with increments of 50 K, and a density scale from 10^-8 to 10^2.5 kg/m³ with a uniform logarithmic step of 0.25. For a realistic description of the thermodynamic properties of the substance, equilibrium tables calculated by NASA CEA for the specific components of the burning mixture considered in the simulation were used.
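A minimal sketch (an illustration, not the authors' code) of how such tabulated thermodynamics can be queried, assuming bilinear interpolation on the uniform-T, log-uniform-ρ grid described above; the table contents here are an ideal-gas placeholder standing in for the NASA CEA data.

```python
import numpy as np

# Grid matching the description: T uniform 200..6000 K step 50 K,
# density log-uniform 1e-8..10^2.5 kg/m^3, log10 step 0.25.
T_grid = np.arange(200.0, 6000.0 + 1.0, 50.0)
logrho_grid = np.arange(-8.0, 2.5 + 1e-9, 0.25)

# Placeholder table: p_table[i, j] = pressure at (T_grid[i], logrho_grid[j]).
# In practice these values would come from NASA CEA equilibrium tables.
p_table = np.outer(T_grid, 10.0 ** logrho_grid) * 287.0  # ideal-gas stand-in

def lookup_pressure(T, rho):
    """Bilinear interpolation of p(T, rho) on the (T, log10 rho) grid."""
    x = (T - T_grid[0]) / 50.0
    y = (np.log10(rho) - logrho_grid[0]) / 0.25
    i = int(np.clip(x, 0, len(T_grid) - 2))
    j = int(np.clip(y, 0, len(logrho_grid) - 2))
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * p_table[i, j] + fx * (1 - fy) * p_table[i + 1, j]
            + (1 - fx) * fy * p_table[i, j + 1] + fx * fy * p_table[i + 1, j + 1])

# Interpolated p(T, rho); close to 287*T*rho for the ideal-gas stand-in,
# up to interpolation error on the logarithmic density grid.
print(lookup_pressure(300.0, 1.2))
```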
The combustion rate of acetylene in air is determined in accordance with the Arrhenius law,

W = B(T) ρ² Y₁ Y₂ exp(−E_a / T),

where Y₁ is the mass fraction of the enriched air, Y₂ is the mass fraction of acetylene, Y_O2 is the fixed fraction of oxygen in the air (part of the variable Y₁), B(T) = 2.3·10¹¹ m³·mol/(kg²·s), and E_a, the activation energy expressed in units of temperature, is a fitting parameter.
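To illustrate the strong temperature sensitivity of such a rate law, here is a small sketch (illustrative only; the second-order form, the value of E_a, and the mixture state are assumptions, with B taken from the text).

```python
import numpy as np

B = 2.3e11          # pre-exponential factor, m^3*mol/(kg^2*s) (from the text)
E_a = 15000.0       # activation energy in Kelvin -- a fitting parameter (assumed)

def combustion_rate(T, rho, Y1, Y2):
    """Second-order Arrhenius rate W = B * rho^2 * Y1 * Y2 * exp(-E_a / T)."""
    return B * rho ** 2 * Y1 * Y2 * np.exp(-E_a / T)

rho, Y1, Y2 = 1.2, 0.9, 0.1   # assumed mixture state
for T in (800.0, 1200.0, 1600.0, 2000.0):
    print(T, combustion_rate(T, rho, Y1, Y2))
# The rate grows by orders of magnitude over this temperature range,
# which is what drives the abrupt transition to detonation.
```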
TVD correction
Initially, the scheme of Total Variation Diminishing (TVD) was developed for the conservation equations in the one-dimensional plane case [4]. However, as shown in [5], it can be successfully applied in both cylindrical and spherical geometry for one-dimensional gas dynamics problems. Subsequently, the TVD scheme has been widely used for the numerical solution of two-dimensional and three-dimensional gas dynamics problems [1][2]. In those works, the same one-dimensional version of the scheme was used, applied alternately to the different coordinate directions; a minimal one-dimensional illustration of this idea is given below. The possibility of using the TVD method in conjunction with the large particle method was also described in [6].
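The following is a minimal illustrative sketch, not the authors' scheme: a one-dimensional minmod-limited correction applied to a first-order upwind advection update, which is the generic way a TVD limiter suppresses oscillations while retaining second-order accuracy in smooth regions.

```python
import numpy as np

def minmod(a, b):
    """Classic minmod limiter: 0 at extrema, smallest slope otherwise."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def tvd_advection_step(u, c):
    """One step of linear advection (speed > 0), CFL number c = a*dt/dx.
    First-order upwind flux plus a minmod-limited second-order correction."""
    du_left = u - np.roll(u, 1)          # backward differences
    du_right = np.roll(u, -1) - u        # forward differences
    slope = minmod(du_left, du_right)    # limited slope in each cell
    # Second-order (Lax-Wendroff-like) interface flux with limited slopes:
    flux = u + 0.5 * (1.0 - c) * slope   # flux at the right cell face
    return u - c * (flux - np.roll(flux, 1))

# Advect a square pulse on a periodic grid: no spurious over/undershoots.
u = np.where((np.arange(200) > 40) & (np.arange(200) < 80), 1.0, 0.0)
for _ in range(100):
    u = tvd_advection_step(u, c=0.5)
print(u.min(), u.max())   # stays within [0, 1] -- total variation diminishing
```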
Nonphysical oscillations are suppressed by adding the TVD correction after completion of all the large particle method stages; that is, the correction is applied to the values obtained at the end of the last stage, with the same one-dimensional procedure applied in turn to each coordinate direction. This approach is a version of the solution of the linearized Riemann problem [7]. However, for the gas dynamics conservation equations this decomposition problem can be solved in a different way, without the use of Jacobi matrices. The way proposed in this work consists in the application of the known expressions for the Riemann invariants,

dJ± = du ± dp/(ρc),

for the characteristics with eigenvalues equal to u ± c, where c is the local speed of sound. Variations of the quantities in (8) are total differentials only for isentropic flow. In the case of an arbitrary flow, the entropy variation differs from zero, δs ≠ 0, and propagates along the characteristic with eigenvalue equal to u, i.e., it moves with the mass of the matter. Thus, any perturbation of the gas-dynamic quantities can be decomposed into independent parts of the three mentioned types, which propagate along the corresponding characteristics.
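Before the full decomposition vector is assembled, here is a small sketch (illustrative, not taken from the paper) of this matrix-free characteristic decomposition: a jump in (ρ, u, p) across a cell face is split into two acoustic amplitudes and an entropy part, each tied to one characteristic speed.

```python
import numpy as np

def characteristic_split(d_rho, d_u, d_p, rho, c):
    """Split a jump (d_rho, d_u, d_p) into the three characteristic parts:
    acoustic waves along u +/- c and an entropy wave advected with speed u."""
    w_plus = 0.5 * (d_u + d_p / (rho * c))    # along eigenvalue u + c
    w_minus = 0.5 * (d_u - d_p / (rho * c))   # along eigenvalue u - c
    w_entropy = d_rho - d_p / c ** 2          # along eigenvalue u
    return w_plus, w_minus, w_entropy

# Example: a purely isentropic right-moving acoustic jump has
# d_p = rho*c*d_u and d_rho = d_p/c^2, so only w_plus is nonzero.
rho, c = 1.2, 340.0
d_u = 0.01
d_p = rho * c * d_u
d_rho = d_p / c ** 2
print(characteristic_split(d_rho, d_u, d_p, rho, c))  # (0.01, 0.0, 0.0)
```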
We determine the variations of the quantities necessary for the described decomposition at the cell boundaries and, omitting the spatial indices, assemble the decomposition vector together with the vector of its corresponding eigenvalues a = (u + c, u − c, u).

Results

In the variant with slip and low activation energy, the slow combustion of the mixture passes into the detonation mode with a stable flat combustion front almost immediately after ignition. The same option with high activation energy gives a stable plane combustion wave that never passes into detonation. In the case of adhesion (no-slip) and low activation energy, the pattern of combustion propagation differs only slightly from the corresponding slip variant. However, the detonation front is less stable here, judging by the small temperature oscillation at the wave front on the axis (figure 1). With high activation energy, detonation occurs much later and the temperature oscillations at the wave front have a much higher amplitude (figure 2). This suggests that the surface of the detonation wave front is neither flat nor stationary, as can be seen in figure 3, where color maps of the pressure distribution are shown for several moments within one oscillation period. The figure shows a fragment of the pipe located in the vicinity of the detonation front at three sequential time points (a, b, c) separated by a time interval of 0.01 ms. The detonation wave moves from top to bottom in the figure.
Conclusion
The use of the TVD correction in conjunction with the large particle method improves the accuracy and stability of the latter. The TVD correction also extends the capabilities of the large particle method, making it legitimate to include second-order processes such as thermal conductivity and molecular viscosity in the model. The simulation results showed the importance of taking friction on the pipe walls into account. However, without accounting for viscous stresses, the effect of these boundary conditions disappears as the mesh cells are refined. | 2019-05-26T14:19:30.837Z | 2019-04-01T00:00:00.000 | {
"year": 2019,
"sha1": "0031c4285b32e686603e9fbcf4fcce44eeadc985",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1205/1/012027",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3817a45eaac995c72e76af0f61d7bf3d86e7c9b4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
216495737 | pes2o/s2orc | v3-fos-license | Decreasing postoperative opioid use while managing pain: A prospective study of men who underwent scrotal surgery
Abstract Objective To compare postoperative pain control among men who received different quantities of narcotic prescriptions following scrotal surgery. We hypothesized that between men receiving eight vs four pills of acetaminophen 300 mg/codeine 30 mg there would be no significant difference in mean pain following scrotal and inguinal surgery. Patients and methods In this prospective, open-label study, men who underwent scrotal surgery received eight or four acetaminophen 300 mg/codeine 30 mg pills. Men were encouraged to take scheduled non-steroidal anti-inflammatory drugs (NSAIDs), apply ice to the incision, and take acetaminophen 300 mg/codeine 30 mg as needed for breakthrough pain. Men were evaluated within 1-2 weeks after surgery. Statistical analysis was performed using Microsoft Excel and Stata/IC 15.1. Results A total of eighty-seven men met inclusion criteria; fifty-four men received eight acetaminophen/codeine pills, and thirty-three men received four pills. There was no significant difference in the mean pain score (0-10) of men receiving eight pills vs four pills in the week after surgery (3.6 ± 1.9 vs 3.3 ± 1.8, P = .5004). Of the men who used NSAIDs and ice, 93.5% and 92.3%, respectively, found them to be moderately or very helpful. Conclusion Reducing the total prescription of combined narcotic/non-narcotic medication is not associated with increased postoperative pain in patients undergoing scrotal/inguinal surgery. There was no difference in postoperative pain between men taking eight or four acetaminophen 300 mg/codeine 30 mg pills. A limited prescription of eight or four pills was adequate for pain control in the majority of men who underwent scrotal surgery. NSAIDs and ice were found to be useful adjuncts for pain relief by those who used them.
PATIENTS AND METHODS
In this institutional review board approved study, informed consent was obtained from study participants. We created a prospective database of men undergoing scrotal and inguinal urological procedures from September 2018 to September 2019 associated with specified current procedural terminology codes: sub-inguinal varicocelectomy, vasovasostomy, vasoepididymostomy, testes biopsy, microepididymal sperm aspiration, microdissection testicular sperm extraction, scrotal orchiectomy, and hydrocele. In addition to general anesthesia, all men received local anesthesia with 10 cc of 1% lidocaine at the end of the procedure. Men were instructed to take non-steroidal anti-inflammatory drugs (NSAIDs) every 4-6 hours, apply ice packs to the incision for 24 hours after surgery, and take 1-2 pills of acetaminophen 300 mg/codeine 30 mg as needed for breakthrough pain.
Initially, all men were prescribed eight acetaminophen 300 mg/codeine 30 mg pills; starting at the study's chronological mid-point (April 2019), the number of pills prescribed was reduced to four. This study was open-label; both participants and clinicians had access to the dose and quantity of prescribed medications.
At the first follow-up visit, typically within 2 weeks after surgery, the men were asked to recall their mean pain on a scale of 0-10 (with 10 being the worst), the number of narcotic pills taken, whether NSAIDs and ice were used, and the efficacy of the ice and NSAIDs in pain control during the first week post-surgery. Efficacy was measured using a 4-point questionnaire as "very helpful," "moderately helpful," "minimally helpful," and "not used". We attempted to contact men who did not appear at the follow-up visit via telephone. Men were asked to bring remaining medication to the follow-up visit for verification or to count the remaining pills when contacted by phone.
The only exclusion criteria were pre-existing opioid prescriptions and undergoing more than one surgery. A combination of Microsoft Excel and Stata/IC 15.1 was used to organize data and perform statistical analysis. One-way ANOVAs and independent t-tests were computed to analyze whether pain or pill consumption varied by age and procedure, and to compare levels of pain by number of pills taken. Chi-squared tests were performed to analyze the helpfulness of ice and NSAIDs reported by different groups.
Pearson correlations were calculated to examine the degree to which pain and pill consumption correlated. Post-hoc power calculations were conducted using the freely available ClinCalc post-hoc power calculator (http://clincalc.com/stats/Power.aspx).
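A minimal sketch of the kinds of comparisons described (illustrative only; the arrays below are made-up placeholder data, not the study's records).

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder data: pain scores (0-10) in the two cohorts.
pain_eight = np.array([4, 3, 5, 2, 4, 3, 6, 4])
pain_four = np.array([3, 4, 2, 3, 5, 3, 4, 2])

# Independent-samples t-test comparing mean pain between cohorts.
t_stat, p_val = stats.ttest_ind(pain_eight, pain_four)
print(f"t = {t_stat:.3f}, p = {p_val:.4f}")

# Chi-squared test on a 2x2 table, e.g. cohort vs "found ice very helpful".
table = np.array([[20, 34],   # eight-pill cohort: helpful / not (invented)
                  [28, 5]])   # four-pill cohort: helpful / not (invented)
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")

# Pearson correlation between reported pain and number of pills taken.
pills_taken = np.array([5, 3, 6, 1, 4, 2, 7, 4])
r, p_r = stats.pearsonr(pain_eight, pills_taken)
print(f"r = {r:.3f}, p = {p_r:.4f}")
```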
RESULTS
A total of 127 men underwent scrotal and inguinal surgery. Of those patients, 87 men met inclusion criteria: 2 were excluded due to pre-existing opioid prescriptions, 2 were excluded due to multiple procedures, and 36 were excluded due to loss to follow-up. The number of patients lost to follow-up was similar in both groups. Of the men who met inclusion criteria, 54 men received eight acetaminophen/codeine pills and 33 received four pills. The mean age was 36.8 ± 9.9 years (Table 1). The overall reported mean pain (0-10) was 3.5 ± 1.9 in the week after surgery, and 3.6 ± 1.9 vs 3.3 ± 1.8 (P = .5004) for men receiving eight vs four acetaminophen/codeine pills, respectively (Table 2). Men who received eight pills took a mean of 4.2 ± 3.0 and kept 3.5 ± 3.0 pills (Table 2). There were no phone calls to the provider requesting refills in either group. Ice and NSAIDs were useful adjuncts for pain control for most men: 89.7% and 93.5% of those who used them found ice and NSAIDs, respectively, to be moderately to very helpful. The rates of ice and NSAID use and reported efficacy were similar between the two cohorts (Table 3).
DISCUSSION
Interestingly, the four-pill cohort reported greater satisfaction with ice than the eight-pill cohort (84.8% found ice to be very helpful, vs a lower proportion in the eight-pill cohort). In light of these studies and our own results, we believe a reasonable next step would be to attempt narcotic-free postoperative management of men who have undergone scrotal and inguinal surgery. There was no statistically significant difference in pain reported by those receiving eight vs four acetaminophen/codeine pills, and both groups took approximately half the pills prescribed on average (4.2 ± 3.0 vs 2.1 ± 1.8). In our study, many more men took acetaminophen/codeine (78.1%) than NSAIDs (35.6%), despite the high efficacy reported by those who took NSAIDs (93.5%). A possible explanation is that some patients who took all the opioid pills believed that they were following physician orders to take them all. Another possible explanation is that anticipation of pain prompted men to take the medication perceived to be stronger. The low overall use of NSAIDs (6) | 2020-03-26T10:27:24.910Z | 2020-03-20T00:00:00.000 | {
"year": 2020,
"sha1": "4f15597eb20bc38a72c2a9fd6aa72ada0b0e16ac",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/bco2.12",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7961be41ad88854739b4a346e4275e63bf81fadf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
46921876 | pes2o/s2orc | v3-fos-license | Gaps in the implementation of antenatal syphilis detection and treatment in health facilities across sub-Saharan Africa
Background Syphilis in pregnancy is an under-recognized public health problem, especially in sub-Saharan Africa which accounts for over 60% of the global burden of syphilis. If left untreated, more than half of maternal syphilis cases will result in adverse pregnancy outcomes including stillbirth and fetal loss, neonatal death, prematurity or low birth weight, and neonatal infections. Achieving universal coverage of antenatal syphilis screening and treatment has been the focus of the global campaign for the elimination of mother-to-child transmission of syphilis. However, little is known about the availability of antenatal syphilis screening and treatment across sub-Saharan Africa. The objective of this study was to estimate the ‘likelihood of appropriate care’ for antenatal syphilis screening and treatment by analyzing health facility surveys and household surveys conducted from 2010 to 2015 in 12 sub-Saharan African countries. Methods In this secondary data analysis, we linked indicators of health facility readiness to provide antenatal syphilis detection and treatment from Service Provision Assessments (SPAs) and Service Availability and Readiness Assessments (SARAs) to indicators of ANC use from the Demographic and Health Surveys (DHS) to compute estimates of the ‘likelihood of appropriate care’. Results Based on data from 5,593 health facilities that reported offering antenatal care (ANC) services, the availability of syphilis detection and treatment in ANC facilities ranged from 2% to 83%. The availability of syphilis detection and treatment was substantially lower in ANC facilities in West Africa compared to the other sub-regions. Levels of ANC attendance were high (median 94.9%), but only 27% of ANC attendees initiated care at less than 4 months gestation. We estimated that about one in twelve pregnant women received ANC early (<4 months) at a facility ready to provide syphilis detection and treatment (median 8%, range 7–32%). The largest implementation bottleneck identified was low health facility readiness, followed by timeliness of the first ANC visit. Conclusions While access was fairly high, the low levels of likelihood of antenatal syphilis detection and treatment identified reinforce the need to improve the availability of syphilis rapid diagnostic tests and treatment and the timeliness of antenatal care-seeking across sub-Saharan Africa.
Introduction
Globally, an estimated 5.6 million new syphilis cases occur every year, including one million cases among pregnant women [1,2]. Sub-Saharan Africa accounts for 63% of the global burden of syphilis in pregnancy, and the prevalence of antenatal syphilis seroreactivity ranges from 0% to 7.1%, with a regional average estimated at 1.7% [2]. More than half of untreated maternal syphilis cases result in adverse pregnancy outcomes including stillbirth and fetal loss, neonatal death, prematurity or low birth weight, and neonatal infections [2,3]. Studies have also associated syphilis in pregnancy with an increased risk of acquiring and transmitting HIV and of perinatal HIV transmission [4-6]. Early detection and treatment of syphilis in pregnancy is well recognized as an effective strategy to reduce syphilis transmission and adverse pregnancy outcomes due to syphilis. In endemic countries, antenatal syphilis detection and treatment can reduce the number of stillbirths by 82%, preterm births by 64%, and neonatal deaths by 80% [7].
The World Health Organization (WHO) recommends syphilis screening at the first antenatal care (ANC) visit, ideally in the first trimester [8]. Antenatal syphilis screening has become simple, fast, and inexpensive, even in settings with limited laboratory capacity, as a result of the development of point-of-care rapid diagnostics [8,9]. Timely treatment of seroreactive pregnant women with an intramuscular injection of penicillin G can prevent transmission of disease and syphilis-associated adverse pregnancy outcomes [7,10]. A recent meta-analysis found that syphilis screening and treatment during the first and second trimesters of pregnancy, compared to the third trimester, reduced the risk of congenital syphilis by two thirds [11]. Despite the availability of simple diagnostic tools and highly effective and inexpensive treatment, screening and treatment of syphilis in pregnancy is not yet universal, and mother-to-child transmission (MTCT) of syphilis remains an under-recognized public health problem.
To address this problem, the WHO has called for the dual elimination of MTCT of syphilis and HIV [12-14]. Strategies are focused on ensuring sustained political commitment, improving access to and quality of maternal and newborn services, and universal screening and treatment of pregnant women and their partners [15-17]. Achieving elimination goals requires monitoring progress towards global targets on a range of indicators of health systems performance, from input indicators reflecting the availability and readiness of health facilities to provide essential screening and treatment, to outcome indicators reflecting rates of associated morbidity and mortality. However, routinely assessing progress towards elimination of MTCT of syphilis at national, regional, and global levels has proved challenging, in part due to weak or non-existent routine health surveillance data reporting systems in many high-burden countries. Despite the widespread adoption of policies for universal syphilis screening and treatment during pregnancy in many low- and middle-income countries, there is scant information on the availability of, access to, and coverage of antenatal syphilis detection and treatment. Several key indicators have been integrated within the Global AIDS Monitoring (GAM) system, and data from a number of countries are available through the WHO Global Health Observatory [18,19]. However, with limited data, tracking progress in the implementation of antenatal screening and treatment has primarily relied on modelling and focused on estimation of the burden of disease [2].
Health facility surveys, complemented by household surveys, offer an alternative approach to gain valuable insight into the availability, quality, and uptake of reproductive, maternal, newborn and child health (RMNCH) interventions, including antenatal syphilis detection and treatment [20-22]. Health facility surveys assess supply-side factors such as the availability of health services, essential medicines and commodities, and human resources for health. Household surveys provide information on demand-side factors contributing to the universal coverage of essential health services. Linking household surveys to health facility surveys has been used to estimate population-level coverage of health interventions, particularly facility-based interventions not amenable to tracking by household surveys alone [23,24]. The framework of the linking approach addresses the need to consider both supply-side and demand-side factors driving coverage of health services. In this paper, we link household surveys and health facility surveys to assess the readiness of ANC facilities to provide syphilis detection and treatment to pregnant women and to estimate the likelihood of appropriate care for syphilis detection and treatment across 12 sub-Saharan African countries. Based on our findings, we identify barriers to the implementation of antenatal screening and treatment of pregnant women and highlight opportunities to improve strategies for the elimination of MTCT of syphilis in sub-Saharan Africa.
Methods
We conducted a secondary analysis of supply-side data obtained from two types of nationally representative cross-sectional health facility surveys, the Service Provision Assessment (SPA) and the Service Availability and Readiness Assessment (SARA). Both surveys use standardized data collection instruments to provide measures of the availability and readiness of health facilities in a given country to provide essential services across several program areas, including ANC. The availability of staff, guidelines, equipment, diagnostics, medicines, and commodities is based on self-report and direct observation and verification. Further details on the sampling methods and survey procedures are available from the final country survey reports [25,26]. This analysis focused on 12 countries in sub-Saharan Africa with a recent health facility survey, conducted between 2010 and 2015, and a household survey within +/- 2 years. Where multiple health facility surveys were available for the same country, the most recent survey was used. The 12 countries that met the inclusion criteria represented 4 sub-regions: Central Africa (Democratic Republic of Congo), East Africa (Kenya, Malawi, Tanzania, and Uganda), Southern Africa (Zimbabwe), and West Africa (Benin, Burkina Faso, Mauritania, Senegal, Sierra Leone, and Togo). In 2012, the estimated number of pregnancies that occurred in these countries ranged from 122,246 in Mauritania to 3,036,898 in the Democratic Republic of Congo (Table 1). The total number of pregnancies with probable active syphilis infection was almost 200,000, representing 36% of the estimated burden in sub-Saharan Africa in 2012 [27].
To track progress in the implementation of antenatal syphilis detection and treatment, we used a 3-step process. First, based on data from the health facility surveys, the availability of syphilis detection for a given country was calculated as the percentage of health facilities providing ANC with observed availability of a rapid diagnostic test (RDT) on the day of assessment. Given WHO's push for point-of-care diagnostics, facilities referring ANC clients or sending blood samples elsewhere for screening were considered in this study as not having syphilis screening available. The availability of syphilis detection and treatment was calculated as the percentage of health facilities providing ANC services with both a screening test for syphilis and treatment (benzathine penicillin or procaine penicillin, needles, and syringes in stock).
Second, we analyzed demand-side data from publicly available nationally representative household surveys conducted as part of the Demographic and Health Surveys (DHS) program [29]. Information on patterns of antenatal care seeking for the most recent pregnancy, including where ANC was sought, when the first ANC visit occurred, and the ANC components received, is typically collected from women 15-49 years who gave birth in the 5 years preceding the survey. For this analysis, the household survey reference period was restricted to the three years preceding the survey. To reduce temporal misalignment, the estimation of the likelihood of appropriate care was only conducted for countries with a DHS conducted within (+/-) two years of the index health facility survey. No DHS was available within the required time frame for Burkina Faso and Mauritania. For the remaining 10 countries with a corresponding DHS within two years of the index health facility survey, we estimated the percentage of women who had a live birth in the three years preceding the survey who had at least one ANC visit with a skilled provider (ANC1+) and who had the first ANC visit at less than 4 months gestation.
Third, estimates of the likelihood of appropriate care were calculated by multiplying indicators of service utilization (ANC1+ and timing of the first ANC visit) by indicators of health facility readiness at the stratum level. As service utilization and health facility readiness vary within countries, linking was conducted at the stratum level, which was defined by health facility type (e.g., health post, health center, and hospital) and managing authority (public/non-public). As women who attended multiple ANC visits can report multiple sources of ANC in the DHS, we made the simplifying assumption that these women sought care at the highest level of facility type reported. The estimates of the likelihood of appropriate care for each country were disaggregated by timing of the first ANC visit (categorized as <4, 4-6, and >=7 months). Women who attended at least one ANC visit at a health facility with syphilis detection and treatment available were classified as having a high likelihood of appropriate care, while those who sought ANC at a health facility with only syphilis detection available were classified as having a moderate likelihood of appropriate care. Women who sought ANC at a health facility that did not have the necessary diagnostics in stock were classified as having a low likelihood of appropriate care due to low health facility readiness. All other women did not seek any ANC and were classified as having no likelihood of appropriate care. This classification formed the basis for our identification of three bottlenecks in the implementation pathway of antenatal syphilis detection and treatment: access, timeliness, and health facility readiness. All analyses were conducted at the country level and took into account the sampling design (survey-specific stratum, cluster, and sampling weights). Because of important differences in epidemiological and programmatic context, we grouped country-specific results by sub-region. All analyses were conducted in STATA 14.2 (College Station, Texas).
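A minimal sketch of the linking computation (an illustration, not the authors' Stata code; the column names and toy values are invented): readiness is computed per stratum from the facility survey, merged onto ANC users by stratum, and combined with the timing of the first visit.

```python
import pandas as pd

# Toy facility-survey rows: one per sampled ANC facility, with survey weights.
facilities = pd.DataFrame({
    "stratum": ["public_hc", "public_hc", "public_hosp", "nonpublic_hc"],
    "weight": [1.0, 2.0, 1.5, 1.0],
    "has_rdt": [1, 0, 1, 1],          # syphilis rapid test observed in stock
    "has_treatment": [1, 0, 1, 0],    # benzathine penicillin + needles/syringes
})

# Weighted readiness (detection AND treatment available) per stratum.
facilities["ready"] = facilities["has_rdt"] * facilities["has_treatment"]
readiness = (facilities.groupby("stratum")
             .apply(lambda g: (g["ready"] * g["weight"]).sum() / g["weight"].sum())
             .rename("p_ready").reset_index())

# Toy DHS rows: one per woman, with the stratum of the highest-level ANC source.
women = pd.DataFrame({
    "stratum": ["public_hc", "public_hosp", "nonpublic_hc", "public_hc"],
    "anc_any": [1, 1, 1, 0],
    "first_anc_months": [3, 5, 7, None],   # gestation at first ANC visit
})

linked = women.merge(readiness, on="stratum", how="left")
linked["early"] = (linked["first_anc_months"] < 4) & (linked["anc_any"] == 1)

# Likelihood of appropriate care: early ANC at a facility ready to
# detect and treat syphilis (population weights omitted for brevity).
high_likelihood = (linked["early"] * linked["p_ready"]).mean()
print(f"high likelihood of appropriate care: {high_likelihood:.2%}")
```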
Availability of syphilis detection and treatment
A total of 6,991 health facilities were sampled in the 12 sub-Saharan African countries during 2010 to 2015; health facility survey sample sizes ranged from 95 (Uganda) to 1,555 (Democratic Republic of Congo, Table 2). (Table 2 notes: no corresponding household survey was available for Mauritania and Burkina Faso; early ANC enrollment was defined as a first ANC visit at less than four months gestation; staff trained was defined as at least one staff member trained in any aspect of ANC in the previous 2-3 years.) A subset of 5,593 health facilities that reported offering ANC were included in this analysis. Diagnostic capacity for syphilis at health facilities offering ANC varied across countries, ranging from 3% in Burkina Faso to 92% in Zimbabwe (Fig 1). In general, diagnostic capacity was lower in health facilities in West Africa relative to the other sub-regions (range 3% - 15%). By and large, most health facilities with diagnostic capacity also had syphilis treatment available (range 44% - 98%). However, in DRC, while 72% of facilities offering ANC had syphilis detection available, only half of those also had treatment available.
Timing and coverage of antenatal care
Across the 12 countries, nearly all pregnant women made at least one ANC visit (median 94.9%; Table 2). Benin, Democratic Republic of Congo, Kenya, Togo, and Zimbabwe had ANC1+ coverage below 95%. The drop-off in ANC attendance from coverage of at least one visit (ANC1+) to coverage of 4 or more visits (ANC4+) ranged from 18 percentage points in Zimbabwe to 62 percentage points in Burkina Faso. There were also substantial differences in the timing of the first ANC visit, with the median months of gestation at first ANC visit varying from 3.6 months in Senegal to 5.8 months in Kenya. While the recommendation is that all pregnant women are screened for syphilis during the first ANC visit in the first trimester, the percentage of women attending the first ANC visit at less than four months was low in all countries, with considerable variability (median 27%; range: 13.5%-57.8%; Table 2). For example, in Malawi, while 99% of all pregnant women attended at least one ANC visit, only 23.8% attended ANC early enough to experience the maximum benefit of treatment on the risk of adverse outcomes due to syphilis.
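As a brief illustration of the arithmetic, the drop-off above is an absolute difference in coverage (percentage points) rather than a relative change; the values in this sketch are hypothetical, not the survey estimates.

```python
# Drop-off is a difference in coverage measured in percentage points
# (absolute), not a relative percent change. Values are hypothetical.
anc1_plus = 99.0  # ANC1+ coverage (%)
anc4_plus = 45.0  # ANC4+ coverage (%)

dropoff_pp = anc1_plus - anc4_plus            # 54 percentage points
relative_drop = dropoff_pp / anc1_plus * 100  # ~54.5% relative decline
print(f"{dropoff_pp:.0f} percentage points ({relative_drop:.1f}% relative)")
```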
Likelihood of appropriate care for syphilis detection and treatment
Based on the linking approach, we estimated that across countries one in twelve women received ANC at a facility ready to provide syphilis detection and treatment during the first three months of pregnancy (high 'likelihood of appropriate care'; median 8%; range: 7%-32%) (Fig 2). If only the availability of syphilis detection was considered (high and moderate 'likelihood of appropriate care' combined), then one in ten women received ANC during the first three months of pregnancy at a facility ready to provide syphilis screening (median 10%; range: 7%-35%). Among women who received ANC at a facility ready to provide syphilis detection, one in eight initiated ANC after the first six months of pregnancy (median 12%; range: 3%-28%). Due to delayed ANC seeking, these women may have missed the opportunity to receive the maximum benefit of early syphilis detection and treatment. Additionally, among women who initiated ANC in the first six months, more than half (median 54%; range 7%-79%) missed the opportunity to be screened for syphilis as a result of low health facility readiness (i.e., no availability of syphilis RDTs).
Discussion
Despite the widespread adoption of antenatal screening and treatment of syphilis as the main strategy for the prevention of MTCT of syphilis, we found suboptimal implementation strength across 12 sub-Saharan African countries. The global campaign to eliminate MTCT of syphilis has targeted achievement of at least 95% on three process indicators: coverage of ANC1+, coverage of syphilis testing of pregnant women, and treatment of syphilis-seropositive pregnant women [16]. Using our approach linking supply-side data from health facility surveys and demand-side data from household surveys, our estimates of the percentage of pregnant women who received early ANC at a facility with a syphilis RDT and penicillin treatment available (high likelihood of appropriate care) fell well below the 95% coverage target (range 7%-32%). We identified three bottlenecks in the implementation of appropriate syphilis detection and treatment during pregnancy: access, timeliness, and health facility readiness. ANC is a key platform for the delivery of evidence-based RMNCH interventions. Access to ANC has improved in recent years [30]. Notably, 6 out of the 10 sub-Saharan Africa countries with recent surveys in this study had attained ANC1+ coverage levels above the 95% target. However, recent evidence suggests low coverage of ANC interventions, resulting in substantial missed opportunities to provide quality health services [21,31]. With respect to antenatal syphilis screening and treatment, the timing of the first ANC visit makes a significant difference in the risk of adverse pregnancy outcomes due to syphilis [11]. In this study, the median gestational age at first ANC visit ranged from 3.6 months in Senegal to 5.8 months in Kenya. The timeliness of ANC was identified as a bottleneck in the implementation of appropriate syphilis detection and treatment. Between 25% and 85% of pregnant women did not seek ANC until 4 months or later. To fully benefit from early detection and treatment, there is a need for effective strategies to promote early ANC initiation.
Health facility readiness was another major bottleneck. Across countries, a median of 54% of pregnant women who sought ANC in the first six months could not access syphilis detection and treatment due to inadequate supply of syphilis tests and treatment in ANC facilities, representing substantial missed opportunities for the delivery of high-impact interventions to pregnant women seeking ANC early. The low availability of syphilis detection and treatment to ANC attendees identified in this study supports the findings that improving access to ANC does not guarantee the delivery of quality RMNCH services [21,31,32]. The availability of syphilis tests and treatment at health facilities across sub-Saharan Africa was suboptimal and varied by country. For instance, in Zimbabwe, syphilis tests and treatment were available in most facilities offering ANC (83%). By contrast, syphilis tests and treatment were in low supply in ANC facilities across West African countries (Benin 10%, Burkina Faso 2%, Mauritania 4%, Senegal 12%, Sierra Leone 5%, and Togo 13%). This difference in health facility readiness may reflect a relatively lower burden of syphilis, poorer health infrastructure, and fewer health system resources in West Africa [33]. The difference may also be explained by increased attention due to the higher burden of HIV and the integration of syphilis interventions into existing ANC and prevention of MTCT of HIV programs in Zimbabwe and other countries in Southern Africa [12,34]. There are too few Southern African countries to systematically assess this explanation. However, UNAIDS estimates of HIV prevalence in 2012 for the 6 West African countries ranged from 0.6%-2.6%, compared to 15.1% for Zimbabwe and 5.9%-10.3% for the 4 East African countries [35].
Greater efforts are needed to strengthen implementation, improve the quality of RMNCH services, and accelerate progress towards elimination of MTCT of syphilis. Continued efforts to routinely measure and track progress in universal screening and treatment of syphilis during pregnancy and, more broadly, key RMNCH interventions are needed [23,24]. The inclusion of indicators for syphilis in pregnancy using a unified system such as the UNAIDS GAM system is an important step to support routine reporting and the collation of data from surveillance systems in low- and middle-income countries. While efforts should be made to establish and improve surveillance systems in the long term, the expansion in geographic scope and frequency of health facility surveys such as the SPA and SARA can provide information to fill crucial data gaps. Health facility surveys are useful for monitoring and addressing deficiencies in the availability and quality of service provision [36], and linked with household surveys they can facilitate the estimation of coverage of interventions and the identification of implementation bottlenecks [20,21,23,37].
There are several limitations to this analysis. First, the definition of high likelihood of appropriate care used was the percentage of women who attended ANC in the first three months at a facility with syphilis detection and treatment available. While the availability of basic amenities, equipment, diagnostics, medicines, and commodities is a prerequisite for the delivery of antenatal syphilis screening and treatment, it does not guarantee receipt of care. Information on the receipt of syphilis screening and treatment is not typically available; therefore, our estimates may overestimate 'true coverage'. We did not account for factors such as provider knowledge or other health system factors that may hinder the actual receipt of interventions. Estimates of the likelihood of appropriate care are likely biased upwards. Second, the present analysis linked nationally representative household surveys and health facility surveys covering different survey reference periods. The availability of medicines and diagnostics can vary dramatically over a short time period due to the complexity of logistics, availability of penicillin, and seasonality. While the present study did not assess drug stock-outs, several countries in sub-Saharan Africa have experienced stock-outs of benzathine penicillin [38]. Therefore, while data from health facility surveys may be representative of the health system at one point in time, such data may not represent the current state. More frequent health facility surveys can provide timely information necessary to guide national and sub-national policy and program prioritization; health facility surveys need to be conducted in more countries and at more regular intervals. Most of the health facility surveys included in the present analysis represent East and West Africa. Findings from this analysis should not be considered representative of sub-Saharan Africa, but rather as evidence to guide the ongoing campaign to end MTCT of syphilis. Lastly, the effectiveness of syphilis treatment during pregnancy is not a dichotomous variable. While the effectiveness of treatment decreases with increasing duration of pregnancy, treatment later in pregnancy is known to have some effect on health outcomes. The greatest effects on the prevention of congenital syphilis are observed when treatment is given before approximately 21-24 weeks [7,39], when the fetal immune system is still immature and has not yet developed an (adverse) response to the syphilis infection [40]. Evidence of the population-level effectiveness of syphilis screening and treatment by duration of pregnancy is currently limited to a meta-analysis which compared the effects during the first and second trimesters of pregnancy to those during the third trimester [11]. In our study, we defined timeliness as the first visit occurring before 4 months, since treatment of syphilis as early as possible in pregnancy is highly desirable. As women who attend ANC after the first three months of pregnancy could still benefit from screening and treatment, findings were disaggregated by timing of first ANC visit.
This study suggests low levels of availability of antenatal syphilis screening and treatment across 12 sub-Saharan African countries, albeit with wide variability in progress towards the elimination of MTCT of syphilis. The deficiencies in access, health facility readiness, and timeliness of ANC identified here represent opportunities to improve the coverage and quality of syphilis detection and treatment for pregnant women. Progress towards the elimination of MTCT of syphilis depends on sustained high levels of ANC uptake, improved timeliness of care-seeking, and increased availability of syphilis detection and treatment at health facilities across sub-Saharan Africa.
Table 1. Characteristics of 12 countries included in the study sample (columns: country; annual number of pregnancies a; number of pregnancies with probable active syphilis infections a; neonatal mortality rate per 1,000 livebirths b; HIV prevalence (%) c; antiretroviral therapy coverage for PMTCT (%) d).
PMTCT: prevention of mother-to-child HIV transmission. a Data retrieved from supplementary tables provided by Newman et al. [27]. b Estimates developed by the UN Inter-agency Group for Child Mortality Estimation [28]. c Refers to the percentage of people ages 15-49 who are infected with HIV. Source: UNAIDS estimates [28]. d Refers to the percentage of pregnant women with HIV who received antiretroviral therapy for PMTCT. Source: UNAIDS estimates [28]. | 2018-06-18T01:08:10.464Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "f5c916377d665082166c4c594e3d0fa8c3e45570",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0198622&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f5c916377d665082166c4c594e3d0fa8c3e45570",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16674190 | pes2o/s2orc | v3-fos-license | The Patient’s Guide to Psoriasis Treatment. Part 4: Goeckerman Therapy
Background The Goeckerman regimen remains one of the oldest, most reliable treatment options for patients with moderate to severe psoriasis. Goeckerman therapy currently consists of exposure to ultraviolet B light and application of crude coal tar. The details of the procedure can be confusing and challenging to understand for the first-time patient or provider. Objective To present a freely available online guide and video on Goeckerman treatment that explains the regimen in a patient-oriented manner. Methods The Goeckerman protocol used at the University of California—San Francisco Psoriasis and Skin Treatment Center as well as available information from the literature were reviewed to design a comprehensive guide for patients receiving Goeckerman treatment. Results We created a printable guide and video resource that covers the supplies needed for the Goeckerman regimen, the treatment procedure, expected results, how to monitor for adverse events, and discharge planning. Conclusion This new resource is beneficial for prospective patients planning to undergo Goeckerman treatment, healthcare providers, and trainees who want to learn more about this procedure. Online media and video deliver material in a way that is flexible and often familiar to patients.
INTRODUCTION
The Goeckerman regimen is a unique combination therapy of ultraviolet B (UVB) light and application of crude coal tar (CCT) for the treatment of psoriasis [1,2]. First introduced in 1925, the Goeckerman regimen remains one of the oldest, most reliable treatment options for patients with moderate to severe psoriasis [3]. The advantage of using tar and phototherapy together is that tar is a photosensitizer and, when combined with UVB light, acts synergistically to produce better results than either treatment alone [3][4][5]. In comparison to other treatment modalities such as internal biologic agents, oral systemic agents, and topical medications, Goeckerman therapy remains extremely effective with relatively few side effects. This makes the Goeckerman regimen an excellent alternative for patients who may have previously failed multiple therapies, the elderly, pregnant patients, children, and the immunosuppressed [6]. Goeckerman therapy was originally administered at an inpatient hospital facility for 24 h a day for multiple days until the psoriasis cleared [2]. However, patients today are often treated in an outpatient day care setting where they return home at the end of the treatment day, with similar results but significantly reduced cost [6,7]. Currently, Goeckerman therapy at the University of California-San Francisco (UCSF) Psoriasis Center requires a minimum time commitment of 4-5 h in the daycare facility, 5 days a week for 6 weeks, for a total of 30 treatment days [1]. The major limitation of this therapy is the time commitment required, as patients should avoid interruption in their therapy; interruptions can delay complete treatment of their psoriasis lesions and shorten the remission time.
Nevertheless, almost all Goeckerman patients have seen significant improvement in their skin condition over the duration of their therapy.
In a study performed at the University of California San Francisco (UCSF) Psoriasis and Day Care Center, 100% of patients receiving Goeckerman over a 12-week period achieved a 75% or greater improvement in their psoriasis lesions [8]. Another advantage of Goeckerman is the long period of remission following completion of therapy, which can last from 8 months to over a year [6,9]. Studies have also shown that Goeckerman therapy can significantly increase patient satisfaction and improve overall quality of life [10]. In addition, UCSF has developed a modified Goeckerman regimen to treat other skin diseases such as eczema, prurigo nodularis, and pruritus [11,12].
For the first-time patient or referring provider, the details of the Goeckerman procedure can be confusing and challenging to understand. Therefore, the following guide and online media attempt to deliver the material in a way that is easily understandable and readily accessible. Below we describe supplies for Goeckerman therapy, evaluation and preparation, the treatment procedure, daily assessment, discharge planning, and safety considerations.
METHODS
We reviewed the Goeckerman therapy protocol used at the UCSF Psoriasis and Skin Treatment Center. In addition, the PubMed database was searched using the term ''psoriasis'' combined with the term ''Goeckerman therapy,'' ''tar therapy,'' or ''tar and light therapy'' to identify relevant articles to design a comprehensive guide for patients receiving Goeckerman treatment.
This article does not contain any new studies with human or animal subjects performed by any of the authors. All photos are printed with the consent of the subject(s).
Overview
The guide below will cover the supplies needed for Goeckerman regimen, the treatment procedure, how to monitor for side effects, daily assessment, and discharge planning.
Evaluation and Preparation
Prior to therapy, a complete history and physical examination is performed for each patient to obtain important information on current and past medications, response to previous psoriasis therapies, history of adverse reactions to sunlight or phototherapy, and severity of itch. An initial assessment of the skin will help to determine the degree and severity of psoriasis involvement and whether patients display widespread or intense erythema. In cases of severe psoriasis, it is recommended that patients undergo a cool-down period during which topical corticosteroids are applied to the affected areas and occluded with plastic wrap until the erythema is greatly reduced (3-14 days) [1,11]. This is because UV light and tar preparations have the potential to worsen acutely inflamed psoriasis (Table 2). After phototherapy (Table 3), CCT will be applied to affected areas of the body and LCD for scalp involvement. Tar is typically started with the lowest concentration available and increased gradually as tolerated by the patient. Some patients may not tolerate CCT or LCD in Aquaphor base due to the greasy texture or alcohol content [11,13]. For these patients, tar in Cetaphil cream, a water-based moisturizer, may be substituted. A formulation of tar compounded with salicylic acid is often used in areas with greatly thickened plaques to help reduce the scaling. However, salicylic acid should be used with caution in patients with diabetes or gastric ulcers due to the potential for adverse side effects [14].
Occlusion of topical tar is then performed with plastic wrap to the body, arms, and legs, impermeable gloves for the hands, socks for the feet, and a shower cap for the scalp (Fig. 1).
Topical tar is typically left on the skin for a minimum of 4-5 h each day. During this time, patients may read, listen to music, work on a laptop computer, socialize with other Goeckerman patients, or participate in group activities such as meditation, board games, or cooking (Fig. 2). After the 4-5 h period, the tar is washed off in the shower with mineral oil and soap. After Goeckerman therapy is completed at the daycare center, a nurse will apply LCD in Aquaphor ointment or Cetaphil cream (a less greasy option) to the body before the patient leaves for home (Table 4).
Discharge Planning
Upon completion of the Goeckerman course, patients will be started on a maintenance program, which may include outpatient phototherapy three times a week for the first month with gradual taper and topical medications to be applied at home. One month after completion of Goeckerman, the patient will be scheduled for a follow-up visit with the doctor, with phototherapy dosing and tar concentration adjusted as appropriate (Table 5).
Safety Considerations
The safety profile of Goeckerman therapy is excellent, with relatively few side effects. One of the main concerns regarding coal tar is its theoretical carcinogenic potential. However, many studies, including a review of 13,200 patients undergoing the Goeckerman regimen for psoriasis and eczema, showed that there is no increased risk of cancer with tar therapy compared to topical corticosteroids [13]. In addition, Goeckerman therapy is entirely topical and has limited internal absorption, so it does not increase the risk for cardiovascular disease, tuberculosis, or serious infections that may be associated with some oral or injectable medications [15]. The most commonly observed side effects include mild folliculitis, a skin condition characterized by itchy red bumps that develop around hair follicles, and mild skin burning from the UVB light [1]. For this reason, it is recommended that patients avoid extended periods of sun exposure when tar is applied at home (Table 6).
ACKNOWLEDGMENTS
We would like to thank Tim Sarmiento for producing, directing, and editing the educational video that accompanies this manuscript. We would also like to thank the amazing staff and nurses from the UCSF Psoriasis and Skin Treatment Center for inspiring and helping make the video possible. We thank Olivia Chen for her help reviewing the Spanish translation of the accompanying video. No funding or sponsorship was received for publication of this article. All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this manuscript, take responsibility for the integrity of the work as a whole, and have given final approval for the version to be published. | 2018-04-03T06:16:05.627Z | 2016-07-29T00:00:00.000 | {
"year": 2016,
"sha1": "aa711c631fc6658727e96b9fa9e9481f2f32aad9",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1007/s13555-016-0132-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa711c631fc6658727e96b9fa9e9481f2f32aad9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249128169 | pes2o/s2orc | v3-fos-license | Bone Mineralization in Electrospun-Based Bone Tissue Engineering
The increasing demand for bone substitutes in the management of bone fractures, including osteoporotic fractures, makes bone tissue engineering (BTE) an ideal strategy for solving the constant shortage of bone grafts. Electrospun-based scaffolds have gained popularity in BTE because of their unique features, such as high porosity, a large surface-area-to-volume ratio, and their structural similarity to the native bone extracellular matrix (ECM). To imitate native bone mineralization, through which bone minerals are deposited onto the bone matrix, a simple but robust post-treatment using a simulated body fluid (SBF) has been employed, thereby improving the osteogenic potential of these synthetic bone grafts. This study highlights recent electrospinning technologies that are helpful in creating more bone-like scaffolds, and addresses the progress of SBF development. Biomineralized electrospun bone scaffolds are also reviewed, based on the importance of bone mineralization in bone regeneration. This review summarizes the potential of SBF treatments for conferring the biphasic features of native bone ECM architectures onto electrospun-based bone scaffolds.
Introduction
Bone grafting has been a standard therapeutic option for bone defects and fractures [1]. Not only do bone fractures result from accidental causes, but osteoporotic fractures are incrementally increasing in aged populations worldwide [2]. Osteoporosis is a skeletal disorder that gradually leads to fragile fractures when a failure-inducing force (e.g., trauma) is applied, and it is a general term describing weakened bone density and bone quality [3]. The imbalance of osteoclastic bone resorption and osteoblastic bone formation results in osteoporosis. Several factors for osteoporosis have been identified, such as genetic, intrinsic, exogenous, and lifestyle factors [4]. Moreover, osteoporosis-related fractures occur in various anatomical positions in the body, such as the spine, hip, and wrist [5]. The cost of osteoporotic fractures and osteoporosis was $16 billion per year in the United States [6]. Almost 10 million people older than 50 years were reported to have osteoporosis, and 1.5 million were estimated to be subject to fragility fractures in the United States [7]. Additionally, osteoporosis has been a leading cause of mortality worldwide, due to increasing life expectancy and longevity [8]. In an ideal case, autologous bone grafting, i.e., self-transplantation in the same patient, would be considered for treating bone fractures [9]. However, a limited supply of grafts and the burden on patients of harvesting osseous matter have deterred patients from self-transplantation [10]. Alternatively, allogeneic bone grafts can be collected from human cadavers [11]. Surgeons have often performed allografts, but the potential risk of contamination, including donor-derived infections, is a deterrence from using these therapeutically viable scaffolds [12,13]. Similar to other tissue-derived products, allografts often hold unexpected pathogenic contaminants such as bacteria, viruses, and prions [14]. More importantly, allogeneic bones often exert cellular and humoral immune reactions [15]. There are also other donor-based grafts, but they are from nonhuman species. Although such grafts can solve the shortage issue, they need to be prepared through more thorough sterilization protocols, reducing their original potential in osteoinductive properties [16]. Based on a recent study showing the clinical outcomes of bone grafts using allografts, xenografts, and alloplastics in sinus lift or ridge preservation procedures, it has been confirmed that an allograft has a better capacity for the creation of new bone than xenografts [17]. As a different approach for bone grafts, demineralized bone matrix (DBM) has been utilized [18,19]. DBM is prepared through a complex process where collected bones are soaked or washed in strong acid reagents (e.g., hydrochloric acid or nitric acid) to eliminate potential contamination and the risk of disease transmission [20,21]. However, the osteoinductivity of DBM is dependent on variations in bone quality from individual donors as well as batch-to-batch process variations [22,23]. In a practical setting, the variation of bone-forming potential in different commercially available DBM products has been documented; it is thought that this is due to the inconsistency of the manufacturing process [24]. The first and foremost benefit of BTE is the provision of well-designed osteoinductive and/or osteoconductive scaffolds for the improvement of bone density and bone quality [25]. Osteoconductive scaffolds are capable of permitting the growth of bone cells.
In contrast, osteoinductive scaffolds represent the ability to stimulate primitive and undifferentiated cells (e.g., mesenchymal stem cells [26] and induced pluripotent stem cells [27]) towards bone-forming cells [28]. As a promising scaffolding platform, electrospun-based materials display interconnected porous structures and can become either cellular-based or drug-based scaffolds, thereby increasing the intrinsic potential of bone regeneration as well as conferring an extrinsically regenerative potential for bone regeneration [29]. For example, a study created a multilayered synthetic fibrous scaffold comprising β-tricalcium phosphate (TCPs) and poly(ε-caprolactone) (PCL) electrospun nanofibers to form bone-like ECMs by the osteoconductive TCPs and the biocompatible elastic PCL nanofibers [30]. Using goat-derived bone marrow stromal cells (BMSCs), the authors proved that electrospun composite scaffolds could increase the osteogenic differentiation of exogenously supplied BMSCs. In this regard, BTE outperforms conventional bone allografts, which are immunogenic and often limited due to supply shortages [31]. An engineered bone scaffold should have a functional resemblance to a natural ECM, with osteoconductivity for better regenerative outcomes [32]. Electrospun bone scaffolds have highly porous interconnected structures, thereby maximizing their surface area [33]. Moreover, electrospun nanofibers can quickly become a temporary bone substitute by conferring reinforced mechanical strength onto as-spun nanofibers via numerous cross-linking strategies [34]. Hence, the nanofibrous appearance of the as-spun electrospun bone scaffolds dictates the potential for electrospun-based scaffolds in recapitulating the native bone ECM environment, which is one of the critical engineering parameters of BTE [35].
To create functionally augmented electrospun-based scaffolds in BTE, the most common approach is to mineralize electrospun scaffolds to enhance osteogenic potential by mimicking the native bone ECM microenvironment, leading to successful bone grafts and repairs. Notably, simulated body fluid (SBF) is a robust but straightforward recipe for inducing hydroxyapatite (HA) and apatite-based inorganic clusters onto the surface of the electrospun scaffolds (Table 1) [36]. These inorganic solutions enable us to create functional electrospun bone scaffolds that are capable of inducing bone regeneration via a dynamic interaction between the synthetic grafts and the endogenously or exogenously provided cells that are responsible for bone regeneration. This study aims to outline numerous electrospinning technologies that are employed in electrospun-based bone scaffolds, and to describe the science of SBF development. Lastly, examples of biomineralized electrospun scaffolds are explored, in order to understand and to expand the potential of biomimetic scaffolds in BTE.
Bone: Dynamic and Biphasic Tissue
Bone is a vital tissue that is responsible for essential functions in the body: the mechanical support of the body, locomotion, and dynamic reservoir units for biological components and blood cells ( Figure 1) [43]. Bone provides minerals such as calcium, magnesium, and phosphate, and holds bone marrow, particularly red bone marrow, occupied inside the bone tissue to mature and to distribute blood cells. Rather than being a supportive physical frame, bone is a dynamic organ where bone cells and hematopoietic stem cells (HSCs) play a role in maintaining whole-body homeostasis. There are four types of cells in the bone: bone-forming osteoblasts, bone-resorbing osteoclasts, and bone-embedded osteocytes, which are known as a modulator of the cellular activities of osteoblasts and osteoclasts in the dynamic process of bone regeneration [44,45]. The last type of cell, the bone lining cell, has a relatively unclear role or mechanism for coupling bone resorption to bone formation [46]. To fulfill the bone's role in the body, bone is made of two different types of matter, organic and inorganic components.
Microstructural Bone Formation: Biphasic Aspects of Bone
The organic part of the bone is typically composed of type I collagen and other structural proteins. Type I collagen (Col1) represents nearly 90% of this part, and contributes to bone strength [48,49]. With a few exceptions, Col1 is a triple-helix of three chains: two α1 chains and one α2 chain [50]. Col1 is enzymatically converted from secreted type I procollagen, like other collagens [51]. With regard to the structural aspect, Col1 is one of the fibrillar collagens that is characterized by a triple-helix conformation and repeated (Gly-X-Y)n sequences [52]. Gly is glycine, while X and Y are different amino acids, meaning that, theoretically, more than 400 combinations are possible. However, the Gly-Pro-Hyp triplets are the most prominent combinations present, increasing the molecular stability and the natural intermolecular interactions. Collagen molecules can spontaneously form collagen fibrils, which then become collagen fibers and bundles [53]. Such collagen fibrils are also spontaneously created from purified collagen in aqueous solutions [54]. Based on the authors' observations, collagen liquid crystal can be induced at an acidic pH, since the positively charged residues of collagen help to maintain a liquid state without aggregations, while the rising pH with the aid of ammonia vapors gradually decreases the net charge of the collagen monomers, leading to the formation of collagen fibrils. The brief mechanism of type I collagen fibrillogenesis is described below. Due to the long helical domain of each chain in Col1, the newly formed three chains spontaneously assemble into triple-helix type I procollagen in the collagen-synthesizing cells [55]. In type I procollagen, N-terminal and C-terminal propeptides prevent the formation of premature collagen fibrils and modulate the fibril assembly [56]. The N- and C-telopeptides are typically 16 and 25 amino acids long, affecting the final self-assembly structures of Col1 [57]. For example, a partial loss of those peptides results in the poor self-assembly of Col1. In contrast, the loss of each telopeptide forms different kinds of self-assembled collagen, indicating that there are other kinetic mechanisms of collagen fibrils. When the type I procollagen is secreted into the extracellular space, abundant proteolytic enzymes such as matrix metalloproteinases (MMPs) and bone morphogenetic protein 1 (BMP-1) are responsible for initiating spontaneous collagen fibril formation [58]. Such enzymes remove the N-terminal and C-terminal propeptides from the type I procollagen. For example, a disintegrin and metalloprotease with thrombospondin type I motifs (ADAMTS), ADAMTS-2, can cleave the N-terminal propeptides [59]. Likewise, ADAMTS-14 is also observed to have similar aminoprocollagen peptidase activity [60]. Bone morphogenetic protein-1 (BMP-1) can also excise the C-terminal propeptides [61].
The inorganic bone minerals represent approximately 60% of bone tissue by weight and 40% by volume [62]. The bone minerals have two distinct roles: (1) they act as a reservoir of ions for the body, and (2) they are embedded in the organic components of the bone to create a light and tough natural composite material [63]. Bone minerals are important in ion homeostasis, regulating approximately 99% of the calcium and 85% of the phosphorus in the body [64,65].
Similarly, sodium and magnesium in the bone account for at least half of the required levels in the body (nearly 90% and 50%, respectively) [66]. In nature, both components become a biological composite with multi-level hierarchical properties [67]. Mineralized collagen can be considered to be a building block that creates the hierarchical structure of bone. The mineralized collagen by itself is thought to be a reinforced collagen composite where thin calcium phosphate-based crystals are intercalated between collagen nanofibrils [68]. Mineralized collagen fibrils eventually become a unit of lamellar bone structures. In this biphasic bone structure, the collagen-based organic ECM regulates the cellular activities of the bone-resident and the bone-forming cells. At the same time, the HA-based ECM plays a role in the structural support of bone [69]. As a result, a bundle of fibrillar collagens can be observed. Interestingly, each fibrillar collagen can undergo a further enzymatic cross-linking process, leading to a lysine-mediated intermolecularly cross-linked collagen bundle [70]. From the viewpoint of the structural locations of the cross-linking, this occurs between the short non-helical peptides (N-and C-terminal telopeptides) and a helical portion of an adjacent collagen molecule [71]. It is known that all major collagens, types I, II, and III, have four cross-linking sites at equivalent locations of each collagen molecule. Moreover, Col1 has unique cross-linking products, called pyridinoline cross-links, which interconnect between the N-telopeptide and the helix intermolecular cross-linking domain of the Col1 molecules [72]. Hence, biomineralization and extra cross-linking properties contribute to the complex hierarchy of the bone.
Macrostructural Bone Formation: Vascularization and Ossifications
In a macroscopic aspect, bone is a highly vascularized connective tissue, where bone vasculature participates in bone development (endochondral and intramembranous ossification), bone remodeling, and the regeneration of bone [73]. The trabecular bone is spongy bone tissue that is observed at the ends of a long bone, while compact bone, also called cortical bone, is the dense exterior bone [74]. The intricate vascular network pervading the Haversian and Volkmann's canals is observed in the cortical bone. The Haversian canals are the longitudinal route of blood vessels in the cortical bone, while the Volkmann canals interconnect the blood vessels of the Haversian canals [73]. Haversian systems or osteons are the basic units of compact bone. Each osteon is composed of lamellae of compact bone tissue derived from mineralized collagen fibrils and osteocytes that are founded in small cavities of each osteon, called lacunae [75]. Thin tubes called canaliculi from each lacuna serve not only as paths for blood supply, but also for the spaces that allow osteocytes to connect to each other through gap junctions [76]. Notably, the center of each osteon is called the Haversian canal, which contains blood vessels and nerve fibers that are parallel to the long axis of the bone. Bone can form via intramembranous ossification (osteogenesis) and endochondral ossification [77]. Endochondral ossification is the process of bone formation, during which chondroblasts (mesenchymal progenitor cells for cartilage formation) form a membrane called the perichondrium around a cartilage template. These chondroblasts become chondrocytes that secrete growth factors for recruiting blood vessels towards the perichondrium. Then, the perichondrium becomes the bone-forming periosteum. In contrast, intramembranous ossification is the immediate bone-forming process without the involvement of a cartilage model, which is shown in endochondral ossification. Endochondral ossification can be found in the long bones, whereas most skull bones are formed through intramembranous ossification [78,79].
Bone Remodeling and Bone Healing
Bone remodeling is the two-step process by which osteoclasts break old bone tissues down, followed by bone deposition, which replaces new bone tissues through the cellular activities of bone-forming osteoblasts [79]. As a dynamic tissue of the body, the bones are constantly under bone remodeling for the following reasons: (1) remodeled bones support newly applied mechanical stresses upon the bone architecture; (2) bone remodeling maintains ion homeostasis by regulating calcium and phosphate ions in the body; (3) bone remodeling repairs microdamage to the bone [80].
In fracture healing, there are several types of fractures; after post-reduction treatment, the broken bones undergo the healing process. Damaged blood vessels associated with the fracture create a hematoma (localized bleeding) and induce clot formation around the damaged bone [81]. The clot helps to recruit new blood vessels and becomes fibrous granulation tissue called a soft callus. Approximately 1 week after injury, the soft callus turns into a fibrocartilaginous callus, which eventually becomes a bony callus approximately 2 months later. Further bone remodeling and reshaping occur over several months to complete the stages of fracture repair [82].
Electrospinning Technologies: Electrospun Scaffolds in Bone Mineralization
Although there are many technical approaches for creating nano-sized ECM-like threads or fibers, such as self-assembly, phase separation, and electrospinning, the versatile electrospinning strategy outweighs other methods in terms of material selection, post-modifications, and adaptability to other scaffold platforms (Figure 2) [83][84][85]. Different versions of the original electrospinning strategy enable us to confer more delicate morphological features onto the final electrospun nanofibers for regenerative bone scaffolds (Table 2). For example, a coaxial electrospinning technique creates dual growth-factor-loaded electrospun scaffolds to enhance osteoconduction and osteoinduction [86]. Likewise, triaxial electrospun scaffolds have the potential for developing multifunctional nanofibers. A recent study showed that a tripolymeric triaxial electrospun scaffold supports the cellular activity of rat adipose-derived stem cells (ADSCs) [87]. In the melt electrospinning technique, distinctive non-woven nanofibrous architectures can be fabricated by maintaining the polymer as a highly viscous melt while performing the electrospinning [88].
Monoaxial Electrospinning
William Gilbert initiated the basic concept of electrospinning in 1600. Under an electric field, he discovered the cone-shaped water droplet, which was eventually named the 'Taylor cone', since Geoffrey Taylor documented the mathematical modeling of the conical shape of a polymer solution under a strong electric field in his seminal works in the 1960s [100]. Using a simple instrument, a sufficient electric potential can be imposed upon a polymer solution to create the Taylor cone. The basic setup of an electrospinning machine has four components: (1) a high-voltage power supply, (2) a syringe pump, (3) a spinneret, and (4) a conductive collector. Before electrification, a driving force applied to a polymer solution in a syringe creates a pendant-shaped droplet at the tip of a spinneret, where surface tension governs and results in the spherical shape of the solution. However, when an electric potential is accumulated in the solution, the electrostatic force sufficiently surpasses the surface tension to create a Taylor cone, continuously drawing polymeric fibers onto a conductive collector. During the charged polymeric liquid's travel, it converts into a series of solid nano-sized threads due to solvent evaporation. Electrospun scaffolds fabricated by the traditional electrospinning technique hold several favorable features for BTE. They have a large surface-area-to-volume ratio, a high porosity, and a similar morphological shape to native bone ECM [101]. Before the electrospun scaffold was used in BTE, biocompatible polymeric foams were investigated, giving an insight into the preferred material design parameters for BTE. Using poly(α-hydroxy acid) foam scaffolds, a previous study demonstrated that a preferable bone scaffold would have an interconnected internal structure with at least 90% porosity and a 100 to 350 µm pore size [102]. It is not convenient to fabricate such highly porous micro-channeled structures using the phase separation technique, but the electrospinning technique can easily make bone remodeling scaffolds with minimum effort. A simple but meaningful study showing the potential of the electrospinning technique in BTE has been reported [103]. In this study, the authors created a microporous and non-woven PCL scaffold by a monoaxial electrospinning technique, demonstrating the attachment and growth of mesenchymal stem cells (MSCs) derived from the bone marrow of neonatal rats. Because of this simple and efficient manufacturing method, monoaxial electrospinning remains a standard technique for creating a regenerative bone scaffold. A growth factor, basic fibroblast growth factor (bFGF), for supporting BMSCs was successfully incorporated into a monoaxial electrospun scaffold. It showed a sustained release of bFGF over time [104]. Rabbit BMSCs seeded onto monoaxial poly(lactic-co-glycolic acid) (PLGA) scaffolds were expanded and stimulated by the slowly released bFGF to produce type I collagen, as well as fibronectin, over 1 week. In general, monoaxial electrospinning is an essential and effective tool for creating functional electrospun scaffolds for BTE.
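Porosity figures such as the at-least-90% guideline above are often estimated gravimetrically from the mat's apparent density; a minimal sketch follows, with illustrative measurements and a commonly cited bulk density for PCL.

```python
# Gravimetric porosity estimate for an electrospun mat:
#   porosity = 1 - (apparent mat density / bulk polymer density).
# All numbers below are illustrative; the PCL bulk density (~1.145 g/cm^3)
# is a commonly cited literature value.
mass_g = 0.023        # measured mat mass
area_cm2 = 4.0        # mat area
thickness_cm = 0.050  # mat thickness (500 um)
rho_polymer = 1.145   # bulk density of PCL, g/cm^3

rho_apparent = mass_g / (area_cm2 * thickness_cm)
porosity = 1.0 - rho_apparent / rho_polymer
print(f"apparent density = {rho_apparent:.3f} g/cm^3, porosity = {porosity:.1%}")
# ~90%, meeting the >=90% porosity guideline discussed above
```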
Melt Electrospinning
The melt electrospinning technique can create a straight and stable polymer jet. Instead of using a solvent-based polymer solution for electrospinning, a heating element around a syringe allows us to make a molten polymer with relatively high viscosity and low conductivity. An ejected stable jet of molten polymer from an electrospinning apparatus generally creates thicker nanofibers but well-designed electrospun architectures over two or three dimensions. Due to its lack of requirement for solvents, melt electrospinning is also considered to be a green nanotechnology. Because solvent residuals in a biomaterial-based product affect the biocompatibility of the product, electrospun scaffolds obtained using this novel technique would be a promising approach for creating more sophisticated but also safer electrospun-based bone scaffolds [105]. A study utilized melt electrospinning to create a hybrid scaffold using conventional electrospinning and melt electrospinning [106]. Using silk fibroin (SF) and PCL, the authors successfully fabricated SF/PCL nano/microfibrous composite scaffolds and proved that the composites were supportive of the osteogenic potential of human mesenchymal stem cells (hMSCs) isolated from the alveolar bones of patients during oral surgery. Similar to the advancement of different electrospinning technologies, melt electrospinning has also evolved into melt electrowriting (MEW), a combined technology of electrospinning and fused deposition modeling (FDM), the most well-known extrusion-based additive manufacturing method [107]. One of the fabrication benefits of MEW is that it can create a highly porous structure with adjustable filament size (5 to 50 µm) [108]. A study created a flexible and osteoconductive fibrous composite made of PCL and HA based on the MEW process [109]. This study incorporated HA nanoparticles into a PCL solution to create a composite solution, followed by the melt electrospinning writing process. The fabricated HA/PCL composite had a high degree of porosity (96-98%) and fully interconnected pore architectures, thereby supporting the osteoactivity of human osteoblast cells.
Aligned/Oriented Electrospinning
Recent progress in BTE has also encouraged electrospinning technology to fabricate more aligned and ordered nanofibers that assist with the design of osteoinductive and osteoconductive electrospun scaffolds. Compared to non-aligned nanofibers, aligned nanofibers modulate cell adhesion and migration, and they affect the production of ECM and cytokines [110]. NIH-3T3 fibroblast cells attached and spread along the aligned nanofibers by remodeling their cytoskeleton to follow the fiber orientation. The topological characteristics of a substrate can influence cellular behaviors, including growth and differentiation [111]. In BTE, recent studies using aligned electrospun nanofibers have demonstrated the role of fiber orientation in the extent to which several stem cells undergo osteogenic differentiation. In one study, random and parallel poly(L-lactic acid) (PLLA) nanofibers were fabricated to evaluate the effects of fiber orientation on the cell morphology, proliferation, and differentiation of osteoblast-like MG63 cells [112]. MG63 cells grew along the aligned direction of the PLLA electrospun nanofibers. However, no statistically conclusive data showed better osteogenic potential in the aligned nanofibers. In contrast with the above study using an osteoblast-like cell line, another study used human bone marrow mesenchymal stem cells (hBMSCs) and demonstrated the positive effects of both aligned PLLA electrospun nanofibers and aligned microfibers on the osteogenic differentiation of the stem cells [95]. The aligned nanofibers (AN) had average fiber diameters of 580 ± 10 nm, whereas the aligned microfibers (AM) in this study were on a micro-size scale (1.21 ± 0.15 µm). Both the aligned and random electrospun fibers enhanced the osteogenic potential of the hBMSCs. Compared to the random electrospun fibers (random nanofiber, RN, and random microfiber, RM), both aligned fibers caused the hBMSCs to extend along the elongated direction of the fibers. In addition, hBMSCs on the aligned fibers showed faster migration speeds than on the random fibers. Lastly, such morphological behaviors of hBMSCs on the aligned fibers were reflected in improvements in osteogenic differentiation, which were assessed via alkaline phosphatase (ALP) staining and alizarin red (ARS) staining. ALP is one of the most reliable markers produced from osteogenic cells, while ARS staining is a common assay technique for the cellular mineralization of various osteogenic cells [113,114]. Based on the hBMSCs' cellular behaviors as measured on days 7 and 10, the AN group showed stronger ALP staining intensity than the other groups (AM, RN, and RM). Similarly, ARS staining performed 21 days after seeding the stem cells on each substrate indicated that the AN group exhibited a significantly higher staining intensity than the other groups. These findings likely confirm that aligned nanofibers are better osteoinductive bone scaffolds than the normal electrospun scaffolds usually fabricated using a conventional electrospinning apparatus. To create aligned nanofibers, numerous approaches have been invented. A straightforward means of collecting aligned fibers is to use a rotating collector. Instead of using a static and flat collector, a study adopted a mandrel collector rotating at high speeds (e.g., 4500 rpm) while collecting nanofibers that were continuously ejected from a Taylor cone [115].
Further improvements in fabricating aligned nanofibers were also made by applying auxiliary counter electrodes onto the surface of a mandrel. The auxiliary electrodes created a converged electric field, thereby forming an aligned and dense electrospun scaffold without any apparent change in the average diameters of the aligned nanofibers [116]. As a different technique for alignment, a study used two separated parallel conductive collectors onto which the charged nanofibers were stretched across the gap between the collectors [117]. The concept of the conductive parallel collector was also adopted in a mandrel [118]. In this technique, evenly spaced copper wires aligned along the barrel of a mandrel were placed to create a circular drum that served as a collector of the electrospun nanofibers. Because of the combined effect of the mechanical stretching force of the mandrel and the electrostatic interactions within the parallel collector, aligned nanofiber sheets can be collected easily without disturbing the aligned structure. A different study introduced a new rotor-type collector with perpendicularly standing fins to help wind electrospun filaments in large amounts during electrospinning [119]. Whereas the above technologies have focused on improving or modifying some of the four components of an electrospinning machine, magnetic electrospinning (MES) uses a polymer blend containing a small amount of magnetic material to magnetize the ejected polymer, thereby creating aligned nanofibers under a magnetic field [120]. Additional forces, such as a post-drawing force and a centrifugal force, are also utilized to create aligned nanofibers. A study using PLLA nanofibers already aligned by approximately 60% confirmed that applying a post-drawing force in an oven at a high temperature (110 °C) using a manual drawing device improves the alignment of the electrospun PLLA nanofibers by up to 90% [119]. Using an additional centrifugal force in electrospinning, large-scale aligned nanofibers can be obtained as an electrospun mat with a rapid fabrication time [121]. Notably, aligned electrospun nanofibers affect the morphological features of the attached cells. Aligned poly(D,L-lactic acid) (PLA) nanofibers changed the cellular morphology of bone marrow stromal cells and showed an increased degree of calcium deposition during osteogenic differentiation [122]. An inverse relationship between alignment and osteogenic potential has recently been documented [123]. Human embryonic stem cell-derived mesenchymal progenitor cells (hES-MPs) on random gelatin-coated PCL electrospun nanofibers showed better rates of mineralization and osteogenic differentiation, as confirmed by both the ALP and ARS activities. Inversely, the mature osteoblast cell line MLO-A5 showed enhanced ALP activity and more calcium deposition on the same but aligned scaffold. The dependence of osteogenic outcomes on both cell type and nanofiber alignment should be considered as a design parameter in the pursuit of an ideal electrospun-based scaffold for BTE.
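Alignment percentages such as those quoted above are derived from measured fiber angles (e.g., from SEM images). One common summary, sketched below with made-up angles, is the two-dimensional orientation (nematic) order parameter S = 2⟨cos²θ⟩ − 1.

```python
import math

# 2D orientation (nematic) order parameter: S = 2*<cos^2(theta)> - 1,
# where theta is each fiber's angle relative to the principal axis
# (S = 1: perfectly aligned; S = 0: random mat).
angles_deg = [2, -5, 8, 1, -3, 12, -7, 4]  # hypothetical measured fiber angles

mean_cos2 = sum(math.cos(math.radians(a)) ** 2 for a in angles_deg) / len(angles_deg)
S = 2 * mean_cos2 - 1
print(f"order parameter S = {S:.2f}")  # close to 1 -> highly aligned mat
```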
Multi-Axial Electrospinning
Compared to monoaxial electrospinning, multi-axial electrospinning requires a special spinneret and greater consideration when choosing appropriate experimental parameters to succeed in fabricating multi-layered electrospun nanofibers. However, the fabricated multi-layered electrospun nanofibers expand the potential for electrospun scaffolds in multiple biomedical applications, including BTE. The most frequently studied multi-axial electrospinning technique is coaxial electrospinning, which is similar to monoaxial electrospinning except for a special spinneret and an associated modification (two pump-driven reservoirs). In the coaxial electrospinning technique, a coaxial spinneret consists of concentrically aligned inner (core) and outer (sheath) channels. For coaxial electrospinning, several experimental considerations have been addressed: (1) a sheath solution usually has higher viscosity and better conductivity than a core solution; (2) the flow rate of a sheath solution has to be faster than that of a core solution; (3) there should be low interfacial tension between the core and sheath solutions used; and (4) relatively volatile solvents are recommended for a sheath solution to create a stable Taylor cone [124] (a simple check of these rules is sketched at the end of this section). In addition, the selection of solvents for multi-axial electrospinning is also a cumbersome parameter for generating successful core/sheath nanofibers [125]. According to the authors' comments, each solvent system for both the core and sheath polymeric solutions affects the final structure of the core/sheath nanofibers. When the core and sheath solvents are miscible, each polymer used should not be soluble in the other solvent system. Otherwise, the solutes could precipitate, even within the tip of the spinneret used. In contrast, immiscible solvent systems create stable core/sheath nanofibers, even if each polymer can diffuse into the other solvent system, while miscible solvent systems are suitable for creating nanofibers through coaxial electrospinning. In addition to the advantages of monoaxial electrospun nanofibers, coaxial nanofibers bring more versatile features to the final tissue-engineered products [126]. The features can be summarized as follows: (1) the sheath portion can serve as a biophysical protective barrier for drugs deposited within the core portion; (2) the release of the drugs can be modulated by controlling the thickness of each portion while electrospinning; and (3) the mechanical properties of the coaxial nanofibers are adjustable to meet the mechanical requirements of native bone tissues. Due to the morphological benefits of coaxially electrospun scaffolds, numerous studies in BTE can be found elsewhere. Using a rotating needle collector, a study fabricated a coaxial PCL/HA-added PLA electrospun tube that was capable of growing human mesenchymal stem cells [127]. Additionally, BMP-2 growth factors were slowly released from the fabricated tubes, irrespective of the presence of HA. A study used tussah silk fibroin to incorporate HA into the sheath of coaxial electrospun scaffolds [128]. This coaxial electrospun scaffold also used tussah silk fibroin for the core. The core/sheath nanofiber can also be used to deliver a drug that has been deposited within the core of the core/sheath nanofibers.
A study incorporated TCP nanoparticles into the core of core/sheath electrospun scaffolds and compared the release profile of the TCP nanoparticles from the coaxial PLA nanofibers with morphologically different PLA nanofibers, including monoaxial nanofibers [129]. When the TCP nanoparticles were added, the average size of the electrospun nanofibers was significantly changed. The monoaxial nanofibers had average fiber diameters of 450 ± 72 nm, whereas the coaxial nanofibers had approximately double the diameter (890 ± 125 nm). Owing to the successful embedding of the TCP nanoparticles into the core part, the coaxial nanofibers had a smooth surface, indicating that no TCP nanoparticles were found on the surface of the nanofibers. As expected, the release of the TCP nanoparticles from the coaxial electrospun scaffolds was markedly delayed and extended for 36 days. While the monoaxial electrospun scaffolds released most TCP nanoparticles within several days, the coaxial electrospun scaffolds maintained constant release profiles. Multi-axial electrospinning would be an excellent platform for delivering agents (e.g., BMP-2 or -7 growth factors) that promote bone regeneration. In addition to serving as a biomimetic bone-like ECM, the core-sheath nanofibers can provide therapeutic agents in the desired manner. Moreover, the structural advantage of the core-sheath nanofibers is that these distinctive layers could retain the therapeutic activity of the incorporated agents. Hence, the multi-axial electrospinning strategy makes electrospun scaffolds capable of satisfying the unmet needs of various bone defects.
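The rule-of-thumb constraints for coaxial electrospinning listed earlier in this section (sheath viscosity, conductivity, and flow rate all exceeding those of the core) lend themselves to a simple pre-run sanity check; the parameter values below are placeholders rather than a validated recipe.

```python
from dataclasses import dataclass

@dataclass
class SpinningSolution:
    viscosity_cP: float
    conductivity_uS_cm: float
    flow_rate_mL_h: float

def check_coaxial(core, sheath):
    """Flag violations of the rule-of-thumb constraints listed earlier."""
    issues = []
    if sheath.viscosity_cP <= core.viscosity_cP:
        issues.append("sheath viscosity should exceed core viscosity")
    if sheath.conductivity_uS_cm <= core.conductivity_uS_cm:
        issues.append("sheath should be more conductive than the core")
    if sheath.flow_rate_mL_h <= core.flow_rate_mL_h:
        issues.append("sheath flow rate should exceed core flow rate")
    return issues

# Placeholder parameter values, not a validated recipe.
core = SpinningSolution(viscosity_cP=300, conductivity_uS_cm=5, flow_rate_mL_h=0.3)
sheath = SpinningSolution(viscosity_cP=900, conductivity_uS_cm=20, flow_rate_mL_h=1.0)
print(check_coaxial(core, sheath) or "parameters consistent with the guidelines")
```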
Simulated Body Fluid for Bone Scaffold Mineralization
Mineralized collagen fibrils are responsible for the elastic modulus of bone and bone fracture toughness. In addition to the density of apatite, the size, orientation, and localization of the deposited HA are parameters that affect the strength of bone [130]. An SBF is a solution preparation that creates a bone-like apatite layer upon various substrates, including polymers, ceramics, and metals [131]. Inspired by human blood plasma, several SBFs have been formulated by modifying different ion compositions (Table 3). By convention, the conventional SBF is called c-SBF. Its compositional formulation is similar to human blood plasma, with the exception of two ions, the chloride anion (Cl−) and the bicarbonate anion (HCO3−) [132]. The concentration of Cl− in c-SBF is higher than that of blood plasma, whereas c-SBF has a significantly lower HCO3− level than blood plasma. An interesting study recently showed that regular cell culture media might be an alternative to normal SBF [133]. Since the inception of the SBF formulation, a series of studies has been performed to improve the efficacy of SBF formulations. Compared to the conventional SBF (c-SBF) formulation, for example, three improved formulations have been studied: (1) revised SBF (r-SBF) represents a reduction in Cl− and an increase in HCO3− concentration compared to those of c-SBF; (2) ionized SBF (i-SBF) has lowered concentrations of two divalent cations (Ca2+ and Mg2+) compared to those of r-SBF; and (3) modified SBF (m-SBF) has a moderate HCO3− level compared to the levels of both r-SBF and i-SBF [134]. By comparing the three formulations, m-SBF was selected as the suggested SBF formulation, since m-SBF forms bone-like apatite on substrates and demonstrates a similar degree of storage stability to c-SBF when stored at 36.5 °C for 7 days. Revised SBF (r-SBF) and modified SBF (m-SBF) were much closer to the composition of human blood plasma, but those formulations were lacking in creating bone-like apatite on calcium-based materials. A revised SBF formulation called n-SBF, which stands for newly improved SBF, was also studied [135]. For example, a study created a microporous composite scaffold in which natural gellan gum (GG) and nanoparticulate bioactive glass (BAG) were blended in a solution containing calcium chloride (CaCl2) and mineralized with flowing SBF [136]. In this study, perfused SBF flows in the axial direction, with or without the presence of vertical direct compression, to create the best biomimetic scaffold containing HA for BTE. The authors observed that the perfused SBF flow forms cauliflower-like HA within the GG-BAG scaffolds that is comparable to HA crystals observed in vivo. However, the application of direct compression reduces the formation of HA, followed by the destruction of the GG-BAG scaffolds.
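For quick reference, the qualitative relationships among the SBF variants described above can be captured in a small lookup structure. This sketch is illustrative only; the absolute ion concentrations belong to Table 3 of the original article and are deliberately not reproduced here.

```python
# Qualitative lookup of the SBF variants discussed above; descriptions only,
# no absolute concentrations (those are in Table 3 of the original article).
SBF_VARIANTS = {
    "c-SBF": "conventional; Cl- higher and HCO3- lower than human blood plasma",
    "r-SBF": "revised; Cl- reduced and HCO3- increased relative to c-SBF",
    "i-SBF": "ionized; Ca2+ and Mg2+ lowered relative to r-SBF",
    "m-SBF": "modified; HCO3- level intermediate between r-SBF and i-SBF",
    "n-SBF": "newly improved formulation (Ref. [135])",
}

for name, note in SBF_VARIANTS.items():
    print(f"{name}: {note}")
```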
Simulated Body Fluids for Electrospun-Based Bone Scaffolds
Mineralized electrospun scaffolds have been fabricated by the simple immersion of electrospun nanofibers in different SBFs, ranging from regular to concentrated SBFs (Table 4). In this section, various mineralized electrospun scaffolds are addressed to confirm the potential of this excellent but straightforward strategy for electrospun biomineralization. PCL electrospun nanofibers were pretreated with NaOH (2 N, 24 °C for 12 h) and mineralized in SBF for up to 21 days [137]. From analysis of the selected area electron diffraction (SAED) pattern and energy-dispersive spectroscopy (EDS) measurements, it was confirmed that the mineralized PCL nanofibers exhibited a ring-shaped pattern similar to that of crystalline apatite. The Ca/P ratio was approximately 1.71, which was comparable to the value for HA (Ca/P ≈ 1.67). When used to cultivate MC3T3-E1 subclone 4 cells, which are known to be a good model of in vitro osteoblast differentiation via ECM signaling, the HA-mineralized PCL nanofibers showed better osteogenic performance potential than the non-treated nanofibers [138]. In another study using PCL electrospun scaffolds, vitamin D3 (VD3)-containing anionic sodium dodecyl sulfate (SDS) micelles were intercalated between layered double hydroxides (LDHs), which are known to be among the most biocompatible inorganic nanocarriers in drug delivery systems [139]. The fabricated VD3·LDH/PCL electrospun scaffolds were then mineralized within concentrated SBF (10×) (Figure 3). Interestingly, increasing the VD3·LDH nanohybrid content within the PCL-based electrospun scaffolds enhanced apatite-like crystal formation in vitro. Based on recent findings on the role of vitamin D3 in inducing osteoblastic differentiation, this study incorporated VD3 and evaluated the osteogenic potential of the VD3-loaded electrospun bone scaffolds with human osteoblast-like MG-63 cells.
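As a worked example of the Ca/P check used above, the atomic ratio obtained from EDS can be compared against stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2, whose Ca/P is 10/6 ≈ 1.67. In the sketch below, only the 1.71 and 1.67 reference values come from the text; the example atomic percentages are invented for illustration.

```python
# Ca/P atomic-ratio check against stoichiometric hydroxyapatite,
# Ca10(PO4)6(OH)2 -> Ca/P = 10/6 ~ 1.67. Example EDS percentages are invented.
def ca_p_ratio(at_pct_ca: float, at_pct_p: float) -> float:
    return at_pct_ca / at_pct_p

HA_REFERENCE = 10 / 6  # ~1.667 for stoichiometric HA

measured = ca_p_ratio(at_pct_ca=22.2, at_pct_p=13.0)  # ~1.71, as reported above
print(f"measured Ca/P = {measured:.2f} vs HA reference = {HA_REFERENCE:.2f}")
```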
For improving the mineralization of PLLA electrospun nanofibers, a concentrated SBF (10×) was utilized after treatment with either NaOH (0.1 M) or co-blending with gelatin (10%) [140]. Such pretreatments assist the mineralization of the electrospun nanofibers within 2 h in the concentrated SBF. However, co-blending with gelatin yielded better stress and elastic modulus values than the NaOH-treated PLLA nanofibers, indicating that the gelatin-derived hydrophilic properties of the 10% gelatin/PLLA nanofibers could facilitate mineralization. In another study, amorphous calcium particles (ACPs) were added to enhance the mineralization process on PLA electrospun nanofibers; PLA, like PLLA, is a hydrophobic synthetic polymer [141]. Only a 1-day treatment in an SBF solution was needed to show significant growth of inorganic HA on the surface of ACP-containing PLA nanofibers. In contrast, no change was observed on the surface of the pure PLA nanofibers, even after immersion in SBF for 7 days.

Table 4. Exemplary uses of simulated body fluids (SBFs) in electrospun-based bone scaffolds.
Type of Electrospun Scaffold | Treated SBF | Protocol Descriptions | Ref.
PLGA/collagen/gelatin (2:1:1 weight ratio) | 10× m-SBF | The mineralized PCG nanofibers were fragmented and loaded with BMP-2 mimicry peptides for alveolar bone regeneration in vivo. | [142]
Lignin/PCL | 1.5× SBF | The fibrous lignin/PCL films were completely coated by HA within 5 days. | [143]
Alginate/PLA | 1.5× SBF | The alginate/PLA composite was crosslinked by Ca2+ and mineralized. Anionic alginate assists with the nucleation and growth of calcium phosphate apatites. | [144]
Polysilsesquioxane (POSS)-loaded PLA | 1× SBF | The POSS-PLA showed accelerated HA mineralization. | [145]

Likewise, carbonate nano-hydroxyapatite (n-HA) was incorporated into an electrospinning solution to induce mineralization [146]. Although electrospun nanofibers fabricated from a copolymer of L-lactide and DL-lactide (PLDL) were not mineralized properly during a 7-day immersion in a 1.5-fold SBF solution, n-HA/PLDL nanofibers successfully underwent full HA mineralization after 3 days of immersion. Similarly, PCL nanofibers containing HA nanoparticles (NPs) were mineralized in an SBF solution for 10 days at 37 °C [147]. It was confirmed that the embedded HA-NPs initiate the crystallization of HA during the SBF treatment, and that an incremental addition of HA-NPs to PCL colloidal solutions improved the formation of bone-like apatite.

Figure 3. Scanning electron microscope images of as-prepared scaffolds and after 3 and 7 days of incubation in 10× SBF. The formation of spherical apatite-like crystals increased significantly after adding nanohybrids to the scaffolds. Pure PCL and VD3·LDH/PCL electrospun scaffolds containing 1.25, 2.5, and 5 wt% of vitamin D3 are denoted PCL, 1.25VL/P, 2.5VL/P, and 5VL/P, respectively. Reprinted with permission from Ref. [139]. Copyright 2020 Elsevier.
For better mineralization, a study used gelatin and amino acids (e.g., glycine, aspartic acid, and arginine) [148]. A polymeric blend of PLLA and gelatin (1:1 weight ratio) was used for electrospinning, while each amino acid (2.5 mM) was supplemented into a concentrated SBF (2.5×). At different incubation periods, the authors observed significant differences in mineralization. Compared to the concentrated SBF (2.5×) alone, the presence of amino acids facilitated HA crystal formation, transforming it from amorphous calcium phosphate to hierarchical HA (Figure 4). Among the amino acids, the authors also noted that glycine promoted the formation of well-evolved needle-like HA crystals. It was speculated that adding amino acids to SBF would assist with inducing biomineralization in electrospun-based bone scaffolds.
Interestingly, a study used a charged protein that could enhance the mineralization process while applying a concentrated SBF (10×) [149]. In this study, phosvitin (PV), one of the egg-yolk phosphoproteins (usually from hen eggs), was utilized to obtain a better rate of mineralization on the surface of collagen nanofibers [150]. The inclusion of PV in the concentrated SBF (10×) resulted in the rapid formation of apatite comparable to HA within 4 h, as confirmed by EDS analysis. Instead of incorporating an additive or treating the surface of pure electrospun nanofibers with NaOH, another study applied a collagen coating onto a prepared PLGA nanofibrous mesh (NFM) [151]. After collagen coating, a concentrated SBF (5×) treatment resulted in the formation of tiny HA nanoparticles on the NFM. The HA-coated NFM supported the growth of MC3T3-E1 osteoblasts and their subsequent differentiation. Additionally, the osteogenic differentiation of BMSCs proved the potential of the HA-deposited, collagen-coated PLGA electrospun mesh in BTE. Concentrated SBF (10×) was also utilized to coat blended electrospun nanofibers of hydroxyethylcellulose (HEC) and polyvinyl alcohol (PVA) with bone-like apatite within 2 days [152].
The mineralization technique is also a promising strategy for electro-conductive biomaterials that are capable of transferring electrical stimulation to cells. Electrical stimulation can help to promote bone regeneration. Under periodic electrical stimulation (1 h per day, 0.4 ms pulses, 20 Hz frequency) at different voltages (1, 5, 10, and 15 V), human fetal osteoblastic cells (hFOB 1.19) cultured on the surface of anodized nanotubular titanium were responsive [153]. Significantly, an electric field generated at 15 V enhanced osteoblast growth by up to 72% after 5 days. In addition, osteoblasts move toward the cathode, while osteoclasts move in the opposite direction (toward the anode) [154]. As mentioned in this study, this cell galvanotaxis is a unique property that may contribute to the development of new BTE approaches for difficult-to-access bone fractures. Electro-conductive electrospun carbon nanofibers (CNFs) would be an excellent substrate for electrical stimulation in BTE applications [155]. Using an SBF formulation, a study successfully synthesized HA-mineralized electrospun CNFs derived from polyacrylonitrile (PAN) [156]. This study confirmed that a typical SBF post-treatment created hydrophilic and biomimetic CNFs while preserving the conductive properties of carbon. SBF post-treatment for as little as 24 h resulted in the uniform mineralization of CNFs when pristine CNFs (P-CNFs) were pretreated with a concentrated NaOH solution (5 M) at 45 °C. The pretreated CNFs (T-CNFs) were thought to form mineral-phase nucleation sites on their surfaces owing to the exposure of surface carbonyl groups. In a rat bone defect model in which 6 mm segmental damage was created in the femurs of Wistar rats, the mineralized CNFs (M-CNFs) completely restored the femoral bone defects within 8 weeks (Figure 5). In summary, SBF treatment is a standardized protocol for conferring biomineralization onto electrospun nanofibers. Interestingly, there have been attempts to boost the biomineralization process via the following approaches: (1) pretreating as-spun scaffolds to create a more wettable, charged surface; (2) adding functional agents that induce or expedite SBF nucleation; and (3) coating the surfaces of electrospun scaffolds to enhance the initiation of the mineralization process.

Figure 5. Diagnostic 3D imaging (CT scans) of femur bone defects after 8 weeks of injury. The arrow shows the unrepaired defect site in the control group (i) and the bone defect repaired by normal tissue growth induced by the M-CNFs (j). Reprinted with permission from Ref. [156]. Copyright 2020 Nature Publishing Group.
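As a back-of-the-envelope check on the stimulation protocol cited above (1 h per day, 0.4 ms pulses, 20 Hz), the pulse count and duty cycle follow directly from those numbers; the short sketch below simply computes them, with the voltage levels (1-15 V) set separately in the cited study.

```python
# Pulse count and duty cycle implied by the cited protocol
# (1 h per day, 0.4 ms pulse width, 20 Hz).
PULSE_WIDTH_S = 0.4e-3
FREQUENCY_HZ = 20
SESSION_S = 3600  # 1 h daily session

pulses_per_session = FREQUENCY_HZ * SESSION_S   # 72,000 pulses per day
duty_cycle = PULSE_WIDTH_S * FREQUENCY_HZ       # 0.008 -> 0.8% "on" time

print(f"{pulses_per_session} pulses/day, duty cycle {duty_cycle:.1%}")
```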
Conclusions
Improvements in the development of synthetic bone grafts aspire to fill the gap between the constant need for bone substitutes and the shortage of bone tissue for treating bone defects and fractures, including osteoporotic fractures. Electrospinning has been extensively explored as a promising manufacturing strategy for creating biomimetic bone scaffolds. Electrospun scaffolds have high porosity, a large surface-area-to-volume ratio, and structural similarities to native bone ECM. For bone-like scaffolds, imitating the biphasic nature of native bone ECM is critical because mineralization makes bone harder and more capable of mechanically supporting the body, enabling movement, and supplying a dynamic reservoir of biologically essential elements (e.g., minerals, blood cells, and growth factors). Therefore, in-situ mineralization techniques with different simulated body fluids (SBFs) have been extensively adopted to create fully biomimetic electrospun-based bone scaffolds. This reliable and straightforward post-treatment has successfully produced various mineralized electrospun bone-like scaffolds. As reviewed in this study, recent research progress in SBF-based mineralization protocols has made mineralized electrospun scaffolds much more versatile in repairing bone defects and fractures. | 2022-05-29T06:22:43.438Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "a869aaef15b0545be1c791c298e6dea109358c27",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "05a208d7f22b769010de4e2874bd70e04af9cf70",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237460682 | pes2o/s2orc | v3-fos-license | Gabor Feature Representation and Deep Convolution Neural Network for Marine Vessel Classification
The Vessel Surveillance System (VSS), a crucial tool for fisheries monitoring, control, and surveillance, has been required by fisheries management agencies to address the current depressed state of the world's fisheries. An important issue in a vessel surveillance system is the classification of vessels. However, several factors, such as lighting, congestion, and sea state, affect a vessel's appearance, making classification more difficult. There are two main approaches to vessel classification: methods based on traditional hand-crafted characteristics and methods using convolutional neural networks. In this paper, we combine Gabor feature representation (GFR) and a deep convolution neural network (DCNN) to classify vessels. Gabor filters at different orientations and scales are used to extract vessel characteristics and create a new image of each vessel, which serves as the DCNN's input. The visible and infrared spectrums (VAIS) dataset, the world's first publicly available dataset of paired infrared and visible vessel images, was used to validate the proposed method (GFR-DCNN). The numerical results showed that GFR-DCNN is more accurate than other methods.
Among deep-learning approaches for ship classification, Shi et al. (2018) applied the 2D discrete fractional Fourier transform (2D-DFrFT) and a two-branch CNN to obtain features. The 2D-DFrFT and the completed local binary pattern (CLBP), which extract various traditional features, were used to obtain high-level abstract features (Shi et al., 2019). Akilan et al. (2018) used Inception-V3 and AlexNet to obtain image descriptions and normalized the various features to an identical feature space. The authors in (Zhang et al., 2015) used the VGG-16 CNN structure with the ImageNet database to extract features from the 15th layer. Khellal et al. (2018) used machine learning to capture the relationships among multi-color characteristics and introduced a new method based on the extreme learning machine (ELM) for detecting discriminative CNN features.
The authors in (Shi et al., 2019) created a classification framework, ME-CNN, using a multi-feature ensemble. Deep learning-based methods offer powerful feature extraction compared with conventional methods.
Gabor filters have many advantages and achieve higher accuracy than other feature extraction techniques such as the Local Binary Pattern (LBP) and Principal Component Analysis (PCA) (Allagwail et al., 2019). This work introduces a novel method for marine vessel classification using a deep learning approach based on the Gabor filter. We create a Gabor feature representation to extract the features of vessel images and apply a deep convolution neural network to classify these extracted features.
Gabor Feature Representation and Deep Convolution Neural Network
Gabor filter
The 2D Gabor filter, the core of Gabor filter-based feature extraction, is created by multiplying an elliptical Gaussian with a sinusoidal plane wave (Kamarainen, 2010). The sharpness of the filter along its major and minor axes is controlled by γ and η, respectively. The filter response has a compact closed form:

\psi(x, y; f, \theta) = \frac{f^{2}}{\pi \gamma \eta} \exp\!\left( -\frac{f^{2}}{\gamma^{2}} x'^{2} - \frac{f^{2}}{\eta^{2}} y'^{2} \right) \exp\!\left( j 2\pi f x' \right), \qquad (1)

with x' = x \cos\theta + y \sin\theta and y' = -x \sin\theta + y \cos\theta, where θ is the rotation angle of both the Gaussian primary axis and the plane wave, f is the center frequency of the filter, η is the sharpness along the secondary (minor) axis, and γ is the sharpness along the primary (major) axis.
A Gabor bank, or set of Gabor features, is created from the responses of the Gabor filters in Equation (1), obtained by applying multiple filters at various frequencies f_u and orientations \theta_v. Frequency in this context corresponds to scale and is described by:

f_u = \frac{f_{\max}}{k^{u}}, \qquad u = 0, 1, \ldots, U - 1, \qquad (2)

where f_{\max} is the maximum frequency wanted, f_u is the u-th frequency, and k > 1 is the frequency scaling factor. The filter orientations are described by:

\theta_v = \frac{2\pi v}{V}, \qquad v = 0, 1, \ldots, V - 1, \qquad (3)

where V is the total number of orientations and \theta_v is the v-th orientation.
The scales of the Gabor bank are therefore spaced exponentially, while the orientations are spaced linearly; valid orientation values vary from 0 to 2π, and the bands of the Gabor filters rotate across this range of orientation values. The filter responses, computed in the frequency domain, are transformed back to the spatial domain by the inverse fast Fourier transform (IFFT), and these spatial-domain values are then used to create the final Gabor feature representation.
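A minimal NumPy sketch of the bank defined by Equations (1)-(3) is given below. The parameter settings (f_max, k, U, V, and the kernel size) are illustrative assumptions, since the paper's exact configuration is not stated in this excerpt, and scipy's fftconvolve stands in for the frequency-domain filtering plus IFFT step described above.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(f, theta, gamma=1.0, eta=1.0, size=31):
    """Sampled spatial-domain Gabor filter of Eq. (1)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)    # rotated primary axis x'
    yp = -x * np.sin(theta) + y * np.cos(theta)   # rotated secondary axis y'
    envelope = np.exp(-(f / gamma) ** 2 * xp ** 2 - (f / eta) ** 2 * yp ** 2)
    carrier = np.exp(1j * 2 * np.pi * f * xp)     # complex sinusoidal plane wave
    return (f ** 2 / (np.pi * gamma * eta)) * envelope * carrier

def gabor_bank(f_max=0.25, k=np.sqrt(2.0), U=5, V=8):
    """Exponential frequencies, Eq. (2); linearly spaced orientations, Eq. (3)."""
    freqs = [f_max / k ** u for u in range(U)]
    thetas = [2.0 * np.pi * v / V for v in range(V)]
    return [gabor_kernel(f, t) for f in freqs for t in thetas]

def gabor_feature_representation(image):
    """Stack of magnitude responses used as the new input image for the DCNN."""
    responses = [np.abs(fftconvolve(image, kern, mode="same"))
                 for kern in gabor_bank()]
    return np.stack(responses)   # shape: (U*V, H, W)
```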
Deep convolution neural network
In the proposed method, derived from AlexNet, we made some modifications to the network architecture.
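Since the exact modifications are not specified in this excerpt, the following is only a hypothetical AlexNet-style sketch (in PyTorch) adapted to accept the stacked Gabor responses as a multi-channel input; the channel count (U*V = 40), layer sizes, and the assumed six VAIS output classes are all illustrative.

```python
import torch.nn as nn

class GaborDCNN(nn.Module):
    """Illustrative AlexNet-derived classifier over stacked Gabor responses."""
    def __init__(self, in_channels=40, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((6, 6)),  # fixed-size feature map for the head
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(256 * 6 * 6, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```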
Dataset
We employ the VAIS dataset (Zhang et al., 2015), summarized in Table 1, and use only the visible images for comparison with other methods. As seen in Table 1, some classes have only 140 images in the training set, while other classes have 412 images. Fig. 3 shows visible samples from each class in the dataset. Various factors, including the sizes of the ships, uneven illumination, and the environment, make feature extraction and classification demanding.
Experimental results
We compare GFR-DCNN with other approaches, and the outcomes are presented in Table 2; the experimental conditions are similar for all approaches. Table 2 covers not only deep learning methods from the literature but also conventional methods (HOG + SVM, LBP + SVM, SRDA, SFLPP), and the proposed GFR-DCNN achieved the best accuracy.
GFR-DCNN improves on the conventional classification methods owing to the combination of the Gabor filter for feature extraction and the DCNN for classification. In most cases, both GFR and DCNN contribute to the higher accuracy.
For example, GFR-DCNN yields 11.33% higher accuracy than the LBP + SVM method (Zhang et al., 2019); in this case, GFR exceeds LBP and DCNN surpasses SVM in the classification task.
The only exception is the Gnostic Field + CNN method (Zhang et al., 2015), against which only the GFR component is more precise.

Table 2. Comparison of various methods on the VAIS database.
Method | Accuracy (%)
The proposed GFR-DCNN | 87.60

The method of (Zhang et al., 2015) uses VGG16 as the CNN backbone, which outperforms AlexNet in classification tasks (Rangarajan et al., 2018). Even so, GFR is more accurate than the Gnostic Field, and GFR-DCNN achieves 6.6% higher accuracy than the Gnostic Field + CNN method.
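Reading the stated improvements as absolute percentage-point gains (our assumption), the baseline accuracies implied by Table 2 can be recovered arithmetically:

```python
# Recovering the implied comparison accuracies from the deltas stated above,
# assuming the gains are absolute percentage points.
gfr_dcnn = 87.60                 # proposed method (Table 2)
lbp_svm = gfr_dcnn - 11.33       # -> 76.27 (Zhang et al., 2019)
gnostic_cnn = gfr_dcnn - 6.6     # -> 81.00 (Zhang et al., 2015)
print(f"LBP+SVM ~ {lbp_svm:.2f}%, Gnostic Field + CNN ~ {gnostic_cnn:.2f}%")
```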
Conclusion
Vessel classification is an essential topic in computer vision. Table 3 presents the confusion matrix for multi-class classification on the VAIS database. | 2021-09-09T20:45:18.647Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "e1119eea98f30808720c42065e800ea9638b7d23",
"oa_license": "CCBYNC",
"oa_url": "https://www.jcdp.or.kr/upload/pdf/kscdp-2021-8-3-121.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b42badf28d691b986975988abb3928205e08d396",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
227135971 | pes2o/s2orc | v3-fos-license | UK Guidelines on the Diagnosis and Treatment of Breast Implant‐Associated Anaplastic Large Cell Lymphoma on behalf of the Medicines and Healthcare products Regulatory Agency Plastic, Reconstructive and Aesthetic Surgery Expert Advisory Group
Summary Breast implant‐associated anaplastic large cell lymphoma (BIA‐ALCL) is an uncommon T‐cell non‐Hodgkin Lymphoma (NHL) associated with breast implants. Raising awareness of the possibility of BIA‐ALCL in anyone with breast implants and new breast symptoms is crucial to early diagnosis. The tumour begins on the inner aspect of the peri‐implant capsule causing an effusion, or less commonly a tissue mass to form within the capsule, which may spread locally or to more distant sites in the body. Diagnosis is usually made by cytological, immunohistochemical and immunophenotypic evaluation of the aspirated peri‐implant fluid: pleomorphic lymphocytes are characteristically anaplastic lymphoma kinase (ALK)‐negative and strongly positive for CD30. BIA‐ALCL is indolent in most patients but can progress rapidly. Surgical removal of the implant with the intact surrounding capsule (total en‐bloc capsulectomy) is usually curative. Late diagnosis may require more radical surgery and systemic therapies and although these are usually successful, poor outcomes and deaths have been reported. By adopting a structured approach, as suggested in these guidelines, early diagnosis and successful treatment will minimise the need for systemic treatments, reduce morbidity and the risk of poor outcomes.
Since the first description of the use of a silicone prosthesis for breast augmentation in 1961, 1 it is estimated that up to 35 million women worldwide have had breast implants, with a recent prevalence rate as high as 3.3% reported in the Netherlands. [2][3][4] As implanted foreign bodies, breast implants are not without risks and remain among the most researched medical devices of all time. Reoperations and local complications have traditionally been the most frequent cause for concern following reviews into the safety of silicone breast implants, reported by the United Kingdom (UK) Independent Review Group in 1998 5 and the United States Institute of Medicine in 1999. 6 However, they found no evidence of an increase in breast or other malignancies in women with implants, although a number of studies since then have identified an uncommon but unique form of lymphoma that arises in association with breast implants. This is called breast implant-associated anaplastic large cell lymphoma (BIA-ALCL). The first such case was reported in the literature in 1997, 7 with five additional cases over the next decade. 8 In contrast to other lymphomas involving the breast, breast cancer, or benign lesions of the breast, the parenchyma is usually not involved in BIA-ALCL except in cases where the malignancy extends through the implant capsule into the surrounding tissue.
For many years the association of breast implants with ALCL, a subtype of T-cell non-Hodgkin Lymphoma (NHL), was considered a random event as the total incidence of ALCL in the breast as a primary malignancy was 0.037 per million women per year. 9 By contrast, breast cancer has an incidence of 936 per million women per year (a lifetime risk of 1 in 8), 10,11 a rate that is the same irrespective of whether a breast implant is present or not. The Food and Drug Administration (FDA) in the USA and the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK issued medical device alerts (MDA) in 2011 and 2014, by which time three cases of BIA-ALCL had been reported in the UK 12,13 with a cumulative total of 173 cases reported in the global review by Brody et al. in 2015. 14 Growing evidence indicated that BIA-ALCL is a distinct lymphoid malignancy unique to the patient cohort in which it was being observed, such that the World Health Organisation added it to its classification of lymphoid neoplasms the following year as a provisional entity alongside systemic/nodal and cutaneous ALCL. 15 As of April 2020, there are 68 confirmed reports of BIA-ALCL in the UK 13 and about 800 cases worldwide, with 33 deaths attributed to BIA-ALCL. 4,16 Both saline and silicone-filled implants are implicated. Of note, there have been no reported cases of BIA-ALCL in patients with a breast implant history that is confirmed to only include a smooth device, suggesting that textured implants are the causative factor. 14, [16][17][18][19][20] The absolute risk of developing BIA-ALCL is small, ranging broadly, depending on the study conducted and geographic location, from roughly 1:354 to 1:37 000 patients with textured implants. 3,[20][21][22][23][24][25] The figures may lack accuracy due to the various methods of reporting and dearth of knowledge of the true denominator. While causation and pathogenesis are still the subject of broad investigation, the higher rates are associated with macro-textured surfaces (higher surface area/roughness) whether placed for reconstructive or aesthetic reasons. [18][19][20]24 This information should not only inform a prospective change in practice, but also emphasises the need for a systematic approach to investigate patients who present with problems with their implants.
This UK guideline for the diagnosis and treatment of BIA-ALCL builds further on the United States National Comprehensive Cancer Network (NCCN) [26][27][28] and UK pathology guidelines 29 to better reflect the unique differences that exist in UK practice where there is an approximate 50:50 split between implant operations in the private sector and those conducted within the National Health Service (NHS); the majority of implant breast reconstruction (86%) is performed within the NHS and virtually all cosmetic implant surgery (98%) takes place in the private sector. 30 Specialist surgery is usually undertaken by breast or plastic surgeons who should be members of one of the three Surgical Societies: the Association of Breast Surgery (ABS), the British Association of Plastic, Reconstructive and Aesthetic Surgeons (BAPRAS) or the British Association of Aesthetic Plastic Surgeons (BAAPS).
Implant monitoring
In the UK, routine radiological surveillance to assess implant health is not recommended within the NHS or private sector. 31 Any additional breast imaging is usually symptom-driven by patient/surgeon concern or following clinical review.
In order to monitor and improve patient safety, a Breast and Cosmetic Implant Registry (BCIR) was introduced in the UK in October 2016, recording the implants that have been used for patients and the organisations and surgeons that have carried out the procedures. The registry aims to provide the data needed to detect any early safety signals in relation to an implant and provide a mechanism for managing patients in the event of an implant recall. All providers of breast implant surgery are expected to participate. 30

Referral for breast assessment

Patients with implants who develop breast symptoms may initially present to their general practitioner (GP) or the private surgeon/clinic. In the absence of private medical insurance, the cumulative cost of consultations and investigation in the independent sector can be prohibitive, such that anyone with a breast symptom and a breast implant should be referred to the local NHS symptomatic breast clinic irrespective of the initial pathway for implant surgery, as long as they meet the NHS UK residency eligibility criteria. This will improve the quality of assessment, have no self-funding implications and should reduce the risk of missed or late diagnosis.
Patients without breast symptoms but concerned about BIA-ALCL and/or their breast implant health can be reassured by their GP or private breast surgeon/clinic and directed to the MHRA website. 13
Clinical presentation and investigation
Symptoms and signs

BIA-ALCL presents at a median of 8-10 years following reconstructive or cosmetic breast implantation. Early occurrence has been reported and the diagnosis should therefore be considered in any case where implants have been in situ for longer than 12 months. 14,17,28,32,33 The lymphoma develops from the luminal aspect of the peri-implant capsule (85%), commonly precipitating the rapid development of a periprosthetic effusion, resulting in distortion of the breast including breast swelling or new-onset breast asymmetry. While commonly only one breast is affected, rare bilateral cases have been reported. 33,34 Less commonly (15%), presentation is with a palpable mass, or a combination of effusion and mass. 17,[19][20][21]33 More subtle presentations may occur that are difficult to identify, particularly in the presence of pre-existing breast asymmetry (Fig 1).
Approximately one third of patients report pain and additional signs such as erythaema (14%), or skin lesions/ulceration (8%), 32,33 (Fig 1D). In a small proportion of cases, local dissemination presents with ipsilateral axillary, supraclavicular, internal mammary chain or mediastinal lymphadenopathy. In 9% of cases, systemic 'B' symptoms consisting of unexplained weight loss, fevers or night sweats, are observed. 33 BIA-ALCL may occasionally be an incidental Guideline finding on routine histology after capsulectomy for capsular contraction or implant rupture. 32,35 There are important differentials to consider in presentations of both the 'effusion-only' and 'mass-forming' subtypes of BIA-ALCL. Late seroma is a rare and usually benign complication that affects up to 0Á1% of all breast implant procedures. 36,37 Despite the broad range of causes (silicone bleed, trauma, infection, idiopathic, haematoma, BIA-ALCL and implant rupture), up to 10% of cases may be attributable to BIA-ALCL. In the assessment of patients presenting with a mass-forming lesion or lymphadenopathy, important differentials include reactions to silicone, primary breast cancer, other lymphoma subtypes, sarcoma and metastases from other primary malignancies such as melanoma; all occur at a significantly greater frequency than BIA-ALCL. In contrast to other lymphomas involving the breast, breast cancer or benign lesions of the breast, the parenchyma is usually not involved in BIA-ALCL except in cases where the malignancy extends through the implant capsule into the surrounding tissue. Skin lesions in isolation may represent primary cutaneous ALCL, rather than BIA-ALCL.
Initial assessment
Investigation for a diagnosis of BIA-ALCL should be conducted in clinical units equipped with the necessary diagnostic expertise and should follow the proposed UK guidelines diagnostic algorithm (Fig 2), based on the principle of triple assessment: clinical examination, imaging and biopsy.
A detailed medical history should be taken that includes the patient's family history of cancer, as this may prompt a genetics referral in accordance with the updated National Institute for Health and Care Excellence (NICE) guidelines on familial breast cancer (CG164). 38 Of note, some cases of BIA-ALCL have been linked to Li Fraumeni Syndrome 39 and BRCA gene mutations, 3,40 and whether a genetic predisposition for breast cancer is similarly a risk factor for BIA-ALCL is as yet unanswered and needs further investigation to clarify the risk. At present, patients diagnosed with BIA-ALCL are not eligible for genetic testing under NHS commissioning guidelines unless the standard family history criteria are met. 41

Radiology in BIA-ALCL

Imaging for BIA-ALCL can be challenging, due to its unique biology and frequently non-specific appearance. 35 It is important that those performing breast imaging and clinicians consider the diagnosis of BIA-ALCL in the appropriate setting.
Ultrasound (US)
US is the initial investigation of choice to assess pain, swelling or a mass related to a breast implant. 28,42 It has a sensitivity of over 80% for detecting a peri-implant collection, with a specificity of less than 50% in elucidating the underlying cause. 35 US should include assessment for axillary lymphadenopathy and evaluation of the contralateral implant, where present.
US evaluation is operator dependent. The image should be optimised to evaluate the implant membrane, capsule and material contents along with the adjacent breast parenchyma. The entire implant should be included in the field of view. A high-frequency (7-14 MHz) linear array probe should be used to delineate the membrane and capsule. 43 Knowledge of implant type greatly assists assessment and will avoid potential misinterpretation. The appearance of dual lumen implants can mimic peri-implant effusions and intracapsular leaks. A small volume of fluid (< 10 ml) around an implant is often seen as a normal incidental finding and in an asymptomatic patient does not warrant further investigation. 28,43,44 Effusions associated with BIA-ALCL are usually homogeneous (Fig 3A) with inflammatory features in the periprosthetic breast tissue and in some cases a thickened irregular capsule. Masses can also be observed and may be solid or mixed cystic/solid and are usually ovoid (Fig 3B).
Mammography
If the patient is over 40 years of age, mammography should also be undertaken. Mammography has a low sensitivity and specificity for BIA-ALCL, but it may be used to assess for any potential mimics/masses and other diagnoses including in situ and invasive primary breast malignancy. In cases of BIA-ALCL, the capsule may be thickened and the membrane contour may be disrupted. 35
Magnetic Resonance (MR) imaging
In cases of diagnostic uncertainty or inconclusive US, MR imaging should be undertaken to assess the implant, effusion and capsule, and for any potential mass and local lymphadenopathy. When the diagnosis has been established from the initial US-guided fine-needle aspiration (FNA) or core biopsy, MR imaging should be performed to assess disease extent and aid surgical planning. Non-contrast sequences (T2-weighted and silicone-selective) may show implant membrane disruption (rupture), pericapsular effusions and signs of gel bleed (Fig 3C, D). In addition, dynamic contrast-enhanced (DCE) sequences should be utilised to assess for capsular enhancement (Fig 3E) and masses which may not have been detected on US. 45
FDG PET/CT
2-[Fluorine-18]fluoro-2-deoxy-D-glucose positron emission tomography/computed tomography (FDG PET/CT) plays an important role in the oncological staging of the majority of subtypes of lymphoma, although BIA-ALCL was recognised subsequent to the most recent Lugano Classification Guidelines and so the utility of FDG PET/CT in this context has yet to be formally validated. 46 FDG PET/CT should be undertaken for pre-operative staging of local extent and distant disease (Fig 4A-C). 26 Carrying out PET/CT prior to the surgical intervention is necessary because post-surgical inflammation in the chest wall, surrounding breast and regional nodes may at times persist for a few months and hinder the identification of the uncommon patients who have extracapsular or nodal involvement. PET/CT is also required for response assessment (pre- or post-surgery) where systemic therapy has been administered (Fig 4D-F). 26,28,35,44

Evaluation of peri-implant effusion and/or mass lesions

Where an effusion is present, the key to diagnosis of BIA-ALCL is FNA of the entire volume of peri-implant fluid for initial cytology and then secondary assessment where indicated (please see the subsequent section on preparation for pathology), and/or 14-gauge core biopsy of any associated capsular mass or pathological node, as per the diagnostic algorithm presented in Fig 2. If capsular nodular lesions detected on imaging are not amenable to core biopsy, open surgical excision or repeat interval imaging should be considered.
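Purely as an illustration of the triage logic in Fig 2 and the preceding imaging sections, the main decision points can be sketched in Python. This is a schematic restatement for readers, not clinical decision-support software; the function and field names are invented, and the real algorithm includes nuances not captured here.

```python
# Schematic (non-clinical) restatement of the diagnostic triage described
# above and in Fig 2. All identifiers are invented for illustration.
def triage(effusion_present, effusion_ml, mass_or_node_present, age_years):
    steps = ["ultrasound of the breast, axilla and contralateral implant"]
    if age_years > 40:
        steps.append("mammography (assess for mimics and primary breast malignancy)")
    if effusion_present and effusion_ml >= 10:
        # small-volume (<10 ml) peri-implant fluid is a common incidental finding
        steps.append("FNA of the ENTIRE peri-implant fluid volume for cytology")
    if mass_or_node_present:
        steps.append("14-gauge core biopsy of capsular mass / pathological node")
    steps.append("MR imaging if US is inconclusive, or once a diagnosis is made")
    return steps

print(triage(effusion_present=True, effusion_ml=80,
             mass_or_node_present=False, age_years=52))
```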
The chance of an accurate diagnosis is greatest on the initial peri-implant aspirate, and small-volume aspiration or subsequent repeat aspiration(s) are associated with greater false-negative cytology, due to a dilution effect. 29 The main sample is sent as three separate specimens with these suggested volumes: Haematological Malignancy Diagnostic Service (HMDS), 10 cc in two purple-top EDTA tubes; microbiology, 5-10 cc in a white-capped sterile universal container; cytology should receive the entire remaining volume (at least 50 cc but can be over 500 cc), sent in multiple standard universal containers.
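A minimal sketch of the suggested three-way split is shown below; the 10 cc and 5-10 cc volumes come from the text, while the choice of exactly 5 cc for microbiology and the function name itself are illustrative assumptions.

```python
# Illustrative sketch of the suggested aspirate allocation described above.
def split_aspirate(total_cc: float) -> dict:
    """Suggested split per the guideline; assumes a generous total volume."""
    hmds = 10.0   # two purple-top EDTA tubes for HMDS
    micro = 5.0   # 5-10 cc in a sterile universal container; 5 cc assumed here
    cytology = total_cc - hmds - micro  # entire remaining volume (ideally >= 50 cc)
    return {"HMDS (EDTA)": hmds, "microbiology": micro, "cytology": cytology}

print(split_aspirate(100.0))
# {'HMDS (EDTA)': 10.0, 'microbiology': 5.0, 'cytology': 85.0}
```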
Local pathways must be established to ensure the correct handling and timely assessment of specimens and the request forms must clearly state concerns for a diagnosis of BIA-ALCL. It is prudent to alert the lab to prevent a delay in analysis. The cytology department should be advised that the entire effusion sample is to be analysed.
Primary histopathological assessment
We recommend that the evaluation of peri-implant associated effusions and tissue masses is conducted as a two-stage process where the primary assessment is morphological. Tumour cells may be found in both the fluid and the capsule mass (where both are present) but may be seen in only one or the other. 14,24 Cells within fluid samples are collected by centrifugation to produce cytospins which are stained according to local preferences (Fig 5). The cells in the remaining fluid should also be collected by centrifugation to improve the diagnostic yield. The cells should be fixed with liquid preservative, subject to centrifugation and processed to create cytoblocks for immunohistochemistry. The characteristic morphological abnormalities seen on standard cytology are regarded as a gold standard pre-requisite for a diagnosis of BIA-ALCL (Fig 5A, B). 29,47 This primary assessment should be conducted first; acellular samples and those composed entirely of bland inflammatory cells can be reported as negative and do not require further analysis. 29 While the vast majority of peri-implant effusions are not related to ALCL, patients should be made aware of the limitations of diagnostic testing and the possibility of false negative results, given cytologic assessment to detect BIA-ALCL has a sensitivity of about 78%. 35 If clinical or radiological suspicions remain after negative cytology, multidisciplinary team (MDT) discussion and referral to a tertiary centre are recommended. Secondary assessment of cytospins/cytoblocks may also be conducted as described below. In the absence of suspicion, patients should be followed up at three months to ensure that the swelling does not recur. Patients should be advised to report any symptoms that return.
Secondary assessment
All diagnoses of BIA-ALCL should be reviewed by a haematopathologist within a specialist integrated HMDS, as per NICE guidelines. 48 Where BIA-ALCL is suspected morphologically, further analysis is performed by immunohistochemistry using markers to confirm the haematopoietic origin of the cells: CD45; T-cell markers: for example, CD2, CD3, CD4, CD5, CD7 and CD8; cytotoxic markers: for example, TIA1 and Granzyme B; other markers: anaplastic lymphoma kinase (Alk-1), which by definition should be negative, and CD30, which should be positive; and a B-cell panel: for example, CD20, CD79a, PAX5 and EBER to exclude the rare cases of fibrin-associated diffuse large B-cell lymphoma (DLBCL) as these can display aberrant expression of CD30 and other T-cell markers. 49 Other markers such as CD68, CD138, BCL-2, IRF4, Ki67 and pan-keratin may also be useful. It can be difficult to define an aberrant/neoplastic T-cell phenotype as the tumour cells often lack expression of T-cell antigens, and CD30 expression alone is not a defining feature as it is also present on normal activated T-, B- and natural killer cells. 29,47,50 For this reason, screening of effusion fluid for CD30-positive cells by flow cytometry in the absence of morphological abnormalities on cytology may lead to difficulties in interpretation and a false-positive diagnosis. 29 The clonality of T cells should be confirmed by polymerase chain reaction (PCR) for T-cell receptor (TR) gene rearrangements. 51,52 FISH can be used to assess the absence of the ALK translocation and to exclude other translocations seen in a proportion of systemic ALCL but not BIA-ALCL, such as those involving IRF4/DUSP22 and TP63. While not part of the diagnostic algorithm, numerous recurrent mutations have been detected in BIA-ALCL. The most frequently reported are mutations that involve the JAK-STAT pathway genes, such as JAK1 and STAT3 mutations, and epigenetic modifiers, for example DNMT3A. [53][54][55][56][57]

Processing samples for research purposes

Hypotheses proposed regarding the pathogenesis of BIA-ALCL include chronic inflammation driven by a bacterial biofilm, microparticles shed from the implant shell, repetitive trauma/friction between the implant shell and the capsule, carcinogenic toxins leaching from the implant or genetic predisposition. 39,40,53,[58][59][60][61][62][63][64] Details of ongoing active UK research studies can be found here: https://www.cancerresearchuk.org/about-cancer/find-a-clinical-trial/a-study-find-more-about-causes-breast-implant-associated-anaplastic-large-cell-lymphoma-bia-alcl, and clinicians are encouraged to contact the investigators for further details. It is hoped that in the future, a centralised biobank of samples for research purposes can be developed.
Management of an indeterminate breast assessment: reactive effusions
Approximately 90% of chronic delayed seromas are clear or haemo-serous, have no atypical cells and are not associated with malignancy. However, as a paucity of neoplastic cells can make diagnosis difficult (hence the requirement to assess the whole seroma by cytology; see above), clinicians must always exercise caution when faced with a reactive seroma report in case of a false-negative result. If reasonable suspicion persists, consider the following options: referral to a tertiary centre with expertise in the diagnosis and management of BIA-ALCL, MDT discussion, repeat assessment with US and further aspiration for cytology, additional imaging with magnetic resonance imaging (MRI), diagnostic total en-bloc capsulectomy and explantation, or close monitoring. The pros and cons of each approach need careful case-by-case consideration, with shared decision-making, taking into account the degree of concern, differential diagnosis and the morbidity from interventions such as total en-bloc capsulectomy. This includes pneumothorax in up to 4% of cases, a possibility of chronic pain and significant cosmetic sequelae.
Management of confirmed cases
The optimal approach to patient treatment is based on the revised National Comprehensive Cancer Network (NCCN) guidelines and evolving experience from treated cases. 28,32 All confirmed BIA-ALCL patients must be referred to a tertiary centre for their further management. A proposed treatment algorithm is shown in Fig 6. Cases must always be recorded in the BCIR (https://clinicalaudit.hscic.gov.uk) and reported to the MHRA using the yellow-card scheme for a medical device adverse incident (https://yellowcard.mhra.gov.uk/). After central registration of the case, clinicians will be contacted for further information. They must respond promptly and maintain up-to-date contact information for this process. The implant manufacturer has a regulatory duty to report device failures or adverse incidents to the regulatory authority. They should be contacted to collect and analyse the implant after removal and provide any findings in their Vigilance report.
Pre-operative investigations
Discussion of cases of BIA-ALCL must occur in the MDT meeting prior to any intervention. We advocate this due to the crucial requirement for shared management of these patients between haemato-oncology and breast surgery; therefore, early collaboration is likely to improve patient outcomes. All imaging results, including PET/CT and breast MRI, should be reviewed. Routine pre-operative blood tests include: full blood count (FBC), urea and electrolytes (U&E), liver function tests (LFTs), lactate dehydrogenase (LDH) and virology. Bone marrow aspiration and trephine to assess for the presence of marrow disease is recommended prior to surgery in all cases of BIA-ALCL where the disease is aggressive, defined as local-regional or distant lymph node involvement or unexplained cytopenia. 28,34

Explantation with total en-bloc capsulectomy

Surgery is the recommended primary treatment for all patients with BIA-ALCL, except the minority who present with locally advanced or distant metastatic disease, who may benefit from initial systemic therapy. Surgery should be performed by an experienced member of the oncoplastic breast or plastic surgery team, familiar with performing implant-based surgery and with additional expertise in capsulectomy and tumour extirpation.
Early BIA-ALCL is confined to the peri-implant effusion or contained within the capsule (Stages 1 and 2A). Total en-bloc surgical excision plays a pivotal role in reducing stage progression and future recurrence and in improving overall survival (OS). 34 With a total en-bloc capsulectomy the specimen is removed as one complete piece comprising the entire unbreached capsule and any associated mass; the implant and associated effusion are fully retained within the explanted specimen (Fig 7). This technique is distinctly different from the piecemeal or partial capsulectomy approach sometimes used when dealing with capsular contracture. Where inadvertent dissection into a thin capsule results in effusion fluid contaminating the operative field, the cavity should be thoroughly irrigated before wound closure. One case of local recurrence in the UK series was thought to be related to the seeding of effusion fluid from the drain exit site. 17 If there is an axillary mass/enlarged axillary nodes, attempts should always be made to characterise these pre-operatively. However, it should not be assumed that enlarged axillary lymph nodes seen on the staging PET scan are definitive evidence of lymphomatous involvement as these can be reactive or enlarged due to silicone lymphadenopathy. 65,66 While there is no role for sentinel node biopsy, histological confirmation with excision of enlarged nodes at the time of surgery should be sought. 32,67 The capsule of sub-pectoral implants may be densely adherent or inseparable from the rib surface/intercostal muscles, such that pneumothorax is a recognised risk. The operation note must record if the procedure was completed en-bloc, or if any areas of capsule could not be removed for technical reasons. A simultaneous contralateral explantation and total capsulectomy should be strongly considered, as incidental BIA-ALCL may be found in 2-4.6% of cases. 33,34 Data on immediate reconstruction are very limited and, where this is requested, it is preferable to consider it in the delayed setting after a period of observation. 68 A repeat PET/CT or MRI at least six months after surgery should be considered before any decision is made. Options that might be explored include autologous flaps, fat grafting and even reconstruction with implants, although in this latter case, smooth implants should be the predominant option.
Processing the specimen post-explant
The contained peri-implant effusion, which commonly comprises turbid fluid, should be drained from the specimen by making a 2-mm cut into the capsule on the inferior pole and the fluid sent for cytology (Fig 7G, H). The capsule should then be opened as a full inferior capsulotomy that extends from the 9 o'clock to 6 o'clock to 3 o'clock position (clam-shell capsulotomy) (Fig 7C, D, E, I).
All removed implants, intact or ruptured, must be treated as a biohazard, appropriately labelled and stored as such, until collected by the manufacturer to fulfil their Vigilance obligations. If there has been a catastrophic rupture and the filler material is not recoverable, the silicone shell of the implant should still be retained. Patients should be warned that this is a regulatory requirement and that it is not appropriate, therefore, for the patient to take them home. We recommend that photographs are taken of the implant after explantation, at the end of the procedure, whether intact or ruptured. This should include a close-up of the posterior 'patch' to show the manufacturer's name, implant style and lot number, which must be recorded in the operation note. These images will act as both a clinical and medico-legal record and will facilitate identification of the implant.
The capsule should be inspected to identify areas of concern to highlight to the pathologist. The capsule must be formally orientated by placing external sutures, for example, short silk to superior, long silk to lateral, medium silk to medial and double loop silk to anterior (Fig 7C). If a double capsule is present, the inner layer should be peeled off the implant and sent separately, as BIA-ALCL may be present separately in this layer (Fig 7F). Primary analysis of the capsule is morphological and is usually carried out by the breast pathology team, working closely with haematopathology for secondary molecular assessment as described above (Fig 5C, D). As many capsules look macroscopically normal, or are affected by areas of silicone impregnation, it is essential that multiple representative sections are taken to increase the detection of small areas of tumour cells within the capsule or on the luminal surface, as these are challenging to detect. 69,70 Miranda et al. reported that no mass could be identified on macroscopic examination of the capsule in 42 of 56 (75%) cases in their report of 60 patients with BIA-ALCL. 32 In cases that, despite exhaustive microscopic assessment, fail to reveal any abnormal cells, the disease is categorised as effusion-limited (stage 1A). 29
Management of BIA-ALCL found incidentally on capsulectomy specimens
Capsulectomy is commonly performed at the same time as implant exchange or explantation for grade 3-4 capsular contracture, and it is recommended that capsules are removed in one piece wherever possible and submitted for routine histological analysis. Specimens should be orientated to maximise the value of histopathology. In the rare scenario where BIA-ALCL is found incidentally in this manner, 14,17 subsequent patient management should follow the diagnostic and treatment algorithms provided in this document.
Staging
The American Joint Committee on Cancer (AJCC) tumour-node-metastasis (TNM) staging system for solid tumours should be used in preference to the Lugano modification of the Ann Arbor classification traditionally used for lymphoid neoplasms, as proposed by Clemens et al. 34 (Table I). This not only better reflects the in situ or local infiltrative patterns associated with the disease but also allows better prognostic classification. Patients with disease limited to the effusion or confined to the capsule have a good/excellent prognosis, with up to 93% reported to achieve complete remission at a median follow-up of two years. 32 Presentation with a tumour mass that extends beyond the capsule, with lymph node or more distant involvement, reflects aggressive disease and is associated with lower disease-free (DFS) or overall survival (OS). 17

Table I (excerpt). Stage 2B: T1-3N1M0; N1: one local/regional node involved; N2: more than one local/regional node involved. 34

Systemic therapy

Due to the rarity of advanced BIA-ALCL, optimal chemotherapeutic choice is extrapolated from studies investigating systemic ALK-negative ALCL. National guidelines for the treatment of systemic disease are published by the British Committee for Standards in Haematology (BCSH). 71 At present, CHOP (cyclophosphamide, hydroxydaunorubicin (doxorubicin), Oncovin (vincristine) and prednisolone) chemotherapy is most frequently used for the upfront treatment of ALCL based on experience with this regimen from high-grade B-cell lymphoma, despite poorer outcomes in the T-cell lymphoma setting. 72 There is conflicting evidence as to whether the addition of etoposide leads to improved outcomes. [73][74][75][76][77] Recently, ECHELON-2, a large phase III double-blinded randomised trial, demonstrated an improved median progression-free survival (PFS) with BV-CHP [brentuximab vedotin (BV), an anti-CD30 antibody drug conjugate, given in place of vincristine] compared to standard CHOP, with PFS improving to 48.2 from 20.8 months (HR 0.71, 95% CI 0.54-0.93, P = 0.0110) with no significant differences in toxicity. 78 An OS benefit was also seen in favour of BV-CHP (HR 0.66, 95% CI 0.46-0.95). This improvement was most significant in the ALCL subgroup. Following FDA approval of this combination for use in patients with untreated CD30-positive T-cell lymphomas and European Medicines Agency (EMA) approval for untreated systemic ALCL, NICE approved its use in the NHS as per the EMA indication in July 2020.
Single-agent BV is currently licensed and funded in the UK for relapsed/refractory ALCL based on data from a phase 2 study showing a favourable response rate of 86% with a complete response rate of 57%. 79 It is likely to be used less frequently in the refractory setting as its use as part of the upfront combination increases. There is some evidence that retreatment can lead to clinical responses, particularly if a long remission is achieved initially. 80 Currently, in the UK, patients not refractory to BV-CHP are eligible for retreatment with BV at relapse. There are two case reports of patients achieving long-term remission following adjunct treatment with BV as a single agent after surgery for BIA-ALCL with associated lymphadenopathy, although neither case had histological confirmation of nodal spread of disease. 81,82 A further report details a patient with BIA-ALCL who progressed whilst having CHOP chemotherapy but responded to single-agent BV, allowing subsequent bilateral total capsulectomy and implant removal. 17 The prognosis for relapsed or refractory ALK-negative ALCL is poor and, where possible, treatment within a clinical trial should be considered. If a patient has already had BV or is intolerant, standard platinum-based salvage regimens such as GDP (gemcitabine, dexamethasone, cisplatin) are alternatives in the relapse setting. 71 The role of autologous stem cell transplantation as consolidation in first remission of advanced stage (stage 3 or 4) ALCL is controversial, with poor quality and conflicting evidence. [83][84][85] Given the poor outcomes generally seen following relapse of T-cell lymphoma, allogeneic or autologous stem cell transplantation of appropriate patients as consolidation following salvage treatment should be considered on a case-by-case basis for BIA-ALCL.
Indications for radiation therapy
Adjuvant chest wall radiotherapy is not routinely recommended after total capsulectomy for histologically confirmed, completely excised T1 and T2 tumours. It should be considered when complete excision has not been possible, if surgical margins are positive despite total capsulectomy or where there is chest wall invasion. It is unknown what the optimal radiotherapy dose should be, but doses similar to those given to patients with other high-grade lymphomas (24-36 Gy) have been proposed by the NCCN guidelines. 28
Ongoing surveillance
We advocate joint patient follow-up between the surgical and haemato-oncology teams every 3-6 months, for a minimum of two years and then as indicated. While clinical assessment is required, there is a lack of evidence to support routine imaging surveillance of BIA-ALCL whether limited or advanced stage. 86,87 This is analogous to the clinical principles that guide surveillance in other NHL subtypes, that routine imaging does not improve patient outcomes. [88][89][90] Our recommendation takes into consideration evidence from a range of lymphoma subtypes, the American Society of Hematology Choosing Wisely Campaign 91 and surveillance guidance provided by the NICE guidelines on NHLs. 48 Patients who become symptomatic should be re-referred to the tertiary centre if there is a concern for disease recurrence, to enable prompt investigation.
Conclusions
It is essential that we continue to promote widespread education about BIA-ALCL amongst healthcare workers in the UK. Patients who receive implants, whether for cosmetic or reconstructive purposes, must always be advised of the risk of developing BIA-ALCL and how this relates to implant surface type. They should always report symptoms such as delayed-onset breast swelling or a mass to their GP and/or surgeon. There are no recommended changes to the routine medical care of asymptomatic patients with implants. Clinicians faced with anyone who has breast symptoms and a breast implant must consider the possibility of BIA-ALCL and should follow these diagnostic guidelines. The management of BIA-ALCL should be performed in a tertiary centre with multidisciplinary input.
Ongoing international surveillance and research will inform future recommendations for diagnosis and treatment. BIA-ALCL is associated with excellent outcomes for the majority of patients; with early recognition and increased detection there is greater opportunity for successful surgery with curative intent and an improved long-term prognosis. | 2020-11-24T14:07:33.999Z | 2020-11-22T00:00:00.000 | {
"year": 2020,
"sha1": "3df5f208b606278834c6189972c9b88083ca5496",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1111/bjh.17194",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "68f232ba269de23b1da496d38eb904f047430b34",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246319352 | pes2o/s2orc | v3-fos-license | Optogenetic stimulation of dynorphinergic neurons within the dorsal raphe activate kappa opioid receptors in the ventral tegmental area and ablation of dorsal raphe prodynorphin or kappa receptors in dopamine neurons blocks stress potentiation of cocaine reward
Behavioral stress exposure increases the risk of drug-taking in individuals with substance use disorders by mechanisms involving the dynorphins, which are the endogenous neuropeptides for the kappa opioid receptor (KOR). KOR agonists have been shown to encode dysphoria, aversion, and changes in reward valuation, and kappa opioid antagonists are in clinical development for treating substance use disorders. In this study, we confirmed that KORs were expressed in dopaminergic neurons in the ventral tegmental area (VTA) of male C57BL/6J mice. Genetic ablation of KORs from dopamine neurons blocked the potentiating effects of repeated forced swim stress on cocaine conditioned place preference (CPP). KOR activation inhibited dopamine neuron GCaMP6m calcium activity in VTA during swim stress and caused a rebound enhancement during the period after stress exposure. Transient optogenetic inhibition of VTA dopamine neurons with AAV5-DIO-SwiChR was acutely aversive in a real time place preference assay and blunted cocaine CPP when inhibition was administered concurrently with cocaine conditioning. However, when inhibition preceded cocaine conditioning by 30 min, cocaine CPP was enhanced. Retrograde tracing with CAV2-DIO-ZsGreen identified a population of prodynorphin Cre neurons in the dorsal raphe nucleus (DRN) projecting to the VTA. Optogenetic stimulation of dynorphinergic neurons within the DRN by Channelrhodopsin2 activated KOR in VTA, and ablation of prodynorphin blocked stress potentiation of cocaine CPP. Together, these studies demonstrate the presence of a dynorphin/KOR midbrain circuit that projects from the DRN to VTA and is involved in altering the dynamic response of dopamine neuron activity to enhance drug reward learning.
Introduction
People with psychiatric illnesses are vulnerable to stress-induced disruptions in behavior [1]. Stress alters reward seeking behaviors and increases the likelihood of relapsing drug-taking behaviors in individuals with substance use disorder [2]. Alterations in dopaminergic signaling in the brain are a commonly reported feature of substance use disorders [3], and aversive events can both increase and decrease dopamine neuron activity and dopamine release [4][5][6][7][8]. These stress-induced changes in dopamine neuron activity often contribute to relapsing drug-taking behaviors [9,10], suggesting that targeting the specific elements controlling stress reactivity in dopamine neurons would enable better therapeutic interventions for substance use disorders.
Kappa opioid receptor (KOR) antagonists have been in clinical development for decreasing stress-induced relapse of psychostimulants [11][12][13][14]. Activation of KORs in dopamine neurons is known to produce aversive effects in mice [15,16] and ablation of KORs from dopamine neurons increases locomotor sensitization to cocaine [17]. Previous studies have shown that there is a complex temporal relationship between KOR activation and cocaine reward. When KOR activation is paired with cocaine administration, the rewarding properties of cocaine are blunted, leading to decreased cocaine conditioned place preference (CPP) and decreases in dopamine release in the nucleus accumbens [18]. KOR activation can also enhance the extinction of cocaine self-administration [19]. However, when stress or KOR agonist administration occurs 30 to 60 min prior to the administration of a drug reward, it can enhance the reinforcing properties of that drug [20][21][22][23][24][25] and increase stimulated dopamine release [18]. Prior studies have aimed to understand these effects through measuring agonist-mediated changes in electrophysiological activity in the ventral tegmental area [16,[26][27][28][29] and voltammetric and microdialysis measurements of dopamine release in terminal regions [16,18,[30][31][32][33]. However, the effect of stress-induced dynorphin release on cell body activity in vivo has been challenging to measure. KOR is thought to be present in a significant proportion of dopamine neurons in mice [34,35], and our goal was to determine the cellular and circuit mechanisms that underlie stress-induced potentiation of cocaine reward.
Here using immunohistochemistry, we confirm that KOR colocalizes with a majority of VTA dopamine neurons. We found that stress-induced activation of KORs in dopamine neurons increases preference for cocaine reward, and we determined that stress relief, or removal of an aversive stimulus, increases post-inhibitory rebound of dopamine neuron excitability to drive increased reward seeking behaviors through a KOR-dependent mechanism. We then tested whether optogenetic inhibition of dopamine neurons could recapitulate the effects observed with physiological stress and found that the potentiation effect was present when optical inhibition of dopamine neurons preceded cocaine conditioning. Lastly, we identified a dynorphinergic projection from the dorsal raphe nucleus (DRN) to the VTA that is involved in these KOR-mediated effects, further demonstrating how coordination between the DRN and VTA guides reward learning during stress.
Subjects
Adult male C57BL/6 mice ranging from 2-6 months of age were used in these experiments. We have previously reported that estrogen-regulated sex differences in intracellular signaling pathways alter female responses to KOR agonists and antagonists [28,36,37]. In Abraham et al. [36], we found that stress-induced potentiation of cocaine conditioned place preference required a different drug dosing procedure than males, preventing pooling of data across sexes. However, qualitatively both males and females showed KOR-mediated potentiation of cocaine reward. Due to the variability produced by differing responses to KOR activation caused by estrogen, stress, and cocaine between sexes [36], we focused our initial studies of this circuit in male mice. All experimental procedures were approved by the University of Washington Institutional Animal Use and Care Committee and were conducted in accordance with National Institutes of Health (NIH) "Principles of Laboratory Animal Care" (NIH Publication No. 86-23, revised 1985). Mice were group housed (2-5 mice per cage). Food and water were available ad libitum in their home cages. All testing was conducted during the light phase of the 12 h light/dark cycle. Floxed KOR (KOR lox ) mice were generated by Dr. Brigitte Kieffer (Institut Clinique de la Souris), in which exon 1 of KOR was flanked by loxP sites [16]. Floxed prodynorphin mice were generated by Dr. Richard Palmiter (University of Washington), in which exon 3 of the prodynorphin gene was flanked by loxP sites [38,39]. DAT Cre mice were bred with floxed KOR mice to generate DAT floxed KOR mice [15][16][17]40].
Genotyping
Transgenic mice were genotyped using Transnetyx (Cordova, TN, USA) genotyping services. Prodynorphin Cre , DAT Cre and DAT IRES-Cre mice were genotyped by DNA isolated from tail tissue obtained from weanling mice (21-28 days of age), and PCR screening was performed for the presence of Cre recombinase. For KOR lox mice, the following primers were used for PCR screening: Forward Primer: CACTTTTAAACATGGAGTAGGGTGATG; Reverse Primer: GGCCGCATAACTTCGTATAGCATA; Reporter: CCGGTGCTTCTGTGTATC. For pdyn lox mice, the following primers were used for PCR screening: Forward Primer: AGAGTACGTGGATTGTCTACAGAGA; Reverse Primer: GGAAAGGTTGAGAGCTGAGTAATCA; Reporter: CTGGGATCGGATCCTC.
Drugs
Cocaine hydrochloride (15 mg/kg) was provided by the National Institute of Drug Abuse Drug Supply Program (Bethesda, MD) and dissolved in saline to be administered intraperitoneally (IP) in a volume of 10 mL/kg.
Intracardiac perfusions and antigen retrieval
As described in Abraham et al. [41], mice were anesthetized with pentobarbital (Beuthanasia-D) and intracardially perfused with room-temperature phosphate-buffered saline (PBS) and chilled 10% formalin. Thereafter, brains were stored overnight in 10% formalin. For antigen retrieval in KORp-IR experiments, brains were sectioned into 5 mm sections and placed in a small basket in PBS (85-90 °C for 3 min). The brains were agitated every thirty seconds. Immediately after the 3-min incubation, the brains were removed from the warmed PBS and immersed in room-temperature PBS. Brains were then put in 20% sucrose at 4 °C for storage until sectioning.
Immunohistochemistry
Fixed sections of the midbrain containing the VTA were sliced at 40 μm, then washed in PBS before being placed in blocking solution (PBS containing 5% normal goat serum and 0.3% Triton X-100). VTA slices were incubated in a rabbit anti-KT2 (KOR tail) antibody (1:50 dilution; generated as previously described [42,43]) and chicken anti-tyrosine hydroxylase (TH; 1:1000 dilution; AB9702; MilliporeSigma, Burlington, MA, USA), rabbit anti-KORp antibody (1:25 dilution), mouse anti-Cre (1:500; MAB3120; MilliporeSigma, Burlington, MA, USA), or chicken anti-GFP (1:3000; ab13970; Abcam, Cambridge, UK) solution diluted in blocking buffer (detailed protocol in Lemos et al. [44]) for 24 h (KT2, GFP, Cre) or 72 h (KORp) on a shaker in a cold room. KORp peptide used for rabbit immunization was generated as described in [41] by Biomatik (Wilmington, DE, USA). Slices were then washed in PBS and incubated with an AlexaFluor 488 or 555 goat anti-rabbit (A11008; A32732), 488 goat anti-chicken (A11039), or 488 goat anti-mouse (A28175) secondary antibody (1:500 dilution; ThermoFisher Scientific, Waltham, MA, USA) for 2 h, covered, on the shaker. After 2 h, the tissue sections were washed in PBS, mounted on Superfrost Plus slides with Vectashield hardset mounting media, and imaged on a Leica SP8X confocal microscope. To quantify the total number of KT2- and TH-positive cells and changes in KORp-IR (previously described in [41]), we used ImageJ. For KT2 experiments, an investigator blinded to treatment conditions counted all immunoreactive cells in each section in a single plane across the rostrocaudal axis of the lateral and medial ventral tegmental area. For the KORp experiments, an investigator blinded to the treatment conditions randomly sampled 30 cells within the VTA in each slide at a single plane. Cells were circled to generate regions of interest (ROIs) and the average fluorescence intensity (pixel intensity) within each ROI was recorded using ImageJ. KORp was stained with AlexaFluor 488, and incoming fluorescent fibers and other fluorescent artifacts were excluded from analyses to focus on changes in cell body fluorescence. The average background fluorescence of a sample was recorded by circling an area with no visible cells and recording pixel intensity. The background fluorescence was subtracted from the averaged cell fluorescence within each slide to account for differences in background across animals.
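The quantification step in this paragraph reduces to a background subtraction; the following is a minimal Python sketch of that arithmetic (the authors used ImageJ, so this is an illustration rather than their workflow, and the intensity values are hypothetical placeholders).

```python
import numpy as np

def background_subtracted_intensity(roi_means, background_mean):
    """Mean ROI cell-body fluorescence minus the slide's background fluorescence.

    roi_means: mean pixel intensities of the sampled cell ROIs on one slide
    background_mean: mean pixel intensity of a cell-free region on the same slide
    """
    return float(np.mean(roi_means)) - background_mean

# Hypothetical values for one slide (not data from the study)
roi_means = [112.4, 98.7, 105.1, 120.9, 101.3]
background = 42.0
print(background_subtracted_intensity(roi_means, background))  # 65.68
```

Subtracting a per-slide background, as described above, removes slide-to-slide differences in illumination before comparing fluorescence across animals.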
Stress potentiation of conditioned place preference
To test cocaine CPP, a balanced three-chamber apparatus was used [21]. Mice were given a pretest on day 1, and then cocaine (15 mg/kg) was paired with the less-preferred side during conditioning on days 2 and 3. Saline was paired with the alternative chamber side four hours after the cocaine conditioning session. To induce stress, mice were exposed to a modified forced-swim test as previously described [21,22,36,41]. Briefly, the modified Porsolt forced-swim paradigm used a 2-day procedure in which mice swam in 30 °C water in a 5 L opaque beaker for 15 min on the first day, within ten minutes of the pre-conditioning preference test, and for four 6-min swims (separated by 6 min each) 10 min before the first cocaine conditioning session on Day 2. During day 4 (posttest), mice were allowed to freely explore the apparatus. The preference score was determined by subtracting time in the drug-paired compartment during the pretest from time in the drug-paired compartment during the posttest (post − pre).
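The preference score is simple arithmetic; a brief Python illustration of the post − pre calculation follows, with invented chamber times rather than data from the study.

```python
def preference_score(pre_time_s, post_time_s):
    """Time (s) in the drug-paired chamber at posttest minus pretest (post - pre)."""
    return post_time_s - pre_time_s

# Hypothetical mouse: 520 s pre-conditioning, 710 s post-conditioning
print(preference_score(520.0, 710.0))  # 190.0; a positive score indicates acquired preference
```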
Optogenetic stimulation
A small nestlet was placed under the head of the mouse prior to connecting the incoming fiberoptic patchcord to the indwelling fiberoptic cannula. Mice were placed into a novel cage with fresh bedding and allowed to freely explore with the patchcord attached. On laser exposure days, mice received light from a 473 nm source (10 mW incoming laser power; OEM Laser, Midvale, UT) controlled through a waveform generator (Grass Instruments). For optical inhibition, the laser was on for a thirty-minute session during which mice received a 100 ms pulse of laser light every 3 s. For optical stimulation experiments, mice received 5 s of 20 Hz (10 ms pulse) laser on and 5 s laser off, cycled over a 30 min session, based on Al-Hasani et al. (2015) [45] showing optically elicited dynorphin release with 20 Hz stimulation. Mice were perfused within ten minutes of the termination of optical stimulation in the DRN.
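To make the two light-delivery protocols concrete, the sketch below constructs their pulse onset/offset times in Python; the timing parameters come from the text, while the function names and the idea of precomputing pulse times (rather than programming the analog waveform generator) are illustrative assumptions.

```python
import numpy as np

def inhibition_pulse_times(session_s=30 * 60, pulse_ms=100, period_s=3.0):
    """SwiChR inhibition protocol: one 100 ms light pulse every 3 s for 30 min."""
    onsets = np.arange(0.0, session_s, period_s)
    return [(t, t + pulse_ms / 1000.0) for t in onsets]

def stimulation_pulse_times(session_s=30 * 60, freq_hz=20, pulse_ms=10,
                            on_s=5.0, off_s=5.0):
    """ChR2 stimulation protocol: 20 Hz (10 ms pulses), cycled 5 s on / 5 s off."""
    pulses = []
    t = 0.0
    while t < session_s:
        for k in range(int(on_s * freq_hz)):       # 100 pulses per 5 s "on" epoch
            start = t + k / freq_hz
            pulses.append((start, start + pulse_ms / 1000.0))
        t += on_s + off_s                          # advance one on/off cycle
    return pulses

print(len(inhibition_pulse_times()))   # 600 pulses over 30 min
print(len(stimulation_pulse_times()))  # 18000 pulses over 30 min
```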
Fiber photometry
We used a real-time signal processor (RZ5P; Tucker-Davis Technologies) connected to Synapse software (Fiber Photometry) to set the frequency of light stimulation and record input from photodetectors. The RZ5P was connected to a light-emitting diode (LED) driver (Doric Lenses) that controlled the power of a 465 nm and a 560 nm Doric LED. The LED was attached with a low-autofluorescence patchcord (400/430) to a Fluorescent MiniCube (Doric Lenses) with dichroic mirrors. Optical patchcords connected the MiniCube with a pigtailed rotary joint (FRJ; Doric Lenses) that allowed free animal movement, and Newport (Irvine, CA) visible 2151 Femtowatt Photodetectors connected to the RZ5P for data collection. Prior to photometry sessions, patchcords were bleached with light for at least 4 h to minimize autofluorescence. Power of the LED at the fiber tip was set to 30 μW and was tested prior to the start of each session. Signals were collected at a sampling frequency of 1017 Hz. Each session was downsampled by a factor of 100 and normalized to a five-minute baseline period at the beginning of the recording. The sessions were then smoothed using a moving average filter (100 s window) to remove high-frequency noise and detrended to remove linear drift. The control channel (560 nm) was fitted to the signal (465 nm) channel using a least-squares method and subtracted to remove motion artifacts. Each recording session started with a 5 min baseline recording period prior to behavioral manipulations to calculate fluorescent change from baseline (ΔF/F; change in fluorescence/baseline fluorescence), and each trial period was set to start at zero. Calcium events were defined as having a peak width greater than 1 s and a peak height greater than 2.9× the standard deviation above the mean, based on Calipari et al. [46].
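The described processing chain follows a common fiber-photometry recipe; the sketch below re-expresses it in Python as an approximation of the authors' custom MATLAB pipeline, using only the parameters stated above (downsampling factor, 5-min baseline, 100 s smoothing window, linear detrend, least-squares motion correction, and the 1 s / 2.9 SD event criterion).

```python
import numpy as np

def process_photometry(sig465, ctrl560, fs=1017.0, down=100,
                       baseline_s=300.0, smooth_s=100.0):
    """Approximate ΔF/F pipeline: downsample x100, normalize to the first
    5 min baseline, smooth with a 100 s moving average, remove linear drift,
    then least-squares fit the 560 nm control channel to the 465 nm signal
    and subtract it to remove motion artifacts. Assumes equal-length inputs."""
    fs_ds = fs / down
    n0 = int(baseline_s * fs_ds)                  # samples in the 5 min baseline

    def prep(x):
        x = np.asarray(x, dtype=float)[::down]    # downsample by 100
        f0 = x[:n0].mean()
        x = (x - f0) / f0                         # ΔF/F against the baseline
        win = max(1, int(smooth_s * fs_ds))
        x = np.convolve(x, np.ones(win) / win, mode="same")  # 100 s smoothing
        t = np.arange(x.size)
        m, b = np.polyfit(t, x, 1)                # linear detrend
        return x - (m * t + b)

    sig, ctrl = prep(sig465), prep(ctrl560)
    a, b = np.polyfit(ctrl, sig, 1)               # fit control to signal
    return sig - (a * ctrl + b), fs_ds            # motion-corrected ΔF/F

def count_events(dff, fs_ds, min_width_s=1.0, thresh_sd=2.9):
    """Count calcium events: runs above mean + 2.9 SD lasting longer than 1 s."""
    thr = dff.mean() + thresh_sd * dff.std()
    events, run = 0, 0
    for above in dff > thr:
        run = run + 1 if above else 0
        if run == int(min_width_s * fs_ds):       # count each qualifying run once
            events += 1
    return events
```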
Statistical analysis
All data are presented as mean ± s.e.m. Individual data points are shown when possible. Mice were removed from analyses if no viral expression was found postmortem. We used two-tailed t-tests, one-way and two-way ANOVA (incorporating repeated measures where appropriate), and performed post-hoc tests as specified in text. Photometry data were analyzed through custom-built MATLAB software (MathWorks Inc.; Natick, Massachusetts, USA). Behavioral data were analyzed using Ethovision XT 11.5 (Noldus; VA, USA) and statistical analyses were performed with Prism 9.0 (GraphPad Software; CA, USA).
Results
We first assessed the distribution of KORs in the VTA using antibodies targeting either tyrosine hydroxylase (TH), an enzyme involved in the synthesis of dopamine, or an epitope within the C-terminal tail of the KOR ("KT2") [42,43]. We determined the number of dopamine neurons (TH+) that expressed KOR (Fig. 1A,B), as well as the number of neurons showing only KOR or only TH immunoreactivity. In a total of n = 6764 cells surveyed in n = 3 C57BL/6J male mice, we found that a majority (6101) showed immunoreactivity for both KOR and TH. This confirmed previous observations suggesting that KOR is expressed in a majority of VTA dopamine neurons [27,34,47].
Our prior studies [21,25,36] have demonstrated that repeated forced swim stress prior to drug exposure enhances the expression of conditioned drug preference. In addition to serotonergic contributions to this effect [23,25,48] and based on VTA KOR-dependent aversion reported in Ehrich et al. [16], we hypothesized that dopaminergic signaling was also likely necessary for potentiation of cocaine reward. We tested the necessity of KOR activation in dopamine neurons for stress potentiation of cocaine reward using a conditioned place preference assay and compared control (DAT Cre ) to mice with a conditional deletion of KORs in dopamine neurons (Fig. 1C, D). There was a significant interaction between repeated forced swim stress (rFSS) exposure and genotype (F(1,41) = 4.104, p = 0.0493). In control (DAT Cre ) mice, there was a significant increase in preference score in mice receiving swim stress (n = 13; p = 0.008) compared to those that did not receive swim stress (n = 12). However, when KORs were deleted from dopamine neurons (by crossing DAT Cre with floxed KOR mice; DFK), rFSS (n = 11) did not significantly increase expression of cocaine preference compared to no rFSS mice (n = 9). This demonstrated that KOR activation in dopamine neurons is required for stress potentiation of reward. We were then interested in characterizing the calcium dynamics of dopamine neurons during and after stress that may contribute to these changes in behavior.
We measured the effect of rFSS and cocaine CPP on calcium activity in dopamine neurons in mice with KOR conditionally deleted from dopamine neurons (DAT Cre floxed KOR; DFK, n = 5) and intact control mice (DAT Cre, n = 5 no rFSS, 6 with rFSS; Fig. 2A). AAV1-DIO-GCaMP6m was injected into the VTA of all mice and a fiber was implanted above the injection site to record bulk calcium activity in DAT Cre neurons (Fig. 2B; Supplement 1A). There were no significant differences in the magnitude of calcium transients during the pre-test (Fig. 2C) or in ΔF/F during the first 15-min swim stress exposure between groups (Fig. 2D). However, KOR-mediated differences in neuronal calcium activity were observed during Day 2 of rFSS exposure (Fig. 2E). During the Day 2 swim stress periods, mice with intact KOR showed significant differences in calcium activity during swim stress periods 1, 3, and 4 compared to DFK mice. We quantified the overall change in fluorescence from the first minute of the session to the last minute of each test session (Fig. 2F) and found that during the fourth swim and post-swim period, there was a significant difference between activity during stress compared to activity in the post-swim period (Genotype x Time interaction: F(7,63) = 3.19, p = 0.006; Sidak's post hoc: p = 0.0036). This was also reflected by the total number of calcium events (Fig. 2G), which were significantly increased (F(7,63) = 2.88, p = 0.011) compared to the first swim session during relief periods 2 (Sidak's post hoc: p = 0.047), 3 (p < 0.0001), and 4 (p = 0.004) in control mice. We confirmed that there was a significant main effect of conditioning (Fig. 2H) on the number of calcium events in the drug-paired chamber (F(1,13) = 11.1, p = 0.006; Sidak's post hoc: DAT rFSS p = 0.029). Our calcium measurements suggested that rFSS-induced KOR activation in dopamine neurons produced a period of inhibition followed by increased calcium activity in dopamine neurons. This post-inhibitory rebound in dopamine neurons may enhance associative learning and promote reward seeking behaviors.
We hypothesized that inhibition of dopamine neurons would be sufficient to mimic the effects of stress on cocaine reward preference (Fig. 3A). We tested this using a modified blue- and red-light-responsive chloride channel opsin (step-waveform inhibitory ChannelRhodopsin2; SwiChR [49]) to directly inhibit dopamine neurons (Fig. 3B). DAT Cre mice were injected with AAV5-DIO-eYFP or AAV5-DIO-SwiChR in the VTA (Fig. 3C). We first confirmed that SwiChR inhibition of dopamine neurons could produce aversion, as observed with other inhibitory opsins [7,50,51]. Mice (n = 4 eYFP; n = 4 SwiChR) received dopamine neuron inhibition (100 ms pulse every 3 s) when crossing into their preferred chamber, and inhibition was terminated with a 100 ms pulse of red light when mice crossed over to the other chamber side (Fig. 3D; Supplement 1D). SwiChR inhibition produced significant real-time aversion to the light-paired chamber compared to control mice injected with AAV5-DIO-eYFP (Group x Time interaction: F(2,12) = 4.07, p = 0.045; Sidak's post hoc: Laser Day 1 p = 0.039, Laser Day 2 p = 0.005). We then tested how dopamine neuron inhibition would alter cocaine CPP. Mice (n = 6 eYFP; n = 6 SwiChR) received laser stimulation (100 ms pulse every 3 s) during cocaine conditioning sessions (Fig. 3E). Mice that were treated with eYFP and cocaine showed a significantly higher preference than mice treated with SwiChR and cocaine concurrently (t10 = 2.684, p = 0.023). Together with our real-time preference data, this demonstrated that SwiChR inhibition of dopamine neurons is aversive and blocks cocaine CPP.
The temporal relationship between aversive and rewarding events is critical for stress potentiation of reward [18,20,21,22]. We tested whether pre-treatment with SwiChR inhibition could recapitulate the effects of stress on cocaine conditioned place preference potentiation (Fig. 3F). We found that mice (n = 7) that received a 30-min session of SwiChR inhibition 30-min prior to cocaine conditioning showed a significant increase in cocaine CPP compared to eYFP treated mice (n = 8; t 13 = 2.25, p = 0.042). This demonstrated that prior inhibition of dopamine neurons was sufficient to enhance cocaine reward learning and suggests that the inhibitory actions of KOR activation could initiate signaling mechanisms that result in stress priming of dopamine neuron activity.
To determine the neural circuits involved in controlling stress-mediated enhancements in drug reward, we identified sources of dynorphin into the VTA. Prodynorphin-Cre (Pdyn Cre ) mice (n = 3) were injected in the VTA with a retrograde canine adenovirus (CAV2) which produced Cre-dependent expression of a green fluorescent protein (CAV2-DIO-ZsGreen) [39] to label dynorphin inputs to the VTA (Fig. 4A). Following six weeks of viral expression, we observed significant labeling of prodynorphin Cre expressing neurons largely localized in the medial DRN (Fig. 4B) that project into the VTA, as previously reported in Fellinger et al. [39]. We then tested the contribution of dynorphin release from this neuron population to stress-mediated alterations in reward behaviors.
We first confirmed that DRN neurons release dynorphin into the VTA by optogenetically stimulating DRN dynorphin neurons and measuring KOR activation in the VTA (Fig. 4C) with a phospho-selective antibody (KORp; Fig. 4D). We also measured the effect of endogenous dynorphin release evoked by rFSS on KORp immunoreactivity in the VTA. There was a significant effect of treatment (F(3,15) = 25.8, p < 0.0001). Phosphorylation of KOR in the VTA was significantly increased (Fig. 4E) after DRN optogenetic stimulation (473 nm; 10 mW; 20 Hz; 5 s on/5 s off) with Channelrhodopsin2 (ChR2; n = 4; p = 0.0001) compared to an eYFP control group (n = 4). Swim stress-elicited dynorphin release also increased KOR phosphorylation (n = 6 rFSS; n = 5 no rFSS) in the VTA compared to a no-rFSS group (Sidak's post hoc p < 0.0001). Together, these experiments demonstrate that dynorphin-containing neurons in the DRN are functionally connected to VTA neurons to activate KORs.
Discussion
Our results show that dynorphin neurons in the dorsal raphe nucleus project into the ventral tegmental area and activate KORs in the VTA, which are required for stress-mediated enhancements in cocaine reward learning. Further, we observed that inhibition of dopamine neurons by stress produces post-inhibitory rebound periods, which likely promote associative learning. These effects contribute to motivating behavior to escape stressful situations and to seek rewarding stimuli to mitigate the aversive effects of stress.
It has been previously demonstrated that stress can alter the functioning of both serotonergic [25] and dopaminergic neurons [16] through KOR activation, and our data indicate that dynorphin neurons in the DRN may be a critical node for coordinating the effect of stress between serotonergic and dopaminergic neurons. Dynorphin released from neurons in the DRN is likely to have effects on local KORs within the DRN that have been characterized on SERT neurons [52], and our data extend this observation by showing that DRN dynorphin neurons can also project to the VTA and are functionally connected to the VTA as observed with KORp measurements. This projection is also known to produce effects on fear generalization [39], indicating that there may be aversive and reward learning elements that are altered by activity in this system. The contribution of other dynorphin inputs to the VTA to stress-induced changes in learning is not well characterized. Although striatal dynorphin neurons have some projections to the VTA, stimulation of these neurons did not produce KOR-mediated changes in the VTA [53]. Similarly, although dynorphin neurons in the bed nucleus stria terminalis project to the VTA, they did not affect discrimination learning [39]. Lateral hypothalamic sources of dynorphin input to the VTA have been shown to alter cocaine reward, and the co-release of orexin and dynorphin from these neurons likely contributes to these effects [54][55][56]. Interactions between the DRN and VTA have been characterized for glutamatergic and serotonergic populations [57,58], but our data suggest a subset of these populations that may be distinct from non-dynorphinergic DRN inputs to the VTA. Single cell transcriptomic data from the DRN indicate that dynorphin is primarily expressed in serotonergic neurons [59,60], but the interactions between serotonin and dynorphin release in the VTA remain unknown. Glutamatergic neurons (VGLUT3) also project to the VTA from the DRN [57], but SwiChR inhibition of these neurons blunted cocaine conditioned place preference [25], suggesting that transient inhibition of populations involved in positive valence is not sufficient to generate potentiation-like effects. Instead, our data suggest multiple points of integration between DRN and VTA that are reflected by changes in downstream structures, including the striatum [16,25] and cortex [33,41]. Understanding the interactions between serotonergic and dopaminergic systems during stress and following stressful periods may help to better delineate the actions of KOR on reward learning behaviors.
We aimed to measure the KOR-mediated effects of stress on dopamine neuron activity using bulk calcium recordings. We observed both decreases and increases in calcium activity following repeated stress that were dependent on intact KOR in dopamine neurons. These slow changes in activity may reflect altered tonic activity, although we were not able to directly test this hypothesis. Dopamine neuron activity can increase in response to aversive events [4,6] and after the removal of an aversive stimulus [61,62]. Inhibition of dopamine neurons can lead to rebounding actions in a subset of dopamine neurons [63]. This pattern of responding may be important for shaping behaviors during and after stress, such as enabling both pauses in behavior during stress and motivating escape or avoidance behaviors after the termination of a stress. Circuit mechanisms of this type may also be important for encoding the emotional context in which learning occurs and enable better learning of cues that predict stress. Repeated stress may lead to a decreased ability to recover from inhibition and eventually lead to anhedonia-like adaptations observed in dopamine neurons following KOR activation [64]. It is unknown whether a system that is not as tonically active as dopamine neurons would be expected to show similar effects following KOR activation, but our initial evidence in glutamatergic and serotonergic neurons [25] suggests that the effect is likely specific to a few systems. We have previously reported that optical inhibition of serotonin neurons with SwiChR can potentiate cocaine reward. It is currently unknown whether SwiChR or other inhibitory chloride channel opsins potentiate activity of dopamine or serotonin neurons following inhibition. SwiChR inhibition has not been reported to produce rebounding activity after termination of light delivery in the cortex [65], although this effect may depend on the particular population measured. Direct effects of SwiChR inhibition on behavior also decay within minutes of light delivery [66]. Although we did not directly measure electrophysiological changes thirty minutes after inhibition, prior research suggests that dopamine neurons could enter a primed state after inhibition or stress [67]. Our results suggest important considerations for the time course of inhibitory opsin activity when measuring behaviors.
Interactions between KOR and dopamine neuron activity are complex due to the distribution of KOR in the VTA and the inputs and outputs of the VTA. We observed extensive colocalization of KOR with tyrosine hydroxylase; however, different approaches suggest less widespread distribution [68] or different patterns of KOR expression [69,70]. Although the detected KOR presence in dopamine neurons may be lower than observed with immunohistochemistry because of assay differences, we find that KOR activation within the VTA has a potent influence on dopamine neuron activity and behavior. In addition to dopamine neurons, KOR can modulate the activity of GABAergic neurons [71] and glutamatergic inputs to the VTA [72]. Dynorphin released in the VTA could have extensive effects on the function of these independent populations and may also produce self-regulatory or autocrine effects on dynorphin release. For example, KOR and dynorphin expression may overlap within the DRN to regulate dynorphin neuron projection activity to the VTA. Dynorphin release within the striatum also regulates serotonergic [23] and dopaminergic terminals [16], and distinct cellular mechanisms are likely involved in potentiating or suppressing release of neurotransmitters at these sites after KOR activation. This suggests a complex system of dynorphin/KOR interactions acting to alter dopamine and serotonin signaling throughout the brain that could contribute to altering reward behaviors.
Our findings suggest that KOR antagonists could promote stress resilience by decreasing the dynamic range of dopamine neuron activity during stress. KOR activation in both serotonergic and dopaminergic systems is known to contribute to aversion and stress-mediated changes in cocaine reward, suggesting that dynorphin neurons in the DRN that project to the VTA may be a critical population that coordinates activity between these systems during stress.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

Fig. 1 (legend, continued). C. Procedure for stress potentiation of cocaine CPP. On Day 1, control (DAT Cre) mice or conditional knockout mice (DAT Cre crossed with floxed KOR mice; DFK) received a 30-min test for chamber preference (pre-conditioning preference). Within ten minutes of the end of the initial preference test, mice received a 15-min forced swim stress exposure (Day 1 of rFSS) or brief handling (no rFSS). The following day, mice received four six-min swim stress exposures with a six-min interval between each stress period (Day 2 rFSS). Within ten min following the last stress exposure, mice underwent a 30-min cocaine conditioning session. Four h later, mice received a 30-min saline conditioning session in the alternative chamber. On Day 3, mice received 30-min cocaine and saline conditioning sessions separated by 4 h. On Day 4, mice were again tested for chamber preference. D. Preference score is presented as the time in the cocaine-paired chamber following conditioning minus the time in the cocaine-paired chamber prior to conditioning. Stress potentiates cocaine CPP in control mice, but not in mice with genetic ablation of KORs from DAT Cre neurons. Error bars indicate S.E.M. **p < 0.01 (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.).

Fig. 2 (legend, continued). C. Average calcium transients (ΔF/F) during the pre-conditioning preference test, demonstrating that there was no significant baseline difference in the shape or magnitude of calcium transients between groups. D. There was no significant difference between DAT and DFK mice during the 15-min swim period on the first day of conditioning. E. Calcium activity during the Day 2 swim stress exposures.

Fig. 3 (legend). A. A conceptual model for stress potentiation of cocaine reward is shown. We observed that stress inhibited dopamine neurons, but this inhibition did not persist. When stress was removed, dopamine neuron calcium activity rebounded above baseline. This effect was not present in mice with KOR conditionally deleted from dopamine neurons (DFK). When drug reward was administered during the period after stress, mice developed a greater preference for the drug-paired chamber. B. Schematic for SwiChR inhibition experiments. Mice received a bilateral cannula targeted towards the VTA after injection with AAV-DIO-SwiChR. C. Expression of SwiChR in DAT neurons is shown in green. Scale bar shows 100 μm. D. Real-time place avoidance. Schematic for the experiment is shown in the upper panel.
Mice that received SwiChR inhibition significantly decreased time in the light-paired chamber (during Laser 1 and Laser 2 sessions) compared to mice injected with eYFP. E. Concurrent inhibition. Upper panel shows the schematic. Mice received cocaine conditioning paired with SwiChR inhibition. Lower panel shows that SwiChR inhibition of dopamine neurons during conditioning blunted cocaine CPP. F. Prior inhibition. Upper panel shows the schematic. Mice receiving SwiChR inhibition for 30 min, 30 min prior to cocaine conditioning, showed a significantly higher preference for the cocaine-paired floor. Error bars indicate S.E.M. *p < 0.05 (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.).

Fig. 4 (legend, continued). E. Quantification showed that phosphorylation of KOR in the VTA was significantly increased after DRN optogenetic stimulation with ChR2 compared to an eYFP control group. F. Schematic for dynorphin lox/lox injection. G. Schematic for the CPP potentiation procedure. H. There was a significant increase in preference for cocaine in control mice, but deletion of prodynorphin from the DRN blocked stress potentiation of cocaine CPP. Error bars indicate S.E.M. *p < 0.05, ***p < 0.001, ****p < 0.0001 (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.). | 2022-01-28T16:47:26.663Z | 2022-01-25T00:00:00.000 | {
"year": 2022,
"sha1": "c9301006b0d8b914ba2b21fb70ab17e10104bf1b",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.addicn.2022.100005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "382ec33132afd154ffb96a1087add7839a6fb46c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
221521042 | pes2o/s2orc | v3-fos-license | Identifying Neuropeptide and G Protein-Coupled Receptors of Juvenile Oriental River Prawn (Macrobrachium nipponense) in Response to Salinity Acclimation
Neuropeptides and their G protein-coupled receptors (GPCRs) from the central nervous system regulate the physiological responses of crustaceans. However, in crustaceans, our knowledge regarding GPCR expression patterns and phylogeny is limited. Thus, the present study aimed to analyze the eyestalk transcriptome of the oriental river prawn Macrobrachium nipponense in response to salinity acclimation. We obtained 162,250 unigenes after de novo assembly, and 1,392 and 1,409 differentially expressed genes were identified in the eyestalk of prawns in response to low and high salinity, respectively. We used combinatorial bioinformatic analyses to identify M. nipponense genes encoding GPCRs and neuropeptides. The mRNA levels of seven neuropeptides and one GPCR were validated in prawns in response to salinity acclimation using quantitative real-time reverse transcription polymerase chain reaction. A total of 148 GPCR-encoding transcripts belonging to three classes were identified, including 77 encoding GPCR-A proteins, 52 encoding GPCR-B proteins, and 19 encoding other GPCRs. The results increase our understanding of molecular basis of neural signaling in M. nipponense, which will promote further research into salinity acclimation of this crustacean.
INTRODUCTION
Crustacean culture provides high-quality food as well as huge economic benefits to farmers and the economy. Among crustaceans, Macrobrachium nipponense is an economically important species in aquaculture, with an annual production in excess of 250,000 tons and an output reaching 2 billion RMB in China (1). In the aquaculture industry, culturing seawater species under desalination and acclimating freshwater crustacean species to saltwater are new trends (2). In the past two decades, large numbers of species of the genus Macrobrachium have invaded freshwater habitats from the ancestral marine environment, and have exhibited high adaptability to slightly brackish and freshwater habitats (3)(4)(5). However, to date, few studies have investigated the mechanisms that regulate salinity adaptation in M. nipponense.
Salinity is an important environmental factor in estuarine and coastal systems, which affects the physiology of crustaceans and determines species distributions (6). There is growing interest in improving prawn performance in aquaculture at low salinity. Previous studies have confirmed that a number of key neuropeptides participate in the salinity stress responses of crustaceans (7,8). Neuropeptides mostly bind to G protein-coupled receptors (GPCRs) on the cell surface (9). GPCRs, as seven-pass integral membrane proteins, play key roles as transducers of extracellular signals across the lipid bilayer (10,11), and act as salinity sensors in aquatic animals (12). Thus, the identification of neuropeptides and GPCRs represents an essential step toward unraveling the roles of these molecules in the response to salinity acclimation.
Rapid developments in RNA sequencing make it possible to use bioinformatics approaches to identify neuropeptides and their cognate GPCRs. Although neuropeptide sequences have been identified using in silico transcriptome analysis in many crustaceans (13)(14)(15)(16), no information to date has been available on neuropeptides and GPCRs from eyestalk tissues of female M. nipponense during salinity acclimation; in particular, knowledge of GPCRs in crustaceans is limited. In the present study, we aimed to perform gene expression profile analysis (control vs. low salinity group and control vs. high salinity group) to identify neuropeptides and GPCRs from eyestalk tissues of prawns in response to salinity stress. We also aimed to validate target transcripts encoding neuropeptides and their cognate receptors that might have important functions in M. nipponense salinity adaptation. The results will provide insights into salinity-mediated regulation of neuropeptide/GPCR signaling pathways in M. nipponense.
Experimental Animals and Salinity Treatment
Juvenile M. nipponense specimens were obtained from a farm in Shanghai (Qingpu) and acclimated to laboratory conditions for 14 days in fresh water (temperature 22 ± 1 °C, pH 7.7 ± 0.6, dissolved oxygen content 6.5 ± 0.5 mg/L). Thereafter, 360 healthy prawns (1.82 ± 0.46 g wet weight) were randomly and equally divided into 12 tanks (30 per tank), and the tanks were randomly assigned to three groups (three tanks per group). The salinity was gradually adjusted on the same day to reach the target salinity for each group: S0 = 0.4 (control group), S6 = 8 ± 0.2 (low salinity), S12 = 16 ± 0.2 (high salinity). Salinity and water quality were maintained as previously described (2), and the prawns were provided with commercial feed (Zhejiang Tongwei Feed Group Co., Ltd) twice daily for 1 week at a ratio of 6-8% of their body weight.
Identification of Neuropeptides and Their Putative Cognate GPCRs
Total RNA extraction from nine prawns in each group, RNA-Seq library preparation, and sequencing were carried out using Illumina HiSeq™ 2500 paired-end sequencing technology, as previously described (17). Trinity was used to assemble the transcriptome data from eyestalk tissues and generate unigenes. All unigenes were annotated against the NCBI databases with a cut-off E value of 1.0 × 10−5. Further, the BLAST2GO program was used for GO analysis (http://www.geneontology.org/), and Clusters of Orthologous Groups (COG) classification and signal pathway annotation of unigenes were performed by conducting BLASTx searches. EdgeR, which applies a negative binomial distribution with pairwise Fisher-type exact tests, was used to identify differentially expressed genes (DEGs) between the control and salinity treatment groups. Subsequently, GO and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway classification of DEGs was carried out as previously described (17). The transcriptomic data (NCBI Sequence Read Archive: SRP251206) derived from eyestalk tissue were used to identify neuropeptides and receptors. To search for M. nipponense neuropeptides, the annotated sequences and the open reading frame (ORF) file were searched for keywords related to known neuropeptides and for conserved amino acid sequences, respectively (18,19). Finally, the identified sequences were combined with a list of previously obtained and characterized neuropeptides (Supplementary Material 1).
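As an illustration of what such an in silico search can look like, the Python fragment below scans BLAST annotation strings for neuropeptide keywords and ORF translations for conserved motifs; the keyword list and motif patterns shown are illustrative placeholders, not the study's actual lists (which are provided in its Supplementary Material 1).

```python
import re

# Illustrative placeholder lists, not the study's actual search terms
KEYWORDS = ["crustacean hyperglycemic hormone", "ion transport peptide",
            "pigment dispersing hormone", "neuroparsin"]
MOTIFS = {"CHH/ITP-like cysteine spacing": r"C.{3}C.{4,12}C",
          "C-terminal amidation signal": r"G[KR][KR]"}

def scan_annotations(annotations):
    """Flag unigenes whose BLAST annotation contains a neuropeptide keyword."""
    return {uid: desc for uid, desc in annotations.items()
            if any(k in desc.lower() for k in KEYWORDS)}

def scan_orfs(orfs):
    """Flag ORF translations containing a conserved neuropeptide motif."""
    hits = {}
    for uid, aa_seq in orfs.items():
        found = [name for name, pat in MOTIFS.items() if re.search(pat, aa_seq)]
        if found:
            hits[uid] = found
    return hits

# Hypothetical example input
print(scan_annotations({"unigene_001": "crustacean hyperglycemic hormone-like"}))
```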
The Pfam-v27 module in CLC Genomics Workbench v9.5 (Qiagen, Hilden, Germany) was used to predict the structural domains in the GPCRs (intra/extracellular loops and the seven transmembrane domains, 7-TM). Bioinformatic analysis was also carried out on previously reported neuropeptide GPCRs from decapods (20,21). Local BLAST was used to compare the GPCR sequences, followed by clustering analysis using BioLayout Express 3D (22) at an e-value cutoff of 1e-20. All GPCR sequences (those from our data and previously characterized receptors) were then combined into one list (Supplementary Material 2). The GPCRs were then aligned using the CLUSTALW algorithm, imported into MEGA 7.0, and subjected to phylogenetic analysis (23,24).
Quantitative Real-Time Reverse Transcription PCR
The identification and enrichment analysis of differentially expressed genes (DEGs) were performed according to our previously published methods (17). The cDNAs from salinity treatments of M. nipponense were synthesized from total DNA-free RNA (1 µg) using a PrimeScript RT reagent kit (TaKaRa, Japan) following the manufacturer's instructions. The Bio-Rad iCycler iQ5 Real-Time System (Bio-Rad Inc., Berkeley, CA, USA) was used for qRT-PCR validation of DEG expression, with the Actb gene as the internal control (25). The amplification efficiency and threshold were automatically generated from standard curves. The primer sequences are shown in Supplementary Material 3. The comparative CT (2−ΔΔCT) method (26) was used to calculate the relative transcript abundance.
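For reference, the comparative CT calculation used here reduces to the standard Livak formula with Actb as the internal control; the sketch below shows it in Python with invented CT values.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Comparative CT (2^-ΔΔCT) relative abundance.

    ΔCT = CT(target) - CT(reference, here Actb);
    ΔΔCT = ΔCT(treated) - ΔCT(control).
    """
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical CT values: a target gene vs Actb, salinity-treated vs control
print(relative_expression(24.1, 18.0, 26.3, 18.1))  # ≈ 4.3-fold up-regulation
```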
Overview of the Transcriptomes
We generated nine eyestalk transcriptomes from prawns under three experimental conditions of salinity acclimation: freshwater, low salinity, and high salinity. Analysis using the BUSCO pipeline indicated that >92% of the arthropoda orthologs were present in the assembled transcriptome [Complete BUSCOs (C): 92.6%]. After removing adaptor sequences, ambiguous "N" nucleotides, and low-quality sequences, a total of 366,728,422 clean reads representing 54,659,786,418 clean nucleotides (nt) were obtained (Supplementary Material 4). A total of 162,250 unigenes were obtained for the eyestalk transcriptome. In the GO analysis, 10,002 unigenes were assigned to 58 functional subgroups. Based on COG analysis, 8,755 of the unigenes were allocated to 25 COGs (Figures 1A,B).
DEGs Identification and Functional Analysis
We identified 1,392 and 1,409 genes that were differentially expressed under low salinity and high salinity, respectively (Figures 2A,B). A heat map compared the RNA-seq FPKM expression levels of the identified putative neuropeptide precursors, such as CCAP, crustacean hyperglycemic hormone (CHH), and ion transport peptide (ITP), between freshwater culture and salinity acclimation (Figure 2C). The biological functions of the DEGs were determined using GO functional annotation (Figures 2D,E); terms such as "G-protein coupled receptor signaling pathway" (GO:0007186) and "response to external stimulus" (GO:0009605) were significantly over-represented (p < 0.05, FDR < 0.01). In addition, KEGG pathway enrichment analysis identified the 15 most significant pathways (Q < 0.05) associated with salinity acclimation (Figures 2F,G); both comparisons included the metabolism pathways "Glycolysis/Gluconeogenesis," "Citrate cycle," and "Fatty acid metabolism."
Bioinformatic Identification of Putative GPCRs
Clustering and phylogenetic analyses identified 223 putative GPCR genes based on the nine de novo transcriptome datasets. Phylogenetic analysis showed that 34 of the GPCRs could be classified as GPCR-A proteins (Figure 3A), which included receptors for red pigment concentrating hormone (RPCH), adipokinetic hormone-related neuropeptide/corazonin-related peptide (ACP), and CCAP. Forty-four of the putative GPCRs were classified as GPCR-B proteins (Figure 3B). Three putative GPCR families within the GPCR-B classification were identified with high confidence using comparative phylogenetics, including the lipoprotein receptor, methuselah receptor, and pigment dispersing hormone (PDH) receptor. The third group comprised the remaining uncharacterized GPCR families (Figure 3C), for example the metabotropic GABA-B receptor and smog receptor.
Verification Neuropeptide Expression
Eight predicted significant DEGs encoding neuropeptides and receptors were identified, including those encoding isoforms of CCAP, CHH, short neuropeptide F (sNPF), PDH, gonad-inhibiting hormone (GIH), and neuroparsins (NP), as well as a CCAP receptor (GPCR-A56). The expression trends of the eight DEGs identified in the eyestalk of prawns in response to salinity acclimation from the RNA-seq data were verified using qRT-PCR (Figures 4A,B). The expression levels of the eight DEGs were significantly higher in the low salinity group than in the control group. By contrast, two DEGs (encoding GIH and CHH) showed the opposite trend in the high salinity group compared with the control group. Additionally, DEGs encoding CCAP, GPCR-A56, sNPF, NP I, NP II, and PDH were significantly upregulated in the high salinity group.
DISCUSSION
The assembled transcriptome contained sequences representing 52 different neuropeptide precursors, most of which are present in other crustacean species. Importantly, our study is the first to indicate that certain neuropeptides in prawns play an important role in the response to salinity acclimation. Interestingly, some neuropeptide transcripts that were detected previously in other decapod crustacean species were not identified in this M. nipponense transcriptome, such as crustacean female sex hormone (CFSH) (27,28). Notably, our previous M. nipponense de novo transcriptome assembly did include these neuropeptides, which partially disagrees with the results of the present study. A reasonable explanation is that the differences in identified neuropeptides are closely related to crustacean habitat (freshwater vs. estuary) and developmental stage (adult vs. larval). Data analysis predicted 148 different GPCRs, which is similar to the number predicted in Chilo suppressalis (29). A lack of close homologs of known function from related species made confident annotation of these GPCRs difficult. In addition, certain neuropeptide GPCRs identified previously in other arthropods (e.g., Crz, sulfakinin, and pyrokinin receptors) were not observed in the present phylogenetic analysis.
KEGG analysis identified energy metabolism pathways, such as glycometabolism, that were significantly affected by salinity, which was similar to a previous study in Litopenaeus vannamei (30); our further studies will focus on the energy metabolism of prawns under salinity acclimation. Interestingly, GO functional annotation of the DEGs associated the response of prawns to salinity acclimation with the "G-protein coupled receptor signaling pathway." Thus, we identified differentially expressed neuropeptide and GPCR genes that are plausibly related to salinity acclimation. The neuropeptides and their putative cognate receptors were analyzed using qRT-PCR. For example, CCAP is a C-terminally amidated nonapeptide hormone found in many crustacean species, such as the blue crab (Callinectes sapidus) (31). In addition to its role in heartbeat regulation, direct evidence points to a role for CCAP in the regulation of homeostasis in L. vannamei (8), which is consistent with our result that CCAP and its receptor mRNA expression were upregulated under both high- and low-salinity conditions in M. nipponense.
In agreement with the results of the present study, previous studies confirmed that salinity changes in crustaceans upregulated the transcript levels of peptide hormones (32,33), such as CHH and ITP. The injection of purified CHH increased the Na+ concentration and osmolality in the hemolymph (34). Notably, crustacean CHHs showed high sequence homology to ITP (35). Our results indicated much higher levels of ITP transcripts in the high salinity and low salinity groups than in the control group, suggesting that ITP might function in ionic transport or osmo-regulation, or both, in prawns. GIH has an important function in crustacean ovarian maturation inhibition (36). The results of the present study showed that high salinity downregulated GIH expression. This indicated that salinity and gonadal development might correlate strongly in M. nipponense. Therefore, further study is required to gain a better understanding of the functions of these neuropeptides and their GPCRs associated with the effects of salinity on the prawn reproduction system.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary Material. | 2020-09-08T13:06:58.471Z | 2020-09-08T00:00:00.000 | {
"year": 2020,
"sha1": "d619ccdbde30e14a50dbc384ee6362cb84845fb5",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2020.00623/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d619ccdbde30e14a50dbc384ee6362cb84845fb5",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
227190111 | pes2o/s2orc | v3-fos-license | Viability and Antioxidant Effects of Traditional Cooling Rice Powder (bedak sejuk) Made from Oryza sativa ssp. Indica and Oryza sativa ssp. japonica on UVB-Induced B164A5 Melanoma Cells
Background: Traditional cooling rice powder (bedak sejuk) is a fermented rice-based cosmetic that is applied topically on one’s skin, as an overnight facial mask. According to user testimonies, bedak sejuk beautifies and whitens skin, whereby these benefits could be utilised as a potential melanoma chemopreventive agent. Objective: Hence, this study aimed to determine the effects of bedak sejuk made from Oryza sativa ssp. indica (Indica) and Oryza sativa ssp. japonica (Japonica) on UVB-induced B164A5 melanoma cells, and also identify the antioxidant capacities of both types of bedak sejuk. Methods: The optimum dose of Indica and Japonica bedak sejuk to treat the cells was determined via the MTT assay. Then, the antioxidant capacities of both types of bedak sejuk were determined using the FRAP assay. Results: From the MTT assay, it was found that Indica and Japonica bedak sejuk showed no cytotoxic effects towards the cells. Hence, no IC50 can be obtained and two of the higher doses, 50 and 100 g/L were chosen for treatment. In the FRAP assay, Indica bedak sejuk at 50 and 100 g/L showed FRAP values of 0.003 ± 0.001 μg AA (ascorbic acid)/g of bedak sejuk and 0.004 ± 0.0003 μg AA/g of bedak sejuk. Whereas Japonica bedak sejuk at 50 g/L had the same FRAP value as Indica bedak sejuk at 100 g/L. As for Japonica bedak sejuk at 100 g/L, it showed the highest antioxidant capacity with the FRAP value of 0.01 ± 0.0007 μg AA/g of bedak sejuk which was statistically significant (p < 0.05) when compared to other tested concentrations. Conclusion: In conclusion, Japonica bedak sejuk has a higher antioxidant capacity compared to Indica bedak sejuk despite both being not cytotoxic towards the cells. Regardless, further investigations need to be done before bedak sejuk could be developed as potential melanoma chemoprevention agents.
Introduction
The largest organ of the human body is the skin, which accounts for 16 % of the human body weight. The epidermis of the skin functions as a protective layer that separates organisms from the external environment. This is crucial in counteracting environmental stressors such as ultraviolet (UV) light. When UV light penetrates the epidermis, the melanin pigments in keratinocytes form a protective cover above the keratinocyte nuclei. This confers protection for the skin against UV light penetration as well as neutralising reactive oxygen species (ROS).
However, increased exposure to UV light is the main cause of hyperpigmentation and skin damage (Chan et al., 2014; American Cancer Society, 2020). The UV light from the Sun that penetrates the skin consists of UVA (90-95%) and UVB rays (5-10%), whereby the extent of skin damage depends on the wavelength of each ray. UVA rays have longer wavelengths (320-400 nm) that penetrate deep into the dermis and result in free radical formation, especially ROS. UVB rays, with shorter wavelengths (280-320 nm) that penetrate only as far as the epidermal layer, damage DNA, which in turn causes mutations. This combination of ROS formation and mutation ultimately contributes to the development of skin cancer (Sander et al., 2003; Pfeifer and Besaratinia, 2012; D'Orazio et al., 2013; Kamarulzaman et al., 2017; Pavel et al., 2017; Nagapan et al., 2018).
A type of skin cancer that arises from overexposure to UV light is melanoma. Melanoma occurs when melanocytes grow out of control due to mutations in DNA. Although melanoma accounts for less than 10% of all skin cancers, it contributes to the majority of skin cancer-related deaths. This can be attributed to melanoma's high metastatic potential and resistance to therapy (Pfeifer and Besaratinia, 2012; American Cancer Society, 2020). Therefore, melanoma chemoprevention strategies are more suitable for tackling the occurrence of this disease (Chhabra et al., 2017).
In line with melanoma chemoprevention strategies, the idea emerged to use a local cosmetic product that has the potential to decrease ROS generation and mutation on human skin. This product, traditional cooling rice powder, more commonly known as bedak sejuk in Malaysia, is a traditional fermented rice-based cosmetic. The rice grains that have been fermented in previous studies to produce bedak sejuk are Oryza sativa ssp. indica (Indica) and Oryza sativa ssp. japonica (Japonica) (Dzulfakar et al., 2015a). Indica rice grains are long, flat, slender, shatter easily, and have a high amylose content, while Japonica rice grains are short, round, do not shatter easily, and have a low amylose content (Ricepedia, n.d.). Bedak sejuk, as a product of rice grain fermentation, takes the shape of water droplets or cone-shaped pastilles. When these pastilles are mixed with water and applied onto the skin as an overnight facial mask, they give off a cooling effect. User testimonies passed down from generation to generation claim that bedak sejuk is able to beautify as well as whiten skin (Dzulfakar et al., 2015b; Dzulfakar et al., 2016c; Johar et al., 2018). However, these testimonies have never been tested in a laboratory setting, especially regarding their effects on both normal and malignant skin cells.
Hence, our study aimed to determine the effects of bedak sejuk made from the Indica and Japonica rice subspecies using UVB-induced B164A5 melanoma cells, as well as to identify the antioxidant capacities of both types of bedak sejuk.
Preparation of Indica and Japonica bedak sejuk
The bedak sejuk made from the Indica and Japonica rice subspecies was kindly provided by the Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor. To prepare bedak sejuk, Indica and Japonica rice grains were soaked in tap water at a ratio of 1:1 (w/v) in separate, non-sterilised closed containers. The rice grains were then allowed to undergo natural fermentation for 14 days at ambient temperature. On the 14th day, the rice grains were filtered using a muslin cloth and soaked again in a new batch of water at the same ratio (w/v). The soaking was repeated every 14 days for a total of six soaking cycles, bringing the overall soaking process to 84 days. After 84 days, the resulting rice paste from each rice subspecies was collected and dried in an oven to produce the powdered form of bedak sejuk (Dzulfakar et al., 2016a). The bedak sejuk was then kept in a refrigerator at 4 °C until further use. Prior to experiments, the bedak sejuk was dissolved in distilled water and filtered through a 0.22 µm Millipore syringe filter for sterilisation.
Cell culture
The B164A5 murine melanoma cell line was purchased from the European Collection of Authenticated Cell Cultures (ECACC). The cells were cultured in Dulbecco's Modified Eagle Medium (DMEM) enriched with 10% foetal bovine serum (FBS), 1% Penicillin-Streptomycin mixture (Pen/Strep, 10,000 IU/mL), glucose, and L-glutamine. The cells were then incubated in a humidified atmosphere at 37 °C in 5% CO2. When cell confluency reached 80%, the cells were sub-cultured (Public Health England, n.d.). Before the assays were conducted, the growth curve of the B164A5 cells was plotted. From the growth curve, a doubling time of 24 hours was obtained, a finding supported by a study from Danciu et al. (2013).
MTT assay
The cytotoxicity of both types of bedak sejuk towards B164A5 cells was evaluated through the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) cell viability assay, according to the method of Mosmann (1983) with slight modifications. 200 µL of 5 × 10⁴ cells were seeded in a 96-well flat-bottom plate and incubated for 24 hours in a humidified atmosphere at 37 °C in 5% CO2. After 24 hours of incubation, the media was discarded and replaced with 200 µL of PBS. Then, the cells were exposed to UVB radiation at 30 mJ/cm² for 36.4 seconds (Lin et al., 2002). Immediately after the UVB exposure, the PBS was discarded and the cells were treated with both types of bedak sejuk in serially diluted concentrations of 6.25, 12.5, 25, 50, and 100 g/L. As the positive control, cells were treated with menadione in serially diluted concentrations of 0.0625, 0.125, 0.25, 0.5, and 1 mM according to Basri et al. (2015), while the negative control was untreated cells. Following treatment, the cells were incubated for another 24 hours. After the incubation period, 20 µL of 5 mg/mL MTT solution was added into each well and incubated for 4 hours. Then, 190 µL of the mixture was discarded from each well, and 200 µL of DMSO was added and incubated for 15 minutes. Finally, the plate was shaken for 5 minutes and absorbance readings were taken at 570 nm using a microplate reader. The percentage of cell viability was calculated using the following formula: Cell viability (%) = (mean OD of treated cells / mean OD of negative control) × 100.
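To make the dilution scheme and the viability formula above concrete, here is a minimal Python sketch; the function names, OD readings, and triplicate values are illustrative assumptions rather than data from this study.

```python
import numpy as np

def serial_dilutions(top_dose: float, n: int, factor: float = 2.0) -> list:
    """Return n doses obtained by repeatedly diluting top_dose (e.g. 100 g/L) by factor."""
    return [top_dose / factor**i for i in range(n)]

def cell_viability(od_treated, od_control) -> float:
    """Cell viability (%) = mean OD of treated cells / mean OD of negative control x 100."""
    return float(np.mean(od_treated) / np.mean(od_control) * 100.0)

# Hypothetical triplicate OD570 readings for one treated well and the untreated control.
print(serial_dilutions(100, 5))                      # [100.0, 50.0, 25.0, 12.5, 6.25]
print(round(cell_viability([0.61, 0.58, 0.63],
                           [0.66, 0.70, 0.68]), 1))  # 89.2
```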
Ferric Reducing Antioxidant Power (FRAP) assay
The antioxidant capacities of both types of bedak sejuk were determined via the FRAP assay, as described by Benzie and Strain (1996). This method evaluates antioxidant power through the reduction of ferric (Fe3+) to ferrous (Fe2+) ions at low pH, which results in the formation of a coloured ferrous-tripyridyltriazine complex. Firstly, the FRAP working reagent was prepared freshly by mixing acetate buffer (30 mM, pH 3.6), iron (III) chloride (FeCl3) solution (20 mM), and TPTZ solution (10 mM) in a ratio of 10:1:1. The prepared FRAP working reagent was kept in a water bath at 37 °C and protected from light. This was followed by the preparation of the iron (II) sulphate (FeSO4) calibration curve using serially diluted concentrations ranging from 100 to 1,000 µM. Ascorbic acid was used as the positive control and was prepared, in the dark, using serially diluted concentrations ranging from 3.125 to 50 µg/mL (Hasiah et al., 2011). For the experimental steps, 50 µL of FeSO4, ascorbic acid, and both types of bedak sejuk were added into their allocated wells in a 96-well plate. After that, 175 µL of warmed FRAP working reagent was added into each well. The plate was then incubated at 37 °C for 5 minutes. Finally, absorbance readings were taken at 595 nm using a microplate reader, and the FRAP values were expressed as ascorbic acid equivalent antioxidant capacity (AEAC) (Gashahun and Solomon, 2019).

Statistical analysis

The SPSS v25 software was used for data analysis and presentation. Data from three independent experiments (n = 3) were expressed as mean ± SEM. One-way ANOVA was used for comparisons between means. The alpha used was 0.05, and a p value < 0.05 was considered statistically significant.
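As a rough illustration of the AEAC conversion and the ANOVA comparison described above, consider the following hedged Python sketch; the standard-curve absorbances, sample readings, and replicate values are invented for demonstration, and dilution/volume conversion factors are omitted for brevity.

```python
import numpy as np
from scipy import stats

# Hypothetical ascorbic acid standards (ug/mL) and their A595 readings.
std_conc = np.array([3.125, 6.25, 12.5, 25.0, 50.0])
std_abs = np.array([0.055, 0.110, 0.221, 0.438, 0.880])

# Inverse regression: predict concentration from absorbance.
slope, intercept, r, _, _ = stats.linregress(std_abs, std_conc)

def aeac(sample_abs: float, grams_of_sample: float) -> float:
    """FRAP value as ug ascorbic acid equivalents per g of bedak sejuk."""
    return (slope * sample_abs + intercept) / grams_of_sample

# One-way ANOVA across three tested conditions (triplicates, n = 3; made-up values).
f_stat, p_value = stats.f_oneway([0.003, 0.004, 0.002],
                                 [0.004, 0.005, 0.003],
                                 [0.010, 0.011, 0.009])
print(round(aeac(0.12, 5.0), 3), p_value < 0.05)
```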
Cytotoxicity of Indica and Japonica bedak sejuk
The cytotoxicity of both types of bedak sejuk was evaluated against UVB-induced B164A5 melanoma cells via the MTT assay. Menadione, used as the positive control, showed cytotoxicity with an IC50 of 0.04 ± 0.02 mM (Figure 1). Each concentration of menadione was statistically significant (p < 0.05) when compared with the negative control. In contrast, Indica and Japonica bedak sejuk showed no cytotoxicity; hence, no IC50 values were obtained. Both types of bedak sejuk showed a reduction in cell viability at the lower concentrations, which gradually increased as the concentration increased (Figures 2 and 3). In addition, all concentrations of both types of bedak sejuk were not statistically significant (p > 0.05) when compared with the negative control. These results demonstrate that both types of bedak sejuk were not cytotoxic to UVB-induced B164A5 melanoma cells.
Antioxidant capacities of Indica and Japonica bedak sejuk
The reducing capabilities of both types of bedak sejuk as antioxidants were determined through the FRAP assay. The FRAP values were expressed as ascorbic acid equivalent antioxidant capacity (AEAC) in µg AA (ascorbic acid)/g of bedak sejuk. The FRAP values for 50 and 100 g/L of Indica bedak sejuk were 0.003 ± 0.001 and 0.004 ± 0.0003 µg AA/g of bedak sejuk, respectively (Figure 4). For Japonica bedak sejuk, the FRAP values for 50 and 100 g/L were 0.004 ± 0.0003 and 0.01 ± 0.0007 µg AA/g of bedak sejuk, respectively, showing a dose-dependent manner (Figure 4).
When the FRAP values of Indica bedak sejuk at 50 and 100 g/L were compared, the difference was not statistically significant (p = 0.803). However, the difference between the FRAP values of Japonica bedak sejuk at 50 and 100 g/L was statistically significant (p = 0.006). In addition, the FRAP value of Japonica bedak sejuk at 100 g/L, which showed the highest antioxidant capacity, was statistically significant when compared to the FRAP values of Indica bedak sejuk at both 50 g/L (p = 0.001) and 100 g/L (p = 0.003).
Discussion
Recently, the use of natural products in skincare has shown an increasing trend. Most of these natural products have been proven to have antioxidant characteristics in addition to providing protection to the skin against UV light (Abdul Wahab et al., 2014). One example of such a product is bedak sejuk, a fermented rice-based cosmetic. For generations, bedak sejuk pastilles have been mixed with water and applied topically on the skin as an overnight facial mask (Dzulfakar et al., 2016c; Johar et al., 2018).
In the fermentation of rice grains to produce bedak sejuk, lactic acid bacteria (LAB) are usually involved, alongside moulds and yeasts (Dzulfakar et al., 2015a; Dzulfakar et al., 2016b). The use of cosmetic products containing this LAB-fermented rice component results in expansion and smoothness of the product, as well as a wet feeling upon application to the skin (Sawaki et al., 2010). A similar experience is reported by bedak sejuk users, whereby the application of bedak sejuk produces a cooling sensation (Dzulfakar et al., 2015a). There is also an interesting relationship between the skin and LAB fermentation: the LAB soaking water used during the fermentation of rice grains contains lactic acid and other amino acids that contribute to skin hydration. These benefits make the LAB soaking water useful as a cosmetic product source. Hence, the combination of a substrate or medium such as rice grains and LAB strains may bring about cosmetic effects such as antioxidant effects, pH control, and prevention of cell stress (Izawa and Sone, 2014). These benefits are in line with melanoma chemoprevention strategies. However, bedak sejuk has yet to be investigated in the laboratory for its effects on malignant skin cells. Hence, as a preliminary study for a potential melanoma chemoprevention agent, UVB-induced B164A5 murine melanoma cells are a suitable cancer model.
Firstly, in the MTT assay, it was found that Indica and Japonica bedak sejuk were not cytotoxic towards UVB-induced B164A5 melanoma cells. However, it was noted that there was a decrease in cell viability at lower concentrations of bedak sejuk, followed by an increase in cell viability at higher concentrations. This decrease in cell viability can be attributed to the fact that UVB rays are cytotoxic towards cells, in this case B164A5 cells (Pavel et al., 2017). At the same time, the lower concentrations of bedak sejuk were not sufficient to increase the number of UVB-induced cells. Regardless, the decrease in cell viability did not fall below the 50% mark; hence, no IC50 could be obtained for either type of bedak sejuk. As a result, two of the higher doses, 50 g/L and 100 g/L, were chosen as the treatment doses for the next assay.
In the FRAP assay, it was found that Japonica bedak sejuk had a higher antioxidant capacity than Indica bedak sejuk. The difference could be explained by the fact that Japonica rice grains are cultivated in temperate and colder regions of Asia, while Indica rice grains are cultivated throughout tropical Asia (Garris et al., 2005). Environmental temperature plays an essential role in the antioxidant activities of plants, and plants cultivated in colder weather have more pronounced antioxidant activities than plants cultivated in warmer weather. This increase in antioxidant activity can be attributed to the production of more phytochemicals as the plants undergo stress in colder weather (Kumar et al., 2017).
There are also other fermented rice products that have exhibited promising results. Firstly, there is Galactomyces ferment filtrate (GFF), a by-product of rice fermentation by the Galactomyces yeast. The GFF extract contains a unique blend of vitamins, minerals, small peptides, and oligosaccharides that are used as cosmetic ingredients in skincare products. The extract has demonstrated antioxidant characteristics by protecting normal human epidermal melanocytes (NHEM) from oxidative stress (Woolridge Cooper, 2018). These findings for GFF were similar to the findings for bedak sejuk in this study.
Rice bran also possesses strong antioxidant activities, to the point of being cytotoxic towards melanocytes. However, when rice bran is fermented, the cytotoxicity of the rice bran extract towards B16F1 cells is eliminated. This shows that the fermentation of rice bran produces new beneficial compounds with biological functions, although the exact compounds have yet to be elucidated (Chung et al., 2009).
Finally, the rice soaking water from bedak sejuk powder contains amino acids that could be beneficial for cosmetic applications. Sixteen out of 17 amino acids were detected in Indica bedak sejuk as well as in its soaking water. Glutamic acid was the most abundant amino acid found in both Indica bedak sejuk and its soaking water (Johar et al., 2018). Glutamine (a glutamic acid derivative), alongside arginine, tyrosine, and lysine, which were detected in bedak sejuk and its soaking water, are among the main amino acids used in the cosmetics industry (Ha et al., 2018). In addition, amino acids in cosmetic products function as antioxidants (Ivanov et al., 2013), and the amino acid content of Indica bedak sejuk was much higher than that of its soaking water. Hence, the application of bedak sejuk is more effective in terms of amino acid content when compared to the soaking water (Johar et al., 2018).
In summary, Indica and Japonica bedak sejuk need to be investigated further before they can be said to have potential as melanoma chemoprevention agents that are able to (1) prevent melanoma, (2) prevent the development of malignant melanoma from pre-malignant lesions, or (3) prevent the recurrence of melanoma after successful melanoma treatment (Chhabra et al., 2017).
Structural family factors and bullying at school: a large scale investigation based on a Chinese adolescent sample
Background: Various family factors have been identified in association with school bullying and the involvement of children and adolescents in bullying behaviors. Methods: A total of 11,919 participants (female = 6671, mean age = 15) from 22 middle schools in Suzhou City, China completed the questionnaire. The associations between structural family factors (family socio-economic status, living arrangement, number of siblings, local resident/migrant status, urban/rural hukou [a household registration system in China], and paternal and maternal education levels) and various bullying-related constructs (i.e. bullying witnessing, bullying involvement, bystander intervention, and fear of being bullied) were examined. Odds ratios (ORs) adjusted for covariates were calculated for the four bullying-related constructs (bullying witnessing, bullying involvement, bystander intervention, and reactions to being bullied) using the structural family factors. Results: The results showed that all demographic household characteristics were associated with bullying at school, except for being from a single-child family. Adolescents from rural families witnessed more bullying incidents than those from urban families (OR = 1.35, 95% CI: [1.09, 1.68]). Adolescents who came from migrant families (OR = 1.12, 95% CI: [1.07, 1.43]), with a rural hukou (OR = 1.31, 95% CI: [1.00, 1.74]) and low paternal education levels (OR = 1.42, 95% CI: [1.01, 2.57]), were more likely to be bullies. Adolescents who came from migrant families (OR = 1.37, 95% CI: [1.03, 1.82]), with low maternal education levels (OR = 1.42, 95% CI: [1.06, 1.91]), engaged in more negative bystander intervention behaviors. Furthermore, adolescents with less educated mothers experienced a higher fear of being bullied (never versus sometimes: OR = 1.33, 95% CI: [1.00, 1.85]; never versus usually: OR = 1.39, 95% CI: [1.01, 1.20]). Conclusions: A systematic examination of the relationship between school bullying and demographic household characteristics may be used to inform school policies on bullying, such as training school management to pay attention to adolescents from disadvantaged household backgrounds. Identifying demographic factors that may predict bullying can also be used to prevent individuals from becoming involved in bullying and to reduce the related negative consequences of being bullied. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-021-12367-3.
Introduction
School bullying has been widely identified as a risk factor for adolescents' poor psychological wellbeing and a major challenge for school management. It is estimated that the percentage of students involved in bullying ranges from 8 to 40% [1]. In China, this number is estimated to be nearly 20% [2]. Considering the pervasive and detrimental effects of bullying on the psychosocial development of adolescents [3], it is necessary to further explore both the risk and protective factors of school bullying. Previous literature has shown that socio-demographic factors, including sex, race, age, and family socio-economic status, may influence the risk of bullying on campus [4].
Family systems have long been recognized as a source of influence on adolescents' social behaviors at school, including aggressive behaviors like bullying [5]. Previous studies have shown that several family characteristics are closely related to school bullying, such as family environments, parent-child relations, and family norms [6]. Family environments have been conceptualized as a multi-component factor including general feelings of safety at home, perceived parental support, and so forth [7]. Previous research has further found that negative family environments may be associated with higher risks of bullying victimization at school [7]. Oliveira and colleagues [8] found that positive family interactions may protect adolescents from being involved in school bullying (both as perpetrators and victims). Evidence also shows that adolescents who have good relationships with their parents are less likely to be bullies or victims [9]. Also, Orozco-Vargas [10] found that family moral values were closely related to the likelihood of being bullied among adolescent girls.
Living in a multi-generation family could also be associated with bullying at school. Research has shown that grandparents' involvement in family education could be associated with fewer adjustment problems for adolescents [11]. Zhang [12] found that co-habiting with grandparents could significantly facilitate adolescents' academic performance, which is a well-established protective factor that buffers against involvement in school bullying. Considering the positive effects of grandparents' involvement, further studies are needed to examine the protective effects of living with grandparents on involvement in school bullying.
Having siblings has also been identified as a protective factor against school bullying. Living in a single-child family could significantly influence adolescents' social adjustment at school [13], which is closely related to school bullying [14]. However, such influence is complex. On the one hand, single-child families are prone to attending excessively to the only child, which could hinder normal social development and lead to maladjustment, such as peer conflicts and loneliness [15]. On the other hand, single-child families are able to allocate more resources to the only child and thus could enhance the general well-being of the child (Yang J: Has the one-child policy improved adolescents' educational wellbeing in China?, Unpublished).
In addition, various other household demographic characteristics, such as family structure and socio-economic status, could also be related to school bullying [16]. For example, Ackerman and colleagues [17] found that adolescents from unmarried or cohabiting families are more likely to engage in delinquent and aggressive behaviors, such as bullying. Similarly, Fung and colleagues [18] found that adolescents from single-mother or step-mother families engaged in more aggressive behaviors compared to their counterparts. Previous evidence also showed that adolescents from disadvantageous socio-economic backgrounds (e.g. lower household incomes, lower paternal/maternal education levels, and living in rural areas) scored higher on traits like impulsivity and showed more anti-social behaviors than their socio-economically advantaged counterparts [19]. However, more recent evidence suggests that family characteristics do not act as predictors of involvement in school bullying [20]. These inconsistent findings about the effects of household characteristics on bullying warrant further investigation.
Moreover, the relationships between household demographics, Chinese culture, and bullying have had little exploration. For example, a single-child family is defined as a family composed of one child [21]. As the "one-child" policy was enforced in China from 1979, single-child families are predominant in urban areas of China [21]. It is important to note that China has operated under a "two-child" policy since 2015 [22]. While current urban families in China are skewed towards having an only child, it is likely that, due to the two- and three-child policy changes, there will be a more diverse range of family compositions in China in the future. An internal migrant family is defined as a rural household that has moved to a Chinese city [23]. Due to the large-scale migration of Chinese families from rural areas to cities over the last two decades, internal migrant families are much more prevalent in China than in Western countries [23]. Migration is therefore an important socio-demographic characteristic in the Chinese context, and Western research cannot be applied to China without adaptation, given the difference in the number of internal migrants. A multi-generation family refers to a family consisting of more than two generations in one household. Again, this is more prevalent in China than in Western countries as part of China's cultural traditions [24]. It is therefore necessary to examine whether adolescents from these types of families are more or less likely to be involved in school bullying. Previous evidence showed that living in an internal migrant family could be associated with the probability of adolescents' involvement in bullying. One recent investigation [25] showed that adolescents from such families are more likely to perpetrate bullying than their local counterparts. However, there is also research evidence [26] suggesting that internally migrated adolescents did not experience more peer problems than non-migrant adolescents. Considering such inconsistencies in the literature, how being a single child or living in an internal migrant family could be related to adolescents' bullying experiences at school should be further explored.
Taking into consideration the contradictory findings above and the need to examine school bullying in the Chinese cultural context, there are several gaps in the literature that this study aims to address. First, the relationships between some Chinese household types (e.g. single-child families, intergenerational families, internal migrant families) and bullying have rarely been explored in research. Second, most studies focus only on the association between family structures and roles in bullying (i.e. bully, victim, and bully/victim), neglecting other bullying-related constructs [27]. To fill this gap in the literature, the current research focused on the association between household demographic characteristics and a full range of bullying indicators: bullying witnessing, bullying involvement, bystander intervention, and reactions to being bullied. The researchers hypothesized that: (1) low paternal/maternal education levels and holding a rural hukou would be associated with a higher risk of being bullied or bullying others at school; and (2) living in a multi-generational family would be associated with a lower risk of being bullied or bullying others at school. Associations between other household demographics and bullying-related constructs could not be predicted due to the paucity of research evidence so far.
Participants
In January 2019, 22 middle schools (grades 7-11) in Suzhou, a major city in Eastern China, were invited to participate in the research, with no school declining the invitation. Cluster sampling methods were used to select middle and high schools in one of the districts of Suzhou City. A total of 11,919 questionnaires were returned, with a response rate of 83.2%. Assent was obtained from participants, and passive informed consent was obtained from their main guardian prior to the pencil-and-paper questionnaires being filled in at school. Teachers were involved in obtaining passive parental consent, and an information sheet was given to parents/guardians, allowing them the opportunity to consider whether their child should take part in the study and to inform the teacher if they did not want their child to participate. If the parent did not inform the teacher that they objected to the research, it was passively assumed that consent had been given for the child to participate (Hollmann & McNamara, 1999). A non-anonymous survey format was adopted in the current research. Participants were instructed to sign their name on the questionnaire and were assured that their name would be kept confidential. The inputted data were anonymous to the research team. The research was approved by the Ethics Committee of the Mental Health Center of Suzhou (approval SGLS2017-037).
Social demographics checklist
The socio-demographic information collected included: (1) sex and age; (2) rural/urban hukou; (3) migrant status (local residence/moved from other areas of China); (4) living arrangement (living with parents/grandparents/other relatives); (5) education levels of parents; and (6) family economic status. Considering the difficulty for adolescents of reporting household income precisely, we used the item "Are you living in your own house or in a rented house?" to roughly measure family economic status. Finally, we collected information on (7) being a single child or having sibling(s).
Bullying questionnaire
Items in the bullying questionnaire included: (1) Bullying Witnessing: "During this school year how often have you seen someone being bullied?" (2) Bullying Involvement: "During this school year how often have you been bullied at school?" and "During this school year how often have you bullied others?" (3) Bystander Intervention: "If you saw bullying at school, what would you do?"; bystander reactions were classified as not intervening (i.e. looking on and doing nothing), negatively intervening (teasing those being bullied), and positively intervening (helping those being bullied). (4) Fear of Being Bullied: "During the past year how often did you miss school because you felt unsafe, uncomfortable, or nervous at school or on your way to/from school?" In the current research, two traditional forms of bullying, verbal insult (i.e. teasing) and physical assault (i.e. pushing, shoving, kicking, slapping, or hitting), were measured [28].
Statistical analysis
All statistical analyses were conducted using SPSS 26.0 and Mplus 7.0. Considering the hierarchical nature of the data (i.e. individuals nested within schools), multilevel regression modeling was used to take possible clustering effects into consideration. First, intra-class correlations (ICCs) were calculated to determine the degree of homogeneity of the outcome variables within the clusters [29]. Then, two-level logistic regression models with random intercepts [30] were estimated, and multivariate-adjusted odds ratios (ORs) for the four bullying-related constructs (bullying witnessing, bullying involvement, bystander intervention, and reactions to being bullied) were calculated with the various socio-demographic characteristics of family households.
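As an aside for readers unfamiliar with ICCs, the one-way ANOVA estimator can be written in a few lines; the sketch below is a self-contained illustration with simulated data and hypothetical column names, whereas the actual analysis used the random-intercept models described above.

```python
import numpy as np
import pandas as pd

def icc_anova(df: pd.DataFrame, cluster: str, outcome: str) -> float:
    """ICC(1): share of outcome variance attributable to between-cluster variation."""
    groups = [g[outcome].to_numpy(dtype=float) for _, g in df.groupby(cluster)]
    k = len(groups)
    sizes = np.array([len(g) for g in groups])
    total = sizes.sum()
    grand_mean = np.concatenate(groups).mean()
    msb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (total - k)
    n0 = (total - (sizes ** 2).sum() / total) / (k - 1)  # correction for unequal cluster sizes
    return (msb - msw) / (msb + (n0 - 1) * msw)

rng = np.random.default_rng(0)
demo = pd.DataFrame({"school": rng.integers(0, 22, 500),
                     "witnessed": rng.integers(0, 2, 500)})
print(icc_anova(demo, "school", "witnessed"))  # near 0 for random data
```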
As bullying witnessing and fear of being bullied were coded as ordinal categorical variables (never, sometimes, usually, and almost every day), parallel line tests were conducted to examine whether the associations between the predictor and outcome variables differed across categories.
Preliminary results
Sample characteristics are shown in Table 1. Results showed that 26.4% of boys and 14.3% of girls had been involved in bullying at least once in this study. All participants were classified into four categories using a dichotomous (yes/no) response to each item: bullies (n = 1515, 12.7%), victims (n = 463, 3.9%), bully/victims (n = 480, 4.0%), and participants not involved in bullying (n = 9461, 79.4%).
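The fourfold classification above follows mechanically from the two dichotomous involvement items; a hedged sketch is shown below, in which the variable names are assumptions rather than the study's actual coding.

```python
def bullying_role(was_bullied: bool, bullied_others: bool) -> str:
    """Map the two yes/no involvement items onto the four categories used above."""
    if bullied_others and was_bullied:
        return "bully/victim"
    if bullied_others:
        return "bully"
    if was_bullied:
        return "victim"
    return "not involved"

print(bullying_role(False, True))  # 'bully'
print(bullying_role(True, True))   # 'bully/victim'
```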
Results showed that the ICCs for all four dependent variables (bullying witnessing, bullying involvement, fear of being bullied, and bystander intervention) ranged from 0.020 to 0.043, meaning that 2 to 4.3% of the variation in the dependent variables could be attributed to the level-2 variable (i.e. school).
Family characteristics and bullying witnessing
Odds ratios (ORs) for bullying witnessing are listed in Table 2. A parallel line test showed that the associations between family characteristics and witnessing bullying did not differ across categories (p = 0.51). Results showed that adolescents who came from rural families witnessed more bullying than those who came from urban families (never versus often: OR = 1.35, 95% CI: [1.09, 1.68]). Other family demographics (family status, economic status, etc.) were not related to witnessing bullying.
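For readers unfamiliar with how such adjusted ORs and confidence intervals are obtained, the generic computation is sketched below with statsmodels; the predictors and simulated data are hypothetical stand-ins, and the models reported in this paper additionally included school-level random intercepts.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({"rural_family": rng.integers(0, 2, 800),
                   "migrant": rng.integers(0, 2, 800),
                   "witnessed": rng.integers(0, 2, 800)})

X = sm.add_constant(df[["rural_family", "migrant"]])
fit = sm.Logit(df["witnessed"], X).fit(disp=0)

ors = np.exp(fit.params)     # OR = exp(beta)
ci = np.exp(fit.conf_int())  # 95% CI on the OR scale
print(pd.concat([ors.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```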
Family characteristics and bullying involvement
Adjusted ORs for bullying involvement are listed in Table 3. As summarized in the Abstract, adolescents who came from migrant families (OR = 1.12, 95% CI: [1.07, 1.43]), held a rural hukou (OR = 1.31, 95% CI: [1.00, 1.74]), and had low paternal education levels (OR = 1.42, 95% CI: [1.01, 2.57]) were more likely to be bullies.
Family characteristics and bystander intervention
ORs for bystander intervention are listed in Table 4. Results showed that adolescents who came from migrant families (OR = 1.37, 95% CI: [1.03, 1.82]) with low maternal education levels (OR = 1.42, 95% CI: [1.06, 1.91]) engaged in more negative intervention behaviors, such as teasing those who were bullied. Other family factors (family status, economic status, etc.) were not associated with bystander intervention.
Family characteristics and fear of being bullied
Adjusted ORs for fear of being bullied are listed in Table 5. A parallel line test showed that the associations between family factors and fear of being bullied did not differ across categories (p = 0.66). Results showed that adolescents with a less educated mother suffered higher levels of fear of being bullied (never versus sometimes: OR = 1.33, 95% CI: [1.00, 1.85]; never versus usually: OR = 1.39, 95% CI: [1.01, 1.20]).
Discussion
In the current research, the associations between household socio-demographic characteristics, including family migrant status (migrants/non-migrants), hukou status (urban/rural hukou), paternal/maternal education levels, and living arrangements, were explored in relation to four bullying-related constructs. The results indicated that the aforementioned demographic factors were closely related to adolescents' bullying behaviors at school. To the best of our knowledge, this is the first research study to systematically examine the relations between family demographics and bullying-related constructs. This study found that migrant status (migrants/local residents) was the only socio-demographic factor associated with bullying witnessing, with adolescents from migrant families observing more incidents of bullying at school. Previous research has shown that migrant adolescents do not have equal access to various social welfare services (i.e. entrance to public schools and health care) compared to their urban counterparts, and that they are also more likely to experience peer exclusion or discrimination [31]. Previous evidence also showed that individuals who witnessed bullying scenarios were more likely to become involved in bullying than their counterparts [32]. Together with the current results, this suggests that the disadvantages faced by migrant adolescents may render them more susceptible to school bullying. Further research is required to explore the impact of migrant status on bullying in schools to better understand this finding.
As hypothesized, being migrants, holding a rural hukou, and low parental education levels were positively associated with the risk of being bullies (or bully/victims). This result is consistent with previous findings showing that a low household income and low parental social status are predictive of delinquency during adolescence [33]. Migrant adolescents may hold different social norms than their local counterparts, and intolerance of customary differences may instigate a bullying environment in schools. Research further shows that bullying behaviors may have different implications for rural and urban adolescents. As shown in the current study and several previous studies, a large proportion of internal migrants in China are from rural areas [23]. Because of this, there is a noticeable difference in the perception of bullying between rural and urban adolescents. A large proportion of rural adolescents (especially males) regard bullying others as a status symbol (i.e. masculine capital), while urban adolescents usually interpret bullying as an act of "lack of self-restraint" and "rudeness" [34]. This differentiation in social values may also explain why migrant adolescents were more likely to be bullies in this study. In addition, adolescents who have a less educated father (but not mother) were also more likely to be bullies in this study. Parental education level has long been regarded as an important socio-economic factor. Considering the significant effect of socio-economic characteristics on adolescents' social behaviors, the association between fathers' education level and bullying may partly be due to fathers' dominant role in maintaining the family's socio-economic status, which the child mimics [35]. Contrary to our original hypothesis, adolescents who lived with their grandparents did not have a higher or lower likelihood of being bullies. This result is inconsistent with some previous research suggesting that cohabiting with grandparents could be positively associated with adolescents' social adjustment. Previous research [11] evaluated how grandparents' involvement, but not living only with grandparents, could affect the possibility of adolescents' involvement in bullying. However, family dynamics could be quite different when grandparents are only involved with the child's upbringing but do not live with the child. The child living only with their grandparents could possibly explain the inconsistency in the research results. Future studies are required to clarify the role of grandparents in preventing or exacerbating bullying at school, with special care paid to exploring differences in family composition (living with grandparents and parents, living only with grandparents, or living with parents while receiving care from grandparents).

Table 2 Odds ratios for bullying witnessing. Notes: (1) bullying witnessing is coded as an ordinal categorical variable (never, sometimes, often) with the never group as the reference; *p < 0.05, **p < 0.01; ***p < 0.001. ORs are reported for never versus sometimes and never versus often.
It is important to note that none of the aforementioned family demographic characteristics was associated with a higher risk of being a victim of bullying. Bullies (or bully/victims) differ from victims in terms of behavioral patterns. Bullying perpetration (or perpetration-victimization) has been conceptualized as "being proactively aggressive", referring to "… cold blooded and goal-directed bullying behaviors" [36]. Looking at the current results of this study, adolescents from disadvantageous families had a higher risk of being proactive rather than reactive bullies. Further research is needed to explore the different relations between family demographics and proactive or reactive bullying at school. In our study, adolescents from migrant families also had a higher likelihood of reacting negatively to those being bullied (e.g. teasing). According to previous research [37], adolescents with better social skills (such as high empathy and high self-control) are more likely to intervene and provide help when they witness bullying. Therefore, having better social skills might explain the association between family economic status and intervention as a bystander. The results from our study are consistent with previous findings that adolescents from low economic backgrounds have a higher rate of delinquency [35]. Adolescents with highly educated parents are also less likely to experience negative emotions (i.e. fear of school) than their counterparts, indicating that both paternal and maternal education levels could be protective factors against bullying.

Table 3 Odds ratios for bullying involvement. Notes: (1) bullying involvement was divided into four categories, with "not-involved" as the reference category; (2) *p < 0.05, **p < 0.01; ***p < 0.001.
There are several limitations to the current research. First, all participants were from Suzhou, an economically advantaged city in Eastern China. Considering that the main focus is on socio-demographic factors, such sampling methods may render the results prone to selection bias. Second, given the cross-sectional nature of the current research, only correlations between bullying and the demographic factors could be explored. Longitudinal designs should be adopted in future studies to examine causal relationships between the two constructs. Third, the measurements of some bullying-related constructs were simplified for concision. For example, only two forms of bullying (i.e. verbal/physical) were measured in the current research, with other forms, such as cyber-bullying and social isolation, left unexamined. Also, the widely recognized features of school bullying (such as intention and power imbalance) were not considered appropriately in the current research. Participants may have misunderstood the complex concept of bullying; for example, they could possibly have regarded some conflicts without power imbalances (e.g. arguments or fights) as bullying behaviors, which was not the intention of the current research. Future studies should explore how family demographic factors relate to a wide range of different forms of bullying, using a more accurate definition. Considering the complexity of bystanders' behavioral patterns, the current categorization (non-reaction, positively intervening, and negatively intervening) may not capture all possible intervening behaviors in a bullying scenario. Also, among all the possible consequences of being bullied, we only adopted fear of being bullied as the indicator. Although fear of being bullied has been identified as the most direct consequence of school bullying, other long-term consequences (such as depression and maladjustment) should be incorporated in future research. Finally, the current research adopted a non-anonymous survey format during data collection. An anonymous survey format is the preference of most existing studies on bullying, under the assumption that participants may give more truthful answers when personal information is not required. However, previous evidence on how anonymity might influence research validity is mixed. Some researchers propose that an anonymous survey could encourage participants to "exaggerate or make irresponsible responses" [38]. There is also evidence showing that results from anonymous and non-anonymous bullying surveys were not statistically different [39,40]. O'Malley and colleagues [41] found that the assurance of confidentiality (but not anonymity) could be sufficient to obtain good validity, which was the practice in this study. Based on these results, the effects of anonymity could be mixed. Anonymity should be taken into consideration, and multiple methods (i.e. peer nomination, teacher assessment) should be used to evaluate bullying experiences in future studies.
Conclusions
Our research discovered that family household demographic characteristics were related to constructs of adolescents' bullying at school. These risk characteristics can be used to inform practical guidance for school counsellors about which students are at higher risk of becoming involved in bullying and which students are at risk of suffering the negative consequences of being bullied. In particular, although abundant evidence has shown that children with or without siblings behave in many different ways [37], being an only child was not associated with any bullying-related constructs in the current research. Future research efforts are needed to examine this further.

Table 5 Odds ratios for reactions to being bullied (fear of being bullied). Notes: (1) fear of being bullied is coded as an ordinal categorical variable (never, sometimes, usually) with the never group as the reference; (2) *p < 0.05, **p < 0.01; ***p < 0.001. ORs are reported for never versus sometimes and never versus usually.
Bacterial Toxin–Antitoxin Systems: More Than Selfish Entities?
Bacterial toxin–antitoxin (TA) systems are diverse and widespread in the prokaryotic kingdom. They are composed of closely linked genes encoding a stable toxin that can harm the host cell and its cognate labile antitoxin, which protects the host from the toxin's deleterious effect. TA systems are thought to invade bacterial genomes through horizontal gene transfer. Some TA systems might behave as selfish elements and favour their own maintenance at the expense of their host. As a consequence, they may contribute to the maintenance of plasmids or genomic islands, such as super-integrons, by post-segregational killing of the cell that loses these genes and so suffers the stable toxin's destructive effect. The function of the chromosomally encoded TA systems is less clear and still open to debate. This Review discusses current hypotheses regarding the biological roles of these evolutionarily successful small operons. We consider the various selective forces that could drive the maintenance of TA systems in bacterial genomes.
Introduction
Although bacteria have long been known to exchange genetic information through horizontal gene transfer, the impact of this dynamic process on genome evolution was fully appreciated only recently using comparative genomics (reviewed in [1]). Bacterial chromosomes are composed of genes that have quite different evolutionary origins (reviewed in [2]). The set of genes that is preferentially transmitted vertically over long evolutionary time scales composes the core genome. Core genes are relatively well conserved among different monophyletic groups and encode the cellular core functions. These core genes are interspersed with groups of genes that have been acquired from other prokaryotic genomes by horizontal transmission. These genomic islands mostly originate from integration events of mobile genetic elements, such as insertion sequences, transposons, phages, and plasmids. They might, therefore, be found in phylogenetically distant species and are not conserved among different isolates belonging to the same bacterial species. This set of genes constitutes the flexible genome.
Both gene influx and efflux processes are important in shaping bacterial-genome content. A vast majority of horizontally transferred genes are quickly lost after integration [3], although some remain interspersed in the genome (reviewed in [2]). Bacterial toxin-antitoxin (TA) systems appear to be subject to this flux. Indeed, these small gene systems are found on plasmids as well as in chromosomes, and they are thought to be part of the flexible genome [4]. Although their role when located on plasmids is fairly clear, the involvement of their chromosomally encoded counterparts in physiological processes is still open to debate.
Here we discuss current hypotheses regarding the biological roles of chromosomally encoded TA systems and consider the various selective forces that could drive the maintenance of TA systems in bacterial genomes.
Diversity and Abundance of Bacterial TA Systems
Bacterial TA systems are of two different types depending on the nature of the antitoxin; the toxin always being a protein. The antitoxin of type I systems is a small RNA (antisense or adjacent and divergent to the toxin gene) showing complementarity to the toxin mRNA (for recent reviews on type I systems, see [5,6]). Type I antitoxins regulate toxin expression by inhibiting the toxin's translation. The toxins of type I systems are small, hydrophobic proteins that cause damage in bacterial cell membranes. In type II systems, the antitoxin is a small, unstable protein that sequesters the toxin through proteic complex formation (for a recent review on type II systems, see [7]). Much more information is available for type II systems, especially in terms of their biological roles. We will focus on the type II systems and use the term TA systems for brevity.
Type II TA systems are organised in operons, with the upstream gene usually encoding the antitoxin protein. The expression of the two genes is regulated at the level of transcription by the antitoxin-toxin complex. Nine families of toxins have been defined so far based on amino acid sequence homology [4]. Their targets and the cellular processes that are affected by their activities are shown in Table 1.

Table 1. The targets and the types of activities of the nine toxins, as well as the cellular processes that are affected by the expression of the toxins, are shown. This table is adapted from [7] except where indicated. ND, not determined. Notes: (1) The CcdB toxin does not generate double-strand breaks by itself; overexpression of CcdB inhibits the re-ligation step of DNA gyrase, a type II topoisomerase, which leads to the generation of double-strand breaks. (2) Overexpression of RelE induces cleavage of mRNAs at the ribosomal A-site. (3, 4) ParE was shown to poison DNA gyrase and to generate double-strand breaks in vitro. (5) As with CcdB, it induces inhibition of cell division, and it is therefore assumed that it inhibits replication. (6) Overproduction of the Doc toxin activates the relBE TA system and indirectly causes mRNA cleavage [53]. (7) Doc inhibits translation elongation by association with the 30S ribosomal subunit [54]. (8) See [55]; although VapC shows an endoribonucleolytic activity, it has not been reported whether or not VapC is able to inhibit translation. (9) The ζ toxin is part of a three-component TA system (ω-ε-ζ) in which the antitoxin and autoregulation properties are encoded by separate polypeptides. (10) See [56]. (11) At high overexpression levels, the ζ toxin inhibits replication, transcription, and translation, eventually leading to cell death [57]; however, the specific target(s) is (are) unknown. (12) See [34]. (13) See [33]. (14) See [32,33,34]. (15) The genetic organisation of the higBA system is unusual; the toxin gene is upstream of the antitoxin gene in the operon. (16, 17) See [40,58]. doi:10.1371/journal.pgen.1000437.t001
Comprehensive genome analyses have highlighted the diversity in the distribution of TA systems [4,8,9]. Some genomes, such as those of Nitrosomonas europaea, Sinorhizobium meliloti, and Mycobacterium bovis, contain more than 50 putative TA systems. Others contain no or very few (fewer than three) putative TA systems, such as Rickettsia prowazekii, Campylobacter jejuni, or Bacillus subtilis. No correlation could be drawn between the number of TA systems and lifestyle, membership of a phylum, or growth rate (as was proposed [4]) [9]. Another level of diversity in the distribution of TA systems among bacteria is added when comparing the occurrence of TA systems between different isolates of the same species. Table 2 shows the distribution of the nine toxins in seven sequenced Escherichia coli strains.

Table 2. Homologues of the nine toxins were identified by PSI-BLAST [59] in the chromosomes of seven E. coli isolates. Homologues are either present in one copy (+), in two copies (+(2)), or absent (−).
As an example, homologues of the CcdB, MazF, and HipA toxins are frequently represented (in at least five chromosomes), whereas others appear to be absent (Doc and VapC) or present in only one chromosome (RelE and ParE). This implies that these TA systems were integrated into chromosomes through horizontal transfer, most probably in very recent events. The copy number of TA systems within one genome may also vary from one isolate to another. For instance, the MazF and HigB toxins are present in two copies in at least two genomes. Thus, TA systems are part of the flexible genome. They might be located in cryptic prophages, such as relBE in the E. coli K-12 Qin prophage, or constitute genomic islets by themselves, such as ccdO157 [10].
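The kind of presence/absence tally described above can be illustrated with a toy matrix; the strain lists below are hypothetical stand-ins, not the actual distribution shown in Table 2.

```python
# Toy presence/absence records: toxin family -> E. coli isolates carrying a homologue.
presence = {
    "CcdB": ["K-12", "O157", "CFT073", "UTI89", "536"],
    "MazF": ["K-12", "O157", "CFT073", "UTI89", "536"],
    "RelE": ["K-12"],
    "ParE": ["536"],
}

widespread = [t for t, isolates in presence.items() if len(isolates) >= 5]
singletons = [t for t, isolates in presence.items() if len(isolates) == 1]
print(widespread)  # ['CcdB', 'MazF']
print(singletons)  # ['RelE', 'ParE']
```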
TA Systems: Just Selfish Entities?

Table 2 implies that the integrations of TA systems in E. coli chromosomes are recent events, because the distribution of the different TA systems varies from one isolate to another, raising the possibility that chromosomally encoded TA systems might have no physiological function. An attractive possibility is that TA systems act as selfish entities. Toxin and antitoxin genes show a strong interdependence (the functionality of the antitoxin is indispensable for the survival of cells carrying the toxin gene). They are closely linked, and they are capable of moving from one genome to another through horizontal gene transfer, as well as maintaining themselves in bacterial populations even at the expense of their host cell, at least when they are encoded on plasmids. Indeed, their stabilisation properties might be a consequence of their selfish behaviour (see below).
TA Systems: More than Selfish Entities?
Plasmid-Encoded TA Systems and Plasmid Fitness
Natural plasmids are often present in bacteria at very low copy number (one copy per chromosome). They are also able to spread by conjugation or by mobilization with the help of other conjugative plasmids. They thus constitute a substantial proportion of the flexible genome and contribute importantly to bacterial evolution. TA systems increase plasmid prevalence (the number of plasmid-containing cells/total number of cells) in growing bacterial populations by selectively eliminating daughter cells that did not inherit a plasmid copy at cell division [11,12] (Figure 1A). This post-segregational killing mechanism relies on the differential stability of the toxin and antitoxin [13,14]. In daughter bacteria devoid of a plasmid copy, because TA proteins are not replenished, the antitoxin pool rapidly decreases, freeing the stable toxin. These plasmid-free bacteria will eventually be killed by the deleterious activity of the toxin. Plasmid-encoded TA systems are also called addiction modules [15], since this property renders the cell addicted to antitoxin production and therefore to the TA genes. Cooper and Heinemann [16] showed that TA systems might also function in plasmid-plasmid competition, as proposed for restriction-modification systems in the "selfish theory" of Kobayashi and colleagues [17,18]. They showed that plasmid-encoded TA systems allow a conjugative plasmid (PSK+ plasmid) to outcompete a conjugative plasmid belonging to the same incompatibility group (identical replicon) but devoid of the TA system (PSK− plasmid) [16]. Therefore, TA systems increase the relative fitness of their host DNA molecules by eliminating competitor plasmids in the bacterial progeny through post-segregational killing (Figure 1B). Mathematical models demonstrate that the post-segregational killing phenomenon allows the propagation of TA systems in bacterial populations, independently of their original frequencies [19]. This might provide a rational explanation for the evolutionary success of TA systems.
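The population-level argument above, formalized in the mathematical models of [19], can be illustrated with a toy discrete-generation simulation; the 1% segregational loss rate and the simple doubling assumption are arbitrary illustrative choices, not fitted parameters.

```python
def prevalence(generations: int, loss: float = 0.01, psk: bool = True) -> float:
    """Fraction of plasmid-bearing cells after repeated growth and segregational loss."""
    plasmid, free = 1.0, 0.0
    for _ in range(generations):
        plasmid *= 2.0                  # one doubling per generation
        free *= 2.0
        segregants = plasmid * loss     # daughters that failed to inherit a plasmid copy
        plasmid -= segregants
        if not psk:
            free += segregants          # without PSK, segregants survive and compete
    return plasmid / (plasmid + free)

print(prevalence(100, psk=True))   # 1.0   -- segregants are killed, prevalence is maintained
print(prevalence(100, psk=False))  # ~0.37 -- plasmid-free cells steadily accumulate
```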
Chromosomally Encoded TA Systems
Some chromosomally encoded TA systems might be integrated in host regulatory networks and thereby confer a fitness advantage to the bacterial-host cells and/or populations. Several models supporting this view have been proposed.
A secure way to survive: Being integrated in host regulatory networks?

The programmed-cell-death model is based on the study of the chromosomally encoded mazEF TA system of E. coli (reviewed in [20]). mazEF-mediated programmed cell death was observed by Engelberg-Kulka and colleagues under a wide variety of unrelated stressful conditions (e.g., amino-acid starvation, short-term antibiotic treatments, high temperature, and oxidative shock). Stress conditions are thought to affect the production of the mazEF-encoded proteins in a manner dependent on ppGpp, an alarmone synthesised under starvation [21-24], and through a quorum-sensing-like small peptide (extracellular death factor, or EDF) [25]. This particular combination of stress conditions and EDF is thought to shut off mazEF transcription and lead to MazF toxin liberation as a consequence of MazE degradation by the ClpAP ATP-dependent protease. The outcome of this activation has been shown to be fatal for at least 95% of the bacterial population. Altruistic death of a fraction of the bacterial population is proposed to provide nutrients for the siblings. The molecular mechanisms underlying this proposed stochastic activation, as well as those by which killing is achieved, are still unknown. Whether MazF induces cell lysis also remains to be established.
The growth-modulation model is built on data mostly obtained on the E. coli relBE system and to a lesser extent on mazEF and chpB (which encodes a toxin homologous to MazF) [26,27]. This model relies on the primary observation that amino-acid starvation inhibits cell growth without leading to cell death [26], in contrast with the programmed cell-death model. However, growth inhibition was subsequently shown to be independent of the presence of relBE, mazEF, chpB, and two other type II systems [28]. Nevertheless, upon amino-acid starvation, the rate of translation drastically drops in a wild-type E. coli strain and to a lesser extent in a ΔrelBE mutant strain [26]. Gerdes and collaborators therefore proposed that relBE is a stress-response module that functions in quality control of gene expression to regulate the global level of translation, together with the trans-translation ssrA system [29]. Amino acid starvation activates relBE transcription through the Lon-dependent degradation of RelB and in a ppGpp-independent manner. As a consequence, RelE inhibits translation and induces a dormant state until favourable growth conditions return. Data obtained on mazEF and chpB by the group of Gerdes are consistent with the growth-regulator model and disagree with the programmed cell-death model [27], although each model could be true under different circumstances [21].

Figure 1. (A) Vertical transmission. Daughter bacteria that inherit a plasmid copy at cell division grow normally. If daughter bacteria do not inherit a plasmid copy, degradation of the labile antitoxin proteins by the host ATP-dependent proteases will liberate the stable toxin. This will lead to the selective killing of the plasmid-free bacteria (in gray). When considering only vertical transmission, TA systems increase the prevalence of the plasmid in the population as compared with plasmids devoid of TA systems (PSK− plasmid in black, right panel). (B) Horizontal transmission. Plasmid-plasmid competition. The PSK+ plasmid (in purple) and the PSK− plasmid (in black) belong to the same incompatibility group and are conjugative. Under conditions in which conjugation occurs, conjugants containing both plasmids are generated. Because the two plasmids are incompatible, they cannot be maintained in the same bacteria. The ''loss'' of the PSK+ plasmid will lead to the killing of bacteria containing the PSK− plasmid through the PSK mechanism (in gray), thereby outcompeting the PSK− plasmid. On the contrary, the loss of the PSK− plasmid will be without any deleterious effect on the PSK+ plasmid. Through multiple events of conjugation, the fitness of the PSK+ plasmid will be increased (arrow). doi:10.1371/journal.pgen.1000437.g001
The persistence model describes an epigenetic trait that allows a small fraction of bacteria (<10⁻⁶) to enter into a dormant state that renders them able to survive stress conditions, notably antibiotic treatments (reviewed in [30]). A nontoxic mutant of the HipA toxin (hipA7) has been shown to confer high persistence in E. coli [31]. Mutations abolishing the production of the ppGpp alarmone eliminated the high-persistence phenotype, suggesting that hipA7 might induce a high level of ppGpp [31]. Persistence and toxicity might be independent, because the HipA7 mutant seems to be less efficient at inhibiting macromolecule synthesis as compared to the wild-type HipA [32]. However, the protein kinase activity of HipA was shown to be required for persistence and growth arrest [33]. The central elongation factor Tu (EF-Tu) was recently shown to bind and to be phosphorylated by HipA [34]. EF-Tu in its nonphosphorylated form catalyses the binding of aminoacyl-tRNAs to the ribosome. Phosphorylation of EF-Tu by HipA might lead to translation inhibition [34] and therefore to ppGpp synthesis. Single-cell analysis revealed that several TA systems are up-regulated in persister cells [35]. The biological meaning of this observation remains unclear, since the deletion of mazEF and relBE did not impair persister frequency under ofloxacin (a fluoroquinolone) or mitomycin C treatments. However, the ΔhipBA mutant strain was strongly affected (10- to 100-fold), showing that this TA system is involved in persistence [36]. The molecular mechanisms underlying this stochastic phenomenon are unknown.
The development model was proposed recently for fruiting body formation in Myxococcus xanthus. A homologue of the mazF toxin gene (mazF-mx), which is devoid of any mazE antitoxin gene homologue, was identified in the chromosome of M. xanthus [37]. The solitary mazF-mx toxin gene constitutes an interesting example of integration in host regulatory networks. M. xanthus forms multicellular structures called fruiting bodies under nutrient-starvation conditions. During this process, 80% of the population engaged in fruiting-body formation die by lysis; only 20% will develop into myxospores. The mazF-mx gene is integrated in a regulatory cascade controlled by the key developmental regulator MrpC, which presents a dual activity towards mazF-mx: it positively regulates mazF-mx expression at the transcriptional level and it negatively controls its endoribonuclease activity at the post-translational level by acting as its antitoxin. During vegetative growth, MrpC transcriptional activity is controlled negatively by its phosphorylation through a Ser/Thr protein kinase. When M. xanthus engages in fruiting body formation, MrpC transcriptional activity is activated, most likely by a LonD-dependent cleavage. MazF-mx is then produced and cleaves mRNAs, thereby inducing cell death. mazF-mx is essential for fruiting body formation, because a ΔmazF-mx mutant shows a dramatic reduction of myxospore formation.
In the above models, chromosomally encoded TA systems are thought to be integral parts of their host genetic networks. mazEF has been extensively reported as being responsible for programmed cell death, although this observation failed to be reproduced in various labs and is still a subject of debate [26-28]. Nevertheless, TA systems are thought to allow cells and/or populations to cope with stress conditions, and should therefore confer a clear selective advantage under these conditions. Indeed, mazF-mx and hipBA appear to be essential components of host regulatory networks, since their deletion caused a drastic phenotype [36,37]. However, it is less clear for mazEF, relBE, and chpB of E. coli, since no fitness gain could be attributed to their presence either under stress conditions or during post-stress recovery phases [28].
The two following models provide an alternative to the previous ones by illustrating how TA systems can confer selective advantages to their bacterial host without being integrated into regulatory networks.
TA systems in dynamic genome evolution. The stabilisation model proposes that because of their addictive characteristics, chromosomally encoded TA systems could act against large-scale deletion of otherwise dispensable genomic regions [38]. Super-integrons are plastic platforms composed of numerous gene cassettes (more than a hundred in the Vibrio cholerae super-integron) and repeat sequences (reviewed in [39]). Super-integrons encode many functions (e.g., antibiotic resistance). Super-integrons may advantage bacterial populations over long time scales by maintaining nonessential genes and allowing bacterial lineages to better cope with unpredictable changes of environmental conditions. Gene cassettes are excised, integrated, and rearranged by the action of the SI-encoded integrase. They contain in general a single gene devoid of a promoter, except for the TA system-encoding cassettes. In this case, the entire TA operon is present in the cassette and is most likely expressed. Several TA systems from super-integrons belonging to various Vibrionaceae are able to stabilise otherwise unstable plasmids or large genomic regions in E. coli [38,40,41]. Moreover, super-integrons are extremely stable. Attempts to delete the super-integron of V. cholerae have failed, strongly suggesting that TA systems serve to stabilise the super-integron platform and counteract gene efflux (D. Mazel, personal communication).
While it becomes clear that TA systems in such genetic structures or in cryptic prophages such as relBE of Qin [42] have retained their stabilisation properties, the generalisation to more ''classical'' chromosomally encoded TA systems should be taken with caution. Although only a few systems have been tested (E. coli dinJ-yafQ and ccd O157 systems), they appear to be unable to prevent large-scale deletion or to stabilise an otherwise unstable plasmid [10,38]. Wide surveys of stabilisation properties of TA systems from various locations (mobile genetic elements, core, genomic islands, remnants) will test whether a correlation between stabilisation function and localisation exists.
The anti-addiction model proposes that chromosomally encoded systems can selectively advantage their host in post-segregational killing conditions. In theory, chromosomally encoded antitoxins sharing sufficient identity with homologous plasmid-encoded TA systems might act as anti-addiction modules by preventing post-segregational killing (Figure 2). The ccd Ech chromosomally encoded TA system of Erwinia chrysanthemi 3937 was shown to have this property with respect to its E. coli F plasmid-encoded ccd F homolog [43]. In an E. coli strain containing the ccd Ech system inserted in its chromosome (ccd Ech strain), no post-segregational killing was observed upon the loss of a plasmid carrying ccd F . Moreover, competition experiments showed that under post-segregational killing conditions, the ccd Ech strain had a selective advantage compared to the wild-type strain. Therefore, the fitness advantage conferred by the newly acquired anti-addiction module under post-segregational killing conditions might allow its fixation in the bacterial population. In turn, the plasmid-encoded system will lose its addictive character. On the one hand, variants able to evade anti-addiction modules are expected to be selected and out-compete their post-segregational killing-defective relatives. Anti-addiction might thus be one of the evolutionary forces driving selection of the plasmid-encoded TA systems. On the other hand, chromosomally encoded TA systems might lose their anti-addictive properties [10] and decay [44].
Conclusions
There is no doubt that bacterial TA systems are evolutionarily successful entities. Some bacterial genomes harbour several dozen of them [4,9]. Even obligate intracellular species that undergo massive genome reduction contain TA systems [9,45]. There is increasing evidence that these small entities move between genomes through horizontal gene transfer. Their phylogeny is not congruent with the bacterial one [4,46], and their distribution varies greatly between isolates belonging to the same bacterial species ([44,46], Table 2), implying that TA systems are highly mobile. Pandey and Gerdes also reported recently that TA systems are preferentially associated with genomic islands [4]. However, how horizontally acquired TA systems are fixed within the population is not yet understood. One can argue that their addictive ''selfish'' characteristics enable them to be stabilised and refractory to gene efflux. As a consequence, in specific genomic locations such as plasmids or genomic islands, they may contribute to the maintenance of these structures in bacterial populations by post-segregational killing and be subjected to selection. In other genomic locations, such as the core genome where they are not subjected to selection, some TA systems might accumulate mutations that reduce or inactivate their addictive properties simply by genetic drift. Indeed, deletion of both type II [24,26,47] and type I systems [48,49] in E. coli K-12 was possible, at least under the conditions used in these experiments, suggesting that these systems have lost their addictive characteristics. Signs of ''loss of addictive properties'' were detected for several type I and II systems. For instance, the five copies of the type I hok-sok system located in the E. coli K-12 chromosome are inactivated by insertion sequences, point mutations, or large rearrangements [50], and the ccd O157 system appears to undergo a degenerative process within the E. coli species [44]. Similar observations have been reported for restriction-modification systems that share the addiction and apparent mobility characteristics of TA systems [51,52]. Another route for TA system evolution is their integration into host regulatory networks. This is exemplified by the MazF-mx toxin in M. xanthus that has been hijacked by the developmental network controlling fruiting-body formation. The canonical antitoxin has been replaced by a complex cascade of signal transduction proteins involving a Ser/Thr protein kinase and a transcriptional activator/antitoxin protein [37].
Many scenarios might occur depending notably on the bacterial species and the type of toxin. The TA field should avoid generalisation regarding the biological role of these interesting entities. These small modules are highly diverse and ubiquitous. They might have multiple biological roles, if any, that depend on their age, their genomic location, the nature of the toxin, and most likely on many not-yet-discovered factors that influence their evolution. Figure 2. The anti-addiction model. The chromosomally encoded anti-addiction system is represented in black; the PSK+ plasmid in purple. In this model, the antitoxin of the chromosomally encoded TA system is able to counteract the toxin of the plasmid-encoded system. Therefore, daughter bacteria that do not inherit a plasmid copy at cell division will survive post-segregational killing. doi:10.1371/journal.pgen.1000437.g002 | 2016-05-12T22:15:10.714Z | 2009-03-01T00:00:00.000 | {
"year": 2009,
"sha1": "967dc99fa8a47195af5277a3b05867c86039b3e7",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1000437&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "967dc99fa8a47195af5277a3b05867c86039b3e7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
189861124 | pes2o/s2orc | v3-fos-license | A scorpion venom peptide derivative BmKn‒22 with potent antibiofilm activity against Pseudomonas aeruginosa
Pseudomonas aeruginosa is a leading cause of nosocomial and serious life-threatening infections, and infections caused by this bacterium continue to pose a major medical challenge worldwide. The ability of P. aeruginosa to produce multiple virulence factors and, in particular, to form biofilms makes this bacterium resistant to all known antibiotics. As a consequence, standard antibiotic therapies are increasingly ineffective at clearing such infections associated with biofilms. In search of novel effective agents to combat P. aeruginosa biofilm infections, a series of the BmKn‒2 scorpion venom peptide and its truncated derivatives were synthesized and their antibiofilm activities assessed. Among the peptides tested, the BmKn‒22 peptide, a modified derivative of the parental BmKn‒2 scorpion venom peptide, clearly demonstrated the most potent inhibitory activity against P. aeruginosa biofilms without affecting bacterial growth. This peptide was not only capable of inhibiting the formation of P. aeruginosa biofilms, but also of disrupting established biofilms of P. aeruginosa. Additionally, the BmKn‒22 peptide was able to inhibit the production of the key virulence factor pyocyanin of P. aeruginosa. Our results also showed that the BmKn‒22 peptide significantly reduced lasI and rhlR expression, suggesting that BmKn‒22 peptide-mediated inhibition of P. aeruginosa biofilms and virulence factors is achieved through components of the quorum-sensing systems. Combination of the BmKn‒22 peptide with azithromycin resulted in a remarkable reduction in P. aeruginosa biofilms. Since this peptide exhibited low toxicity to mammalian cells, all our results indicate that the BmKn‒22 peptide is a promising antibiofilm agent against P. aeruginosa and warrant further development of this peptide as a novel therapeutic for the treatment of P. aeruginosa‒associated biofilm infections.
Introduction

The amino acid sequence and physico-chemical properties of the studied peptides are presented in Table 1. Their molecular weight, net charge and % hydrophobicity were calculated using APD3: Antimicrobial Peptide Calculator and Predictor [22], whereas helix content and secondary structures were predicted by NPS@: network protein sequence analysis [23]. BmKn-2 peptide and its derivatives were dissolved in their vehicle, dimethyl sulfoxide (DMSO; ≥ 99.5%, Sigma, France), and further diluted in culture medium to obtain the desired concentrations.
Bacterial strain and growth condition
P. aeruginosa PAO1 was obtained from the Spanish Type Culture Collection (CECT, Valencia, Spain). This bacterial strain was frozen and kept at -80°C. Prior to each experiment, two subcultures were prepared on Luria-Bertani (LB) agar (BD Difco™, Le Pont de Claix, France) and incubated under aerobic conditions at 37°C for 24 h. A single colony was then taken and bacterial suspensions were freshly prepared in LB broth for subsequent experiments.
Biofilm susceptibility assay
The effect of BmKn-2 peptide and its derivatives on biofilm formation of P. aeruginosa PAO1 was determined according to the method published previously [24,25] with some modifications. Briefly, 100 μL of the test peptides was added to the wells of flat-bottomed 96-well microtiter plates (Nunc™, Roskilde, Denmark) at final concentrations of 200–800 μM. Aliquots of the P. aeruginosa PAO1 suspension were then inoculated into the wells to obtain a final concentration of 10⁶ CFU/mL. Culture without the test peptides was used as the untreated control. After incubation at 37°C for 24 h without agitation, planktonic cells were removed and the plates were gently rinsed twice with phosphate buffered saline (PBS) pH 7.4. Biofilms were then stained with 0.1% crystal violet solution (Merck, Darmstadt, Germany) for 10 minutes at room temperature. Excess stain was rinsed off with PBS pH 7.4 and the plates were left to dry at 37°C for 2 h. Subsequently, biofilm biomass was solubilized with 30% acetic acid and the optical density (OD) measured at 550 nm using a microplate reader (BioTek Synergy HT).
Antibiofilm activity of the test peptides was also examined against established biofilms as follows. Aliquots of the P. aeruginosa PAO1 suspension adjusted to a final concentration of 10⁶ CFU/mL were seeded into the wells of flat-bottomed 96-well microtiter plates (Nunc). Biofilms were established at 37°C for 24 h. After incubation, nonadherent planktonic cells were removed by gently washing the wells with sterile PBS pH 7.4. Preformed biofilms were then treated with the test peptides at concentrations of 200–800 μM and incubated at 37°C for 24 h. Culture without the test peptides served as the untreated control. After the incubation, the test peptides were aspirated gently and the plates were rinsed twice with PBS pH 7.4. Biofilm biomass was subsequently quantified by 0.1% crystal violet staining as previously described.
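The crystal-violet readout above yields one OD550 value per well, and the percent-inhibition figures reported in the Results are presumably derived by comparing treated wells against the untreated control. The paper does not spell out its formula, so the minimal sketch below assumes the common convention of (OD_control − OD_treated) / OD_control × 100; the function name and all OD readings are invented for illustration.

```python
from statistics import mean

def percent_inhibition(od_treated: list[float], od_control: list[float]) -> float:
    """Percent reduction in biofilm biomass (crystal violet OD550)
    relative to the untreated control."""
    treated, control = mean(od_treated), mean(od_control)
    return (control - treated) / control * 100.0

# Invented OD550 replicate readings for one peptide concentration:
control_wells = [1.82, 1.75, 1.90]   # untreated PAO1 biofilms
treated_wells = [0.95, 1.01, 0.88]   # peptide-treated biofilms
print(f"{percent_inhibition(treated_wells, control_wells):.1f}% inhibition")
```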
Growth assay
The effects of BmKn-2 peptide and its derivatives on the growth of P. aeruginosa PAO1 were assessed as described previously [25] with some modifications. Bacterial suspension (10⁶ CFU/mL) and the test peptides (800 μM, final concentration) were incubated at 37°C with shaking at 150 rpm. Bacterial culture without the test peptides was used as the untreated control. After 24 h, the bacterial concentration was evaluated. Cultures from each treatment were taken and serially diluted 10-fold; 100 μL of each dilution was then spread on LB agar plates. Following incubation at 37°C for 24 h, the colonies were counted and expressed as log₁₀ CFU/mL.
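The plate-count arithmetic behind the log₁₀ CFU/mL figures can be sketched as follows. The worked numbers are invented; the ×10 volume correction simply reflects the 100 μL plated per dilution described above.

```python
import math

def log10_cfu_per_ml(colonies: int, dilution_exponent: int,
                     plated_volume_ml: float = 0.1) -> float:
    """Back-calculate log10 CFU/mL from a plate count: colonies counted on
    the plate of the 10^-dilution_exponent dilution, 100 uL plated."""
    cfu_per_ml = colonies * (10 ** dilution_exponent) / plated_volume_ml
    return math.log10(cfu_per_ml)

# Invented example: 42 colonies counted on the 10^-5 plate.
print(round(log10_cfu_per_ml(42, 5), 2))  # 7.62, i.e. ~4.2e7 CFU/mL
```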
Pyocyanin assay
Pyocyanin pigments produced by P. aeruginosa PAO1 after exposure to the BmKn-22 peptide were determined according to the protocol described previously [26]. Briefly, 750 μL of P. aeruginosa suspension (10⁷ CFU/mL, final concentration) was mixed with 250 μL of LB broth containing the test peptide at final concentrations of 200–800 μM and incubated at 37°C for 24 hours with agitation (150 rpm). Control culture without the test peptide was simultaneously propagated. After centrifugation at 4,500 rpm for 10 minutes, the supernatant was collected and pyocyanin was extracted with chloroform followed by 0.2 N HCl. The suspension was centrifuged once at 4,500 rpm for 10 minutes, and the pink-phase layer was subjected to optical density determination at 380 nm using a microplate reader (BioTek Synergy HT).
Quantitative real-time polymerase chain reaction (qRT-PCR)
The effects of BmKn-22 peptide on the expression of quorum sensing-related genes in P. aeruginosa PAO1 were assessed by qRT-PCR. Briefly, bacterial suspension at approximately 10⁷ CFU/mL was cultured in the presence or absence of the test peptide (800 μM, final concentration) at 37°C for 10 h using a shaking incubator. After centrifugation at 4,500 rpm for 10 minutes, the bacterial cells were harvested and subsequently subjected to total RNA extraction using TRIzol® reagent (Invitrogen, USA) as per the manufacturer's instructions. The concentration of the extracted RNA was measured using a Nanodrop spectrophotometer (NanoDrop Technologies, USA). Thereafter, 1 μg of extracted RNA was reverse-transcribed into cDNA using Random Hexamer primer and the RevertAid First Strand cDNA Synthesis kit (Fermentas), and amplified by real-time PCR. Primers for the genes lasI, lasR, rhlI, rhlR and 16S rRNA were used (Table 2). All primers were synthesized by Integrated DNA Technologies (IDT), Canada. The reaction mixture consisted of 1× AccuPower® 2X GreenStar™ qPCR Master Mix, 0.4 μmol/L each of forward primer and reverse primer and 1 μL of cDNA in a final volume of
Determination of minimum inhibitory concentration (MIC)
The MIC of azithromycin against P. aeruginosa was determined using the broth microdilution assay according to the protocol described previously [29] with some modifications. Twofold serial dilutions of azithromycin (AZM; Sigma, St. Louis, USA) were prepared in LB broth in the wells of flat-bottomed 96-well microtiter plates (Nunc). Aliquots of P. aeruginosa PAO1 suspension at a final concentration of 10⁶ CFU/mL were added to the wells. After incubation at 37°C for 24 h, the MIC value was recorded, defined as the lowest concentration at which no visible growth was observed.
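A minimal sketch of the twofold-dilution bookkeeping follows. The starting concentration and growth pattern are invented, chosen so the read-off MIC of 128 μg/mL is consistent with the 1/2 MIC of 64 μg/mL used in the combination experiments below.

```python
def twofold_series(start_ug_ml: float, n_wells: int) -> list[float]:
    """Concentrations across a twofold serial dilution, highest first."""
    return [start_ug_ml / (2 ** i) for i in range(n_wells)]

def read_mic(concentrations: list[float], visible_growth: list[bool]) -> float:
    """MIC = lowest concentration showing no visible growth."""
    return min(c for c, grew in zip(concentrations, visible_growth) if not grew)

concs = twofold_series(512, 8)   # 512, 256, 128, 64, ... ug/mL (start invented)
growth = [False, False, False, True, True, True, True, True]
print(read_mic(concs, growth))   # 128.0 -> 1/2 MIC = 64 ug/mL, as used below
```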
Antibiofilm activities of BmKn-22 peptide in combination with azithromycin
Briefly, P. aeruginosa PAO1 suspension (10⁶ CFU/mL, final concentration) was added to the wells of flat-bottomed 96-well microtiter plates (Nunc) containing the test peptide and azithromycin (AZM; Sigma), alone or in combination. The final concentrations of the test peptide ranged from 200 to 800 μM, and the concentration of azithromycin was 64 μg/mL (1/2 MIC). Biofilm formation was assessed by measuring biofilm biomass stained with crystal violet, following the protocol described in the previous section.
Hemolytic assay
Hemolytic activity of the test peptides was assayed according to a protocol described previously [30]. A suspension of 2% sheep red blood cells (100 μL) prepared in PBS pH 7.4 was incubated with 100 μL of the test peptides (800 μM, final concentration). After incubation at 37°C for 1 h, the suspension was centrifuged at 1,000 g for 5 min, and 100 μL of the supernatant was transferred to a 96-well microtiter plate (Nunc). Released hemoglobin was then determined by measuring the absorbance at 405 nm using a microplate reader (BioTek Synergy HT). Positive and negative controls in this assay were 1% Triton X-100 and PBS pH 7.4, respectively. Hemolysis (%) was calculated using the equation: Hemolysis (%) = (OD405 of peptide - OD405 of PBS pH 7.4) / (OD405 of 1% Triton X-100 - OD405 of PBS pH 7.4) × 100.
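The hemolysis equation above translates directly into code; the OD405 readings in the example are invented for illustration.

```python
def percent_hemolysis(od_peptide: float, od_pbs: float, od_triton: float) -> float:
    """Hemolysis (%) per the equation above: PBS pH 7.4 is the 0% baseline,
    1% Triton X-100 the 100% lysis control (all ODs read at 405 nm)."""
    return (od_peptide - od_pbs) / (od_triton - od_pbs) * 100.0

# Invented OD405 readings:
print(round(percent_hemolysis(od_peptide=0.18, od_pbs=0.05, od_triton=1.30), 1))  # 10.4
```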
Statistical analysis
The results were obtained from independent experiments as indicated and are expressed as mean ± standard error of the mean (SEM). Differences between test and control were analyzed by two-tailed Student's t-test using SPSS version 20 software (SPSS, Chicago, IL, USA). P < 0.05 was considered statistically significant unless otherwise specified.
Antibiofilm activities of BmKn-2 peptide and its derivatives against P. aeruginosa
BmKn-2 peptide and its derivatives were initially assessed for their inhibitory activities against biofilms formed by P. aeruginosa, and the results are presented in Fig 1. It was found that biofilm biomass of P. aeruginosa was reduced when exposed to the test peptides, as compared with the untreated control. Although individual variability was observed, marked reduction of P. aeruginosa biofilms was obtained with the BmKn-2, BmKn-21, BmKn-22 and BmKn-23 peptides, with significant antibiofilm activity observed for the BmKn-21 and BmKn-22 peptides. Treatment with the BmKn-24, BmKn-25 and BmKn-26 peptides, however, produced less biofilm reduction.
Effect of BmKn-2 peptide and its derivatives on P. aeruginosa growth
To ensure that the observed reduction of P. aeruginosa biofilms by BmKn-2 peptide and its derivatives was not caused by growth inhibitory activity, growth of P. aeruginosa in the presence of the test peptides was quantified in terms of viable cell number. The results in Fig 2 showed a reduction in bacterial counts after incubation with the BmKn-2 and BmKn-21 peptides. However, no significant (P > 0.05) differences in bacterial counts were observed with peptides BmKn-22, BmKn-23, BmKn-24, BmKn-25 and BmKn-26, as compared with the untreated control. These results thus demonstrated that, with the exception of BmKn-2 and BmKn-21, the peptides had no effect on growth of P. aeruginosa, and suggested that the biofilm reduction activities of these peptides were not due to growth inhibition.
Hemolytic activity and cytotoxicity of BmKn-2 peptide and its derivatives
The hemolytic activities of BmKn-2 peptide and its derivatives against sheep red blood cells were determined as an indication of their toxicity towards mammalian cells. As presented in Fig 3A, incubation of red blood cells with the BmKn-2 and BmKn-21 peptides resulted in complete (100%) lysis of the red blood cells. In contrast, little or no hemolysis was observed with the BmKn-22, BmKn-23, BmKn-24, BmKn-25 and BmKn-26 peptides. It is also interesting to note that the % lysis of red blood cells after exposure to the BmKn-22, BmKn-23, BmKn-24, BmKn-25 and BmKn-26 peptides was remarkably reduced as compared with the parental BmKn-2 peptide.
To further examine the toxicity of the test peptides against mammalian cells, the MTT assay was also performed on L929 cells. As shown in Fig 3B, a significant decrease (P < 0.001) in the viability of cells treated with BmKn-2 and BmKn-21 was clearly seen, compared with the untreated control. In contrast, treatment with the BmKn-22, BmKn-23, BmKn-24, BmKn-25 and BmKn-26 peptides had little effect on cell viability, suggesting low toxicity of these peptides.
Dose-dependent inhibitory effects of BmKn-22 and BmKn-23 peptides on biofilm formation and established biofilms of P. aeruginosa
Based on the combined results of antibiofilm activity, bacterial growth and toxicity towards mammalian cells, the BmKn-22 and BmKn-23 peptides were selected for additional assessment of their dose-dependent inhibitory effects on biofilm formation as well as on established biofilms of P. aeruginosa. As presented in Fig 4, the BmKn-22 and BmKn-23 peptides exhibited dose-dependent inhibitory activities against the formation of P. aeruginosa biofilms, with % inhibition ranging from 21.23 to 49.21% and from 32.60 to 54.92%, respectively. The BmKn-22 peptide also showed strong inhibitory activity against 24 h-preformed P. aeruginosa biofilms, and this effect appeared to be dose-related (% inhibition ranging from 23.38 to 44.31%). No eradication activity on preformed P. aeruginosa biofilms was, however, observed with the BmKn-23 peptide.
Effect of BmKn-22 peptide on pyocyanin production of P. aeruginosa
The effect of the BmKn-22 peptide on pyocyanin production of P. aeruginosa is presented in Fig 5. It was found that the BmKn-22 peptide significantly (P < 0.05) decreased the production of pyocyanin by P. aeruginosa, with % inhibition ranging from 39.84 to 52.60%.
Effect of BmKn-22 peptide on mRNA expression of quorum sensingrelated genes in P. aeruginosa
To gain insight into the mechanisms by which the BmKn-22 peptide inhibited biofilms and pyocyanin production of P. aeruginosa, the mRNA expression of the quorum sensing-related genes lasI, lasR, rhlI and rhlR was examined by quantitative real-time PCR. As shown in Fig 6, the BmKn-22 peptide significantly (P < 0.05) decreased the mRNA expression of the lasI and rhlR genes, while no alteration in lasR and rhlI mRNA expression was detected.
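The paper does not state how relative expression was derived from the raw Ct values. A common choice for qRT-PCR data normalized to a reference gene such as 16S rRNA is the Livak 2^(−ΔΔCt) method, sketched below as an assumption with invented Ct values; a fold change below 1 corresponds to the down-regulation of lasI and rhlR reported here.

```python
def fold_change_2_ddct(ct_target_treated: float, ct_ref_treated: float,
                       ct_target_control: float, ct_ref_control: float) -> float:
    """Livak 2^(-ddCt) relative expression, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** (-(d_ct_treated - d_ct_control))

# Invented Ct values: lasI vs 16S rRNA, peptide-treated vs untreated culture.
print(round(fold_change_2_ddct(26.4, 12.1, 24.9, 12.0), 2))  # 0.38 (< 1: down-regulated)
```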
Antibiofilm activity of BmKn-22 peptide in combination with azithromycin
In order to determine a possible enhancement of activity when the BmKn-22 peptide was combined with a commonly used antibiotic, biofilm biomass of P. aeruginosa was examined at different concentrations of BmKn-22 peptide in combination with a sub-MIC (1/2 MIC) dose of azithromycin (64 μg/mL). As seen in Fig 7, biofilm biomass of P. aeruginosa was considerably reduced when the BmKn-22 peptide was combined with azithromycin, compared with peptide or antibiotic alone. Up to 51.39% and 62.05% biofilm reduction was observed when BmKn-22 peptide at 200 and 400 μM, respectively, was combined with 64 μg/mL azithromycin, and increasing the peptide concentration to 800 μM produced a substantial biofilm reduction (96.97%).
Discussion
The emergence of multidrug resistance and the reduced effectiveness of conventional antibiotic therapy, together with rapidly dwindling treatment options for P. aeruginosa-associated biofilm infections, have made the search for new and effective molecules against such bacterial biofilms a priority. Using a series of the BmKn-2 scorpion venom peptide and its derivatives, this study clearly showed that among the peptides tested, the BmKn-22 peptide displayed the most promising inhibitory activity against P. aeruginosa biofilms without affecting bacterial growth. This peptide was not only capable of inhibiting the formation of P. aeruginosa biofilms, but also of disrupting 24 h-preformed biofilms of P. aeruginosa. Our findings thus suggested that the BmKn-22 peptide was effective against both forming and established biofilms of P. aeruginosa. Additionally, the BmKn-22 peptide also exerted inhibitory activity against pyocyanin production of P. aeruginosa. Pyocyanin is a potent virulence factor of P. aeruginosa that has the ability to generate reactive oxygen species by the direct oxidation of the reduced glutathione pool of mammalian cells and the concomitant reduction of oxygen [32], and it is related directly to host damage. Pyocyanin also plays a significant role in promoting P. aeruginosa biofilm development, which occurs via extracellular DNA release through H₂O₂-mediated cell lysis [33]. The ability of the BmKn-22 peptide to inhibit such a potent virulence factor therefore strengthens the powerful antibiofilm activity of this peptide. Since the biofilm inhibitory activity of the BmKn-22 peptide observed in this study was not related to growth inhibition of P. aeruginosa, this peptide may apply milder evolutionary pressure that does not favor the development of troublesome antibiotic resistance [34]. Given that the BmKn-22 peptide exhibited very low toxicity against mammalian cells, our observations indicate the antibiofilm potential of BmKn-22 and warrant further development of this peptide for the treatment of P. aeruginosa-related biofilm infections.
P. aeruginosa employs two major quorum-sensing systems, the lasI/R and rhlI/R systems, to orchestrate the production of virulence factors and to regulate biofilm development [35]. In these systems, lasI and rhlI are involved in autoinducer synthesis, and lasR and rhlR code for transcriptional regulators [36]. When the threshold concentration of the autoinducer acylated homoserine lactones is reached, binding to a transcriptional activator induces target virulence gene expression. Several lines of evidence have also demonstrated that the las system is implicated in the formation and development of biofilm [37], and regulates the expression of the rhl system [38]. Moreover, the las gene has been reported to be expressed in a large number of cells during the initial phase of biofilm development [39], and its expression remained constant throughout the infection [40]. A study carried out by Davies and colleagues [37] reported that a P. aeruginosa wild type formed structured biofilms with large mushroom-shaped structures, while the corresponding lasI quorum-sensing mutant formed flat and undifferentiated biofilms. The flat biofilms formed by the lasI mutant were susceptible to treatment with the detergent sodium dodecyl sulphate (SDS), while the structured biofilms formed by the wild type were tolerant [37]. A similar study by Allesen-Holm and colleagues [41] also found that biofilms formed by the lasIrhlI mutant contained less extracellular DNA than biofilms formed by the wild type, and the mutant biofilms were more susceptible to treatment with SDS than the wild-type biofilm. Extracellular DNA functions as a cell-to-cell interconnecting matrix component in biofilms and is important for biofilm formation and stability. Moreover, the lasI and rhlI mutants of P. aeruginosa greatly reduced transcription of the pel operon, which is essential for the production of a glucose-rich matrix exopolysaccharide [42]. However, chemical complementation of the lasI mutant with 3-oxo-dodecanoyl homoserine lactone restores pel transcription to the wild-type level and biofilm formation ability [42]. In addition, the rhl system is required for maintaining noncolonized channels surrounding macrocolonies in the biofilm architecture [43] and promotes microcolony formation, thereby facilitating the formation of three-dimensional mushroom-shaped structures at a later stage [44]. Our study herein demonstrated that the BmKn-22 peptide significantly reduced lasI and rhlR expression, suggesting that the BmKn-22 peptide-mediated inhibition of P. aeruginosa biofilms and virulence factors is achieved through the key components of quorum-sensing systems. Considering the central role of quorum-sensing systems in regulating biofilm formation and virulence factor production, interference with such significant systems, rather than direct killing by inhibiting growth of bacteria, would produce less selection pressure for the development of the resistance we are currently facing. Interference with quorum sensing has become a promising approach for the development of novel therapies to control infectious diseases, in particular those that are biofilm-associated [45]. In this context, the BmKn-22 peptide would represent a promising molecule for controlling P. aeruginosa-related biofilm infections.
In the present study, a series of BmKn-2 scorpion venom peptides were assessed for their inhibitory activities against P. aeruginosa biofilms. These peptides were generated by sequentially removing amino acids from the C-terminus of the parental BmKn-2 peptide. Our results revealed that while the BmKn-2, BmKn-21, BmKn-22 and BmKn-23 peptides exhibited strong antibiofilm activities, the BmKn-24, BmKn-25 and BmKn-26 peptides showed less pronounced inhibitory activities. Considering the amino acid sequences and biofilm inhibitory activities of these peptides, our findings suggested that "FIGAIARLLS" is the minimum amino acid sequence required for such inhibitory activity. Although substantial reduction in P. aeruginosa biofilms was observed with the BmKn-2 and BmKn-21 peptides, complete hemolytic activity of these peptides was clearly evident. Toxicity of antimicrobial peptides towards higher eukaryotic cells has always been a major barrier that limits their clinical utility [46], thereby preventing their development as future therapeutics. The helicity and net charge of cationic antimicrobial peptides have been described to be directly correlated with hemolytic activity [47,48]. Thus, it is likely that the high percentages of helicity together with the net charges of the BmKn-2 and BmKn-21 peptides contributed to the complete lysis of red blood cells observed in this study. Nevertheless, modification of the parental BmKn-2 peptide by truncation of isoleucine-phenylalanine (IF) and lysine-isoleucine-phenylalanine (KIF) at the C-terminal ends, to obtain the respective BmKn-22 and BmKn-23 peptides, resulted in dramatically decreased hemolysis, implying a possible role of such amino acid residues in hemolytic activity. Strong influences of phenylalanine on hemolytic activity have been reported in several studies [49,50]. When tested for antibiofilm activity, both the BmKn-22 and BmKn-23 peptides displayed inhibition of P. aeruginosa biofilm formation. However, BmKn-22 was the only peptide that exerted inhibitory activity against preformed biofilms of P. aeruginosa. These two peptides differ by a single amino acid residue at the C-terminal end, with BmKn-22 having the more promising activity. In this regard, the presence of a lysine residue (K) in the BmKn-22 peptide, but not in BmKn-23, may contribute to the differences in physico-chemical properties, including net charge, hydrophobicity and helicity, and these parameters would generate the structural basis most favorable for the potent inhibitory activity. In light of our observations, the findings reported here provide valuable evidence for the successful design and development of a potent peptide against P. aeruginosa biofilms.
Combination of antibiotics with different killing mechanisms currently remains the best option for the treatment of biofilm-related infections. However, high doses of antibiotics often lead to significant undesirable side effects for patients, and repeated exposure to antibiotics can give rise to the development of multidrug resistance [51,52]. In the present study, the antibiofilm potential of the BmKn-22 peptide in combination with a sub-MIC dose (1/2 MIC) of azithromycin was assessed. Azithromycin is a known antibiotic for the treatment of P. aeruginosa infections and has been in use for several years [53]. It was found that combination of the BmKn-22 peptide with azithromycin resulted in a dramatic increase in antibiofilm activity against P. aeruginosa. The combination also reduced the dose of peptide required for antibiofilm activity. It is also interesting to note that while significant biofilm inhibition was not evident with the sub-MIC dose (1/2 MIC) of azithromycin alone, remarkable inhibitory activity was obtained when it was combined with the BmKn-22 peptide. The observations from this study suggest that such a combination potentiates the antibiofilm activity of azithromycin against P. aeruginosa, thereby increasing its efficacy against P. aeruginosa-related biofilm infections.
Conclusions
To our knowledge, this study demonstrates for the first time the antibiofilm potential of the modified scorpion venom peptide BmKn-22 against P. aeruginosa. The BmKn-22 peptide was shown to be effective against both forming and preformed biofilms of P. aeruginosa. This peptide was also capable of inhibiting the production of the virulence factor pyocyanin of P. aeruginosa. The inhibitory mechanisms involved the down-regulation of lasI and rhlR, key components of the quorum-sensing systems. Combination of the BmKn-22 peptide with the antibiotic azithromycin led to a remarkable reduction in P. aeruginosa biofilms. Since this peptide exhibited very low toxicity, all our results indicate that the BmKn-22 peptide is a promising candidate for the development of antibiofilm agents against P. aeruginosa-related biofilm infections.
Supporting information S1 File. The primary data underlying our results. (DOCX) | 2019-06-16T13:13:01.410Z | 2019-06-14T00:00:00.000 | {
"year": 2019,
"sha1": "3a723cd349c39571c1728844aad3d59b949b3877",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0218479&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3a723cd349c39571c1728844aad3d59b949b3877",
"s2fieldsofstudy": [
"Medicine",
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
54496099 | pes2o/s2orc | v3-fos-license | A Constructive Lower Bound on Szemer\'edi's Theorem
Let $r_k(n)$ denote the maximum cardinality of a set $A \subset \{1,2, \dots, n \}$ such that $A$ does not contain a $k$-term arithmetic progression. In this paper, we give a method of constructing such a set and prove the lower bound $n^{1-\frac{c_k}{k \ln k}}<r_k(n)$ where $k$ is prime, and $c_k \rightarrow 1$ as $k \rightarrow \infty$. This bound is the best known for an increasingly large interval of $n$ as we choose larger and larger $k$. We also demonstrate that one can prove or disprove a conjecture of Erd\H{o}s on arithmetic progressions in large sets once tight enough bounds on $r_k(n)$ are obtained.
Introduction
In 1927, van der Waerden [14] proved that for all positive integers r and k, there exists a positive integer N such that every r-partitioning of {1, 2, . . . , N} contains a k-term arithmetic progression in one of the parts of the partition. The smallest such integer N is denoted w(r, k). Van der Waerden's theorem has become the cornerstone of what is known today as the study of Ramsey theory on the integers.
In 1936, Erdős and Turán [5] conjectured that every set of integers with a positive natural density contains arbitrarily long arithmetic progressions. An equivalent restatement of the conjecture claims that the natural density of every set that does not contain arbitrarily long arithmetic progressions must converge to zero. Let $r_k(n)$ denote the maximum cardinality of a set $A \subset \{1, 2, \dots, n\}$ such that $A$ does not contain a $k$-term arithmetic progression. Proving that for every positive integer $k$, $\frac{r_k(n)}{n} \to 0$ as $n \to \infty$ would prove the conjecture of Erdős and Turán to be true.
In 1953, Klaus Roth [11] made the first step towards solving this conjecture as he proved the upper bound $r_3(n) < \frac{cn}{\log \log n}$.
This was followed by an upper bound on $r_4(n)$ obtained by Endre Szemerédi [12] in 1969. In 1975, Szemerédi [13] proved the conjecture to be true in general by a clever extension of his proof for the case $k = 4$.
To this day the true growth rate of $r_k(n)$ is still unknown and the problem is still widely studied. Obtaining the true growth rate of $r_k(n)$ has important consequences, as it can be used to prove or disprove a famous conjecture by Erdős [5] on arithmetic progressions. This conjecture states that if the sum of the reciprocals of a subset of the natural numbers diverges, then the set contains arbitrarily long arithmetic progressions. This conjecture has become of great interest in recent years as Ben Green and Terrence Tao [6] proved a special case of this conjecture by showing that the primes contain arbitrarily long arithmetic progressions.
Currently, the best known general upper bound on $r_k(n)$ is due to Gowers [8]. In 2001, Gowers used Fourier analysis and combinatorics to prove that $r_k(n) < \frac{n}{(\log \log n)^{2^{-2^{k+9}}}}$.
The best known general lower bound is given by Kevin O'Bryant, who built upon earlier results by Behrend [1], Rankin [10], and Elkin [4]. In 2011, O'Bryant [9] proved that $r_k(n) \ge \frac{cn}{2^{a 2^{(a-1)/2} \sqrt[a]{\log n}}}$, where $a = \lceil \log k \rceil$. In this paper, we provide a new recursive construction that gives a lower bound on $r_k(n)$. More specifically, in Section 2 we prove that if $k$ is a prime, then $(k-1) \, r_k(n) \le r_k(kn)$. We then use this to obtain the following theorem.
Theorem 2.1. If $n$ is a positive integer and $k$ is a prime, then $n^{1-\frac{c_k}{k \ln k}} < r_k(n)$, where $c_k \to 1$ as $k \to \infty$.
We obtain this bound by modifying a construction by Blankenship, Cummings, and Taranchuk [2] that was used to prove a recursive lower bound on the van der Waerden numbers. In particular, they proved that if $p$ is a prime and $p \le k$, then $w(r, k+1) > p^{r-1} \cdot 2^p$. Our theorem provides the best known bound for $n < c k^{k^{3/2} \log k}$; however, O'Bryant's bound is better as $n \to \infty$.
In recent years, extensive research on the cases $k = 3$ and $k = 4$ has yielded tighter bounds on $r_k(n)$. The case $k = 3$ has been of particular interest, and the bounds on $r_3(n)$ have seen steady, incremental improvements through the years. Currently, the best known bounds are $\frac{n}{2^{\sqrt{8 \log n}}} < r_3(n) < \frac{cn (\log \log n)^4}{\log n}$.
The lower bound is due to O'Bryant [9] and the upper bound was given by Thomas Bloom [3] in 2016. In 2017, Ben Green and Terrence Tao [7] provided the upper bound $r_4(n) < \frac{c_1 n}{(\log n)^{c_2}}$ for absolute constants $c_1$ and $c_2$.
In the following section we prove our main theorem and in Section 4 we discuss a potential method for improving our bound.

1. The acronym k-AP stands for k-term arithmetic progression.
2. Let $a_1, a_2, \dots, a_k$ be an arithmetic progression. Then we call the difference between any two consecutive elements, $a_i - a_{i-1} = d$, the common difference.

Given a set $A \subset \{1, 2, \dots, n\}$ and a prime $k$, define $A_k = \{\, k(a-1) + j : a \in A, \ 1 \le j \le k-1 \,\} \subset \{1, 2, \dots, kn\}$. Note that this method of construction is equivalent to replacing each term in $A$ with a consecutive sequence of length $k$, and then excluding the last element. This excluded element is always a multiple of $k$ and thus $A_k$ contains no elements that are congruent to $0 \pmod{k}$. This definition is the basis for our recursive construction. Our next lemma will prove a key property about $A_k$.
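Before the lemma, a minimal sketch of the construction in code, using the explicit indexing $A_k = \{k(a-1)+j : a \in A,\ 1 \le j \le k-1\}$, which is one concrete reading of the block description above:

```python
def construct_A_k(A: set[int], k: int) -> set[int]:
    """Expand each a in A into the block k(a-1)+1, ..., ka and drop the
    last element ka, the unique multiple of k in the block."""
    return {k * (a - 1) + j for a in A for j in range(1, k)}

# k = 3: A = {1, 2} is 3-AP free; its image avoids every multiple of 3
# and has (k-1)|A| = 4 elements inside {1, ..., kn} = {1, ..., 6}.
print(sorted(construct_A_k({1, 2}, 3)))  # [1, 2, 4, 5]
```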
Lemma 2.1. If $A \subset \{1, 2, \dots, n\}$ is k-AP free and $k$ is a prime, then $A_k$ is k-AP free.

Proof. We split this proof into two cases.
Case 1: Assume there exists a k-AP in $A_k$ such that the common difference of this k-AP is $d$, and $d \equiv 0 \pmod{k}$. This implies that for some positive integer $m$, $d = mk$.
Such an AP would correspond to an AP with common difference $m$ in the original set $A$, which we defined as being k-AP free. This is due to the expansion of each term in $A$ to a block of $k - 1$ elements in $A_k$. Thus, this is a contradiction.
Case 2: Assume there exists a k-AP in $A_k$ such that the common difference of this k-AP is $d$, and $d \not\equiv 0 \pmod{k}$. Recall that since $k$ is a prime, if $i$ is an arbitrary integer such that $i \not\equiv 0 \pmod{k}$, then the multiples $i, 2i, \dots, ki$ are pairwise incongruent modulo $k$. This is important because it implies that the multiples of $i$ must first cycle through all possible congruence classes mod $k$. Similarly, if the common difference $d$ of a k-AP is not congruent to $0 \pmod{k}$, then each element in the k-AP is in a unique congruence class mod $k$. More importantly, such a k-AP must contain an element in every congruence class of $k$. However, note that the definition of $A_k$ excludes elements congruent to $0 \pmod{k}$. Thus, this would also be a contradiction. So $A_k$ is k-AP free.
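The lemma can also be checked by brute force for small parameters. The sketch below starts from the trivially 5-AP-free base set $\{1, 2, 3, 4\}$, applies the construction twice for $k = 5$, and confirms that no 5-term progression appears.

```python
def has_k_ap(S: set[int], k: int) -> bool:
    """Brute-force check for a k-term arithmetic progression inside S."""
    top = max(S)
    for a in S:
        for d in range(1, (top - a) // (k - 1) + 1):
            if all(a + i * d in S for i in range(k)):
                return True
    return False

k = 5
A = set(range(1, k))            # {1, 2, 3, 4}: too small to hold a 5-AP
for _ in range(2):
    A = {k * (a - 1) + j for a in A for j in range(1, k)}
print(len(A), has_k_ap(A, k))   # 64 False -- (k-1)^3 elements, no 5-AP
```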
Corollary 2.1. If $n$ is a positive integer and $k$ is a prime, then $(k-1) \, r_k(n) \le r_k(kn)$.

Theorem 2.1. If $n$ is a positive integer and $k$ is a prime, then $n^{1-\frac{c_k}{k \ln k}} < r_k(n)$, where $c_k \to 1$ as $k \to \infty$.
Proof. It is known that $r_k(k) = k - 1$. By Corollary 2.1 we obtain that $(k-1)^2 \le r_k(k^2)$. Continuing this recursive process of construction, we obtain that for any positive integer $r$, $(k-1)^r \le r_k(k^r)$.
Let $n = k^r$, so $r = \log_k n = \frac{\ln n}{\ln k}$. Define $f_k(n) = (k-1)^r < r_k(n)$. Considering the ratio of $n$ over $f_k(n)$, we obtain that $\frac{n}{f_k(n)} = \left(\frac{k}{k-1}\right)^r = n^{\frac{\ln(k/(k-1))}{\ln k}}$. Thus, we obtain that $f_k(n) = n^{1 - \frac{\ln(k/(k-1))}{\ln k}} = n^{1 - \frac{c_k}{k \ln k}}$, where $c_k = k \ln\frac{k}{k-1} \to 1$ as $k \to \infty$. If $k^{r-1} < n < k^r$, then we can use the construction of size $k^r$ and only consider the elements up to $n$.
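The identity used in this proof, $(k-1)^{\log_k n} = n^{1 - c_k/(k \ln k)}$ with $c_k = k \ln\frac{k}{k-1}$, together with the convergence $c_k \to 1$, can be verified numerically:

```python
import math

def exponent(k: int) -> float:
    """Exponent in f_k(n) = (k-1)^{log_k n} = n^{log_k(k-1)}."""
    return math.log(k - 1) / math.log(k)

for k in (5, 11, 101, 10007):                 # primes
    c_k = k * math.log(k / (k - 1))
    # The identity 1 - c_k/(k ln k) = ln(k-1)/ln k holds exactly:
    assert abs(exponent(k) - (1 - c_k / (k * math.log(k)))) < 1e-12
    print(k, round(c_k, 4))                   # c_k decreases toward 1
```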
By comparing the bound obtained in Theorem 2.1 with O'Bryant's bound, it is easy to check that Theorem 2.1 gives the best known lower bound on $r_k(n)$ for all $n < c k^{k^{3/2} \log k}$.
Erdős's Conjecture and Szemerédi's Theorem
A set of positive integers is called large if the sum of the reciprocals of its elements diverges; otherwise the set is called small. Erdős conjectured that every large set contains arbitrarily long arithmetic progressions. The contrapositive of the conjecture is as follows: if there exists a $k$ such that a set $A$ does not contain a k-AP, then $A$ is small. The only progress made on this conjecture is due to Green and Tao [6], who proved that the primes contain arbitrarily long arithmetic progressions. Although a relationship between Szemerédi's theorem and this conjecture is clear, there has been no explicit connection made. In this section we show an intimate relationship between these ideas.

Lemma 3.1. If $f(n)$, $g(n)$ and $h(n)$ are unbounded monotonically increasing functions with $f(n) < g(n) < h(n)$, then $f^{-1}(n) > g^{-1}(n) > h^{-1}(n)$.

Proof. Assume that $f(n)$, $g(n)$, and $h(n)$ are unbounded monotonically increasing functions with $f(n) < g(n) < h(n)$. Note that this implies that $f^{-1}(n)$, $g^{-1}(n)$, and $h^{-1}(n)$ are also unbounded monotonically increasing functions. It is clear that $f(f^{-1}(n)) = g(g^{-1}(n)) = h(h^{-1}(n)) = n$. However, since $f(n) < g(n) < h(n)$, we obtain that $f(g^{-1}(n)) < g(g^{-1}(n)) = n = f(f^{-1}(n))$, which implies that $g^{-1}(n) < f^{-1}(n)$; the same argument applied to $g$ and $h$ gives $h^{-1}(n) < g^{-1}(n)$.

Using this lemma, we will prove the following proposition, which is the key concept behind proving or disproving Erdős's conjecture.

Proposition 3.1. Let $A \subset \mathbb{N}$ be an infinite set, let $g(n)$ denote the number of elements of $A$ that are at most $n$, and let $f(n)$ and $h(n)$ be lower and upper bounds on $g(n)$, respectively. Then $h^{-1}(a)$ and $f^{-1}(a)$ are lower and upper bounds, respectively, on the $a$-th element of $A$.

Proof. Assume $A \subset \mathbb{N}$ is an infinite set with $f(n)$ and $h(n)$ as lower and upper bounds on $g(n)$, respectively. Define $g(n)$ to be the true growth rate of the cardinality of $A$. Note that $g(n) = a$ implies that $A$ contains approximately $a$ elements up to $n$. We then bring the reader's attention to the fact that $g^{-1}(a) = n$. This implies that the first $a$ elements in $A$ are from the subset $\{1, 2, \dots, n\}$. In other words, $g^{-1}(a) = n$ implies that the $a$-th element is approximately $n$. We now apply Lemma 3.1 to see that $f^{-1}(a) > g^{-1}(a) > h^{-1}(a)$, which implies that $f^{-1}(a)$ is an upper bound on the $a$-th element in $A$, and likewise, $h^{-1}(a)$ is a lower bound on the $a$-th element in $A$.
As an example of this proposition, we give the following. Let $A = \{1, 2, 4, 5, 9, 14, 16, \dots\}$ be an arbitrary subset of $\mathbb{N}$. Then $g(14) = 6$; let $f(14) = 5$ and $h(14) = 7$. Then by Lemma 3.1 we have that $f^{-1}(6) > g^{-1}(6) = 14 > h^{-1}(6)$.

Corollary 3.1. If $f_k(n)$ and $h_k(n)$ are lower and upper bounds on $r_k(n)$ respectively, and $A$ is a set whose density is defined by $r_k(n)$, then $\sum_n \frac{1}{f_k^{-1}(n)}$ and $\sum_n \frac{1}{h_k^{-1}(n)}$ are lower and upper bounds, respectively, on $\sum_n \frac{1}{a_n}$, the sum of the reciprocals of the elements in $A$, where $a_n$ is the $n$-th element in $A$.
Proof. Proposition 3.1 gives us that, since $f_k(n)$ and $h_k(n)$ are lower and upper bounds on $r_k(n)$ respectively, $f_k^{-1}(n)$ and $h_k^{-1}(n)$ are upper and lower bounds on the size of the $n$-th element in a set $A \subset \mathbb{N}$ that does not contain a k-AP. Hence $\sum_n \frac{1}{f_k^{-1}(n)} \le \sum_n \frac{1}{a_n} \le \sum_n \frac{1}{h_k^{-1}(n)}$.
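As a concrete numerical illustration of Proposition 3.1 with invented bounds, take $A$ to be the set of non-multiples of 3; its counting function $g$ satisfies $n/2 < g(n) < n$ for $n \ge 3$, so $f(n) = n/2$ and $h(n) = n$ give $f^{-1}(a) = 2a$ and $h^{-1}(a) = a$ as upper and lower bounds on the $a$-th element.

```python
# A = non-multiples of 3; A[a-1] is the a-th element of A.
A = [m for m in range(1, 100) if m % 3 != 0]

def f_inv(a: int) -> int:   # inverse of f(n) = n/2
    return 2 * a

def h_inv(a: int) -> int:   # inverse of h(n) = n
    return a

for a in (5, 10, 20):
    nth = A[a - 1]
    assert h_inv(a) <= nth <= f_inv(a)
    print(a, h_inv(a), nth, f_inv(a))   # lower bound, a-th element, upper bound
```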
Thus, the key to proving or disproving Erdős's conjecture is in studying and tightening the bounds on $r_k(n)$. At this point some analytic tools will help us identify the growth rate of $r_k(n)$ required to prove or disprove the conjecture.
The following result is not difficult to show using Cauchy's condensation test; Hardy proves it in his textbook Course of Pure Mathematics on page 376. The result states that for a large enough $N \in \mathbb{N}$, the series $\sum_{n=N}^{\infty} \frac{1}{n (\ln n)(\ln \ln n) \cdots (\ln^{(d)} n)^s}$, where $\ln^{(d)} n$ denotes the $d$-fold iterated logarithm, converges for $s > 1$ and diverges otherwise. Each of the following two theorems concerns bounds of the form defined within it; it is clear that both bounds cannot hold simultaneously, thus only one of the theorems holds true in relation to $r_k(n)$. For more detailed results on converging and diverging series as related to densities, we refer the reader to a recent paper by Niculescu and Prǎjiturǎ (cite).

Theorem 3.1. There exist a constant $c > 0$ and a positive integer $d$ such that $f_k(n) = \frac{c \cdot n}{(\ln n)(\ln \ln n) \cdots (\ln^{(d)} n)}$ is a lower bound on $r_k(n)$ for some positive integer $k$ if and only if there exists a large set that does not contain a k-AP.

Proof. Assume that $f_k(n)$ is a lower bound for $r_k(n)$ such that $f_k(n) = \frac{c \cdot n}{(\ln n)(\ln \ln n) \cdots (\ln^{(d)} n)}$. Obtaining an exact $f_k^{-1}(n)$ solely as a function of $n$ is not possible. However, if for some positive integer $k$, there exist an $N \in \mathbb{N}$ and a function $g(n)$ such that $f_k(g(n)) > n$ for all $n > N$, then it is clear that $g(n) > f_k^{-1}(n)$ for all $n > N$. This implies that $g(n)$ is a weaker upper bound on the $n$-th element in a set $A$ whose density is defined by $r_k(n)$. Consider $g(n) = n (\ln n)(\ln \ln n) \cdots (\ln^{(d+1)} n)$.
We can show there exists an $N \in \mathbb{N}$ such that $n < f_k(g(n))$ for all $n > N$ by noting that $g(n) < n^2$ and using this in the denominator of $f_k(g(n))$.
Then note that there exists an absolute constant $C$ such that $f_k(g(n)) \ge \frac{c \cdot g(n)}{(\ln n^2)(\ln \ln n^2) \cdots (\ln^{(d)} n^2)} \ge \frac{c \cdot g(n)}{C (\ln n)(\ln \ln n) \cdots (\ln^{(d)} n)}$.
Recall that we defined $g(n)$ to have $d + 1$ iterations of natural log terms. Thus we obtain that $\frac{c \cdot g(n)}{C (\ln n)(\ln \ln n) \cdots (\ln^{(d)} n)} = \frac{c}{C} \cdot n \cdot \ln^{(d+1)} n$, which exceeds $n$ for all sufficiently large $n$. Since the constant does not grow, we have that there exists an $N \in \mathbb{N}$ such that $f_k(g(n)) > n$ for all $n > N$. This implies that $g(n) > f_k^{-1}(n)$, which further implies that $\frac{1}{g(n)} < \frac{1}{f_k^{-1}(n)}$ for all $n > N$. Thus we obtain that $\sum_{n=N}^{\infty} \frac{1}{g(n)} \le \sum_{n=N}^{\infty} \frac{1}{f_k^{-1}(n)}$.
Relating back to Cauchy's condensation test, we have that $\sum_{n=N}^{\infty} \frac{1}{g(n)}$ diverges. Thus, $\sum_{n=N}^{\infty} \frac{1}{f_k^{-1}(n)}$ diverges, and so for some $k$ there exists a large set $A$ that contains no k-AP. Having concluded the forwards direction of the proof, note that the backwards implication, which assumes that for some $k$ there exists a large set not containing a k-AP, becomes an easy consequence of working back from Cauchy's condensation test. The existence of such an $A$ implies that the growth rate of $r_k(n)$ must be at least that of our defined $f_k(n)$ for some fixed $d$.

Corollary 3.2. If a lower bound on $r_k(n)$ of the form in Theorem 3.1 exists for some $k$, then Erdős's conjecture is false.

Proof. If such a lower bound for $r_k(n)$ exists, then we use Theorem 3.1 to claim that there exists a large set $A$ that does not contain k-APs. This contradicts the contrapositive statement of Erdős's conjecture, since such an $A$ is not small.

Theorem 3.2. There exist a constant $c > 0$, a positive integer $d$, and a real number $s > 1$ such that $h_k(n) = \frac{c \cdot n}{(\ln n)(\ln \ln n) \cdots (\ln^{(d)} n)^s}$ is an upper bound on $r_k(n)$ for every positive integer $k$ if and only if every set that does not contain a k-AP for some $k$ is small.

Proof. As in the proof of Theorem 3.1, it is not possible to find an exact inverse function for $h_k(n)$ in terms of solely $n$. In this case, we want a $g(n)$ such that for all positive integers $k$, there exists an $N \in \mathbb{N}$ such that $h_k(g(n)) < n$ for all $n > N$. This would imply that $g(n) < h_k^{-1}(n)$ for all $n > N$. Consider $g(n) = n (\ln n)(\ln \ln n) \cdots (\ln^{(d)} n)^{s - \epsilon}$ for some $\epsilon > 0$ with $s - \epsilon > 1$. Clearly such an epsilon exists since $s > 1$ and $s \in \mathbb{R}$.
Expanding $g(n)$ as defined gives us $h_k(g(n)) = \frac{c \cdot g(n)}{(\ln g(n))(\ln \ln g(n)) \cdots (\ln^{(d)} g(n))^s}$. Clearly, $\ln g(n) > \ln n$, $\ln \ln g(n) > \ln \ln n$, and so on. Also, since the last factor in the denominator is raised to the power $s$, while the corresponding factor of $g(n)$ carries only the power $s - \epsilon < s$, we know that for all positive integers $k$ there exists an $N \in \mathbb{N}$ for which $h_k(g(n)) < n$ for all $n > N$. Thus $g(n) < h_k^{-1}(n)$ for all $n > N$. This further implies that $\frac{1}{h_k^{-1}(n)} < \frac{1}{g(n)}$. Here we note that $\sum_{n=N}^{\infty} \frac{1}{g(n)} = \sum_{n=N}^{\infty} \frac{1}{n (\ln n)(\ln \ln n) \cdots (\ln^{(d)} n)^{s-\epsilon}}$.
Since $s - \epsilon > 1$, by Cauchy's condensation test we have that $\sum_{n=N}^{\infty} \frac{1}{g(n)}$ converges. Thus $\sum_{n=N}^{\infty} \frac{1}{h_k^{-1}(n)}$ converges, which implies that every set $A$ that does not contain a k-AP is small.
The backwards direction of the double implication also comes from the definitions. If every set $A$ that does not contain a k-AP for some $k$ is small, then this places an automatic upper bound on $r_k(n)$, given that too large a density creates a large set. Working backwards from Cauchy's condensation test, we again see that an upper bound on $r_k(n)$ must be at least that of our defined $h_k(n)$ for some positive integer $d$ and real number $s > 1$.

Corollary 3.3. If an upper bound on $r_k(n)$ of the form in Theorem 3.2 exists for every positive integer $k$, then Erdős's conjecture is true.

Proof. If such an upper bound on $r_k(n)$ exists, then we use Theorem 3.2 to claim that every set that does not contain a k-AP is small, which is an equivalent statement to that of Erdős's conjecture.
Potential Improvements
Using the same general idea, we can consider how using a prime smaller than $k$ affects our construction. It is likely that this would allow us to add elements that would normally be part of the empty blocks between elements in our construction. Although this method would provide a smaller starting bound, it could provide a bound that grows with $n$, which would be an asymptotic improvement over the bound given by Theorem 2.1.
Acknowledgments
I would like to thank Craig Timmons for his insightful comments that helped improve the quality of this paper. | 2017-11-19T20:04:51.000Z | 2017-11-11T00:00:00.000 | {
"year": 2017,
"sha1": "5c6e4ec57bc700180698fb5ee4e341450afb587d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5c6e4ec57bc700180698fb5ee4e341450afb587d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
145822906 | pes2o/s2orc | v3-fos-license | The wicked problem of healthcare student attrition
Abstract The early withdrawal of students from healthcare education programmes, particularly nursing, is an international concern and, despite considerable investment, retention rates have remained stagnant. Here, a regional study of healthcare student retention is used as an example to frame the challenge of student attrition using a concept from policy development, wicked problem theory. This approach allows the consideration of student attrition as a complex problem derived from the interactions of many interrelated factors, avoiding the pitfalls of small‐scale interventions and over‐simplistic assumptions of cause and effect. A conceptual framework is proposed to provide an approach to developing actions to reduce recurrent investment in interventions that have previously proved ineffective at large scale. We discuss how improvements could be achieved through integrated stakeholder involvement and acceptance of the wicked nature of attrition as a complex and ongoing problem.
| INTRODUCTION
Student attrition is a costly challenge for higher education (Beer & Lawson, 2017), and voluntary early withdrawal of midwifery and nursing students is a concerning trend (Hughes, 2013). Research across the higher education sector has identified that a broad range of factors can impact student success, with the most frequently cited barriers being personal issues, financial problems and academic difficulties (Cameron, Roxburgh, Taylor, & Lauder, 2011a, 2011b; Lancia, Petrucci, Giorgi, Dante, & Cifone, 2013; Tinto, 1994; Urwin et al., 2010; Yorke, 2004). While single factors can be mitigated at a local level (e.g., that of the institution or programme), the range of factors impacting student decisions makes it difficult to predict student success and retention.
Thus, identifying and targeting appropriate interventions to support student progression remains a challenge for which there is no easy solution (Jones-Schenk & Harper, 2014).
The high rate of healthcare student attrition has generated a significant body of international research over several decades (see, for instance, the reviews by Merkley, 2015;Mulholland, Anionwu, Atkins, Tappern, & Franks, 2008), underscoring the widespread nature of the problem. While a variety of factors have been identified as contributing to healthcare student attrition, the clearest finding is that the factors are numerous and that they interact with one another in complex ways (Sabin, 2012). Constraints such as local resource allocation and political policy change have added to the challenge of mitigating drivers of healthcare student attrition. Hamshire, Barrett, Langan, Harris, and Wibberley (2017) demonstrated that despite considerable investment in actions designed to address particular issues, such as targeted personal support, campus-based supporting structures and managing expectations and experiences of placements, there is little evidence that these efforts significantly impact overall student retention. There is consensus that addressing the problem requires flexible and inclusive approaches to overcome the lack of success of simple solutions (Harris, Vanderboom, & Hughes, 2009). Educators therefore need to confront the drivers of attrition (Abele, Penprase, & Ternes, 2013) and apply processes that involve the consideration of multiple interacting systems to help to interrogate the evidence and propose solutions. Approaches for addressing student attrition need a shift in thinking, to recognise this complexity and to manage the potential consequence of interventions.
One approach to managing large, complex problems is to consider solutions in the context of the 'wicked problem' framework (Rittel & Webber, 1973), which was specifically proposed to address problems that are difficult to manage. Wicked problems are characterised as dynamic, complex and impossible to solve, with simple solutions addressing only one dimension of the whole (Sherman & Peterson, 2009).
These properties distinguish wicked problems from 'tame' problems, for which a clear problem and workable solutions can be identified (Varpio, Aschenbrener, & Bates, 2017). Wicked-problem approaches to conceptualising complex issues and developing policies have been proposed for many areas of enquiry, for example in public policy-making (McGrandle & Ohemeng, 2017; Sherman & Peterson, 2009) and in environmental conservation (DeFries & Nagendra, 2017). In this paper, the wicked problem approach is applied to the challenge of student attrition in healthcare education, using data from a cross-sectional study.
| THE WICKED PROBLEM FRAMEWORK
For wicked problems, the use of the word 'wicked' is an indication of the complexity, importance and persistence of a problem, rather than an indication of aberrance (Coyne, 2005). Wicked problem solutions have competing and changing requirements, and involve many stakeholders, each with their own values and priorities (Harris et al., 2009). Stakeholder views will typically be shaped by both personal and professional characteristics, influencing how they explore and address causal factors contributing to wicked problems and the validity of solutions (Roberts, 2000). There has been interest in using the framework of wicked problems in diverse policy areas such as educational quality (Jordan, Kleinsasser, & Roe, 2014;Krause, 2012), mental health (Hannigan & Coffey, 2011;Harris et al., 2009) and curriculum design (Hawick, Cleland, & Kitto, 2017). Such approaches conceptualise wicked problem solutions as a process, focussing on continuous problem-solving and evaluation, rather than focussing on short-term outcomes.
The framework for mitigating a wicked problem was laid out by Rittel and Webber (1973), with consideration of problems, solutions and stakeholders. The structure of a wicked problem is, by definition, difficult to identify and may change through time or differ between contexts. It follows that solutions may not have clear outcomes or stopping points. Because of these properties, it is argued that responses to wicked problems should encompass aspects of both process and policy, so that the formulation of the problem is revisited alongside the changing effectiveness of actions taken to alleviate it. Perhaps the most challenging aspect of wicked problems, however, is that different stakeholder segments may agree, disagree or even fail to perceive important aspects of both the problem and its solutions, and that stakeholder relationships can vary in complexity (Alford & Head, 2017).
| THE PROBLEM OF HEALTHCARE STUDENT ATTRITION
The complex blend of stakeholders associated with healthcare education results from students working both within universities and in publicly funded clinical environments. In the United Kingdom (UK), the competencies required to register as a qualified Nurse are stated by the Nursing & Midwifery Council (NMC, 2010). Education is split equally between the University and clinical environment, which is similar to the approach across Europe and other countries such as the United States and Australia (Saarikoski, Marrow, Abreu, Riklikiene, & Özbicakçi, 2007). The high rate of withdrawal from UK Nursing courses is a matter of national interest; about a quarter of those enrolled on Nursing courses do not go on to qualify as nurses (Mulholland et al., 2008).
Hamshire, Willgoss, and Wibberley (2013) explored reasons why UK students considered leaving pre-registration courses in nursing and allied healthcare programmes. Around a thousand online survey responses from students across healthcare courses and year cohorts showed that almost half had considered leaving. Three distinct themes emerged to explain this: dissatisfaction with high academic workload and poor academic support; difficulties associated with clinical placements; and personal concerns and challenges. A large number of students identified a combination of reasons for leaving (within and across themes). Key factors that influenced the decision to continue their studies included support from family and friends, personal determination, interesting and enjoyable placements, and support from staff. The outcomes of this example study led to considerable investment in improving students' experiences, including implementation of peer mentoring schemes and a variety of specialised first-year support courses. However, there appeared to be little change in students' perceptions when the survey was repeated 4 years later with new cohorts on the same courses (Hamshire et al., 2017;Jack et al., 2018). This lack of improvement in students' qualification rates has been noted by others, for example Varpio et al. (2017) who suggested that the challenges for educators in some health professions are so complex that they defy resolution. With the increasing complexity in healthcare systems, changes in policy and practice in one area will inevitably affect the workplace elsewhere, sometimes with unexpected results that appear impossible to undo (Hannigan & Coffey, 2011;Harris et al., 2009).
| THE WICKED PROBLEM OF STUDENT ATTRITION
In terms of terminology, there is a variety of measurements associated with the completion of academic studies, such as 'retention', 'withdrawal', 'timely completion', 'discontinuation', 'non-completion', 'survival', 'graduate completion' and 'student success rate'. In his seminal work on attrition, Tinto focused on first-year withdrawal (Tinto, 1994; Tinto & Goodsell, 1993). Currie et al. (2014) provided numerous descriptions including failing to enrol, enrolling but failing to attend class and so on. In the UK, student retention generally refers to the extent to which learners remain within a higher education institution, completing a programme of study in a pre-determined time period (Jones, 2008). In terms of interpretation of student withdrawal, it is noteworthy that in the overwhelming majority of cases, student attrition has been framed as an institutional failing, and there is a motive to document studies into retention in a manner that contextualises it in the least detrimental way possible to the institution. However, it should be acknowledged that attrition is not always a negative outcome and withdrawal can be a positive choice for individual students (Boyd & Mckendry, 2012). This outcome positions personal needs at odds with the intentions of the education providers, potentially complicating the messages about withdrawal from the student stakeholder position.
Student attrition in general should be considered to be a dynamic problem, with non-linear responses to external influences that vary across place and time. There are several large-scale, multi-institution, longitudinal studies that have documented factors associated with student withdrawal (e.g., Yorke & Longden, 2008), and a number of integrative literature reviews (e.g., Pitt, Powis, Levett-Jones, & Hunter, 2012). However, the majority of research into interventions to alleviate student attrition is founded on small-scale studies at single institutions (Cameron et al., 2011b).

TABLE 1 Rittel and Webber's (1973) properties of a wicked problem (adapted from McGrandle & Ohemeng, 2017, p. 231) in the context of healthcare student attrition

Characteristics of wicked problems: Attrition characteristics

No clear definition of the problem: Multiple stakeholders have differing definitions of the problem. For the HEI, student attrition can be costly due to loss of funding and impacts on reputation. For students, there may be impacts on well-being, but the ability to freely leave an unsuitable programme is not a problem; for healthcare providers, attrition from healthcare programmes is a problem in terms of workforce planning and supply. Various descriptions of the problem are unhelpful, such as withdrawal, discontinuation and non-completion. A tolerable level of attrition is difficult to define.

Never-ending solutions and amendments: Solutions proposed address particular issues such as personal support, placement experiences and academic achievement. All can be beneficial, although they do not address the complexity of the interaction between such individual problems and how this relates to student retention.

No right or wrong evaluation or solution: As definitions of the problem vary, so will the associated evaluations and solutions. It is difficult to achieve the 'right' evaluation if the cause is interrelated and complex.

Framing of the problem affects and limits potential solutions: Tame solutions have been proposed in the literature for isolated aspects of student attrition. However, these do not account for the complexity of the problem.

Pressure on policy-makers: There is intense pressure on policy-makers to find solutions to the problem. The implications of attrition are costly and have far-reaching consequences on multiple levels.
As a consequence, the proposed solutions are generally specific to a particular context and should be treated with a degree of caution, as they may not be transferable across settings. Previous research highlights complex reasons that underpin withdrawal by identifying multiple, interacting factors associated with the probability of leaving. Tinto (1975) argued that student achievement is determined largely by integration into both the social and academic aspects of an institution. That is, the likelihood of whether students will continue is dependent on integration during learning transitions (Tinto & Goodsell, 1993). Table 1 maps Rittel and Webber's (1973) properties of a wicked problem to the characteristics of healthcare student attrition. As described, healthcare student success in tertiary education involves many stakeholders, who need to set reasonable student expectations, ensuring that a career within healthcare is both desired and valued (Fontaine, 2014).
| THE PARTICULARLY WICKED PROBLEM OF HEALTHCARE STUDENT ATTRITION
Throughout their studies, healthcare students constantly undergo a process of transition as they adapt their expectations of both the higher education and the clinical practice environments.
Those students who feel socially and academically integrated in both environments are more likely to persist in their studies (Scanlon, Rowling, & Weber, 2007). The challenge for healthcare educators is to try to identify which factors will affect students' experiences of this transition and respond in a positive and supportive way. The ability of any given student to complete their course is governed by many interacting factors, including the educational experiences of the student prior to enrolling; the social and academic engagement between the student, their peers and the institution; job certainty, national events and the commitment of the student to the institution (Urwin et al., 2010). In accord with the findings of Bryson and Hardy (2012), it is clear that the specificity of each student's personal situation defines their learning experience (Beer & Lawson, 2017).
Definitive results from specific interventions may take up to 5 years to emerge (Cameron et al., 2011b), and despite a significant body of literature in this field, there is limited robust evaluation of specific interventions that are designed to reduce attrition. The complex nature of the student environment, comprising many dynamic extrinsic and intrinsic factors governing their potential success in both academic and professional components of their courses, underpins our assertion that the problem of improving student achievement is wicked in nature.
| KEY DRIVERS OF HEALTHCARE STUDENT WITHDRAWAL - A CROSS-SECTIONAL STUDY
To illustrate the nature of the wicked problem of healthcare student retention, we draw on key outcomes of a repeat regional cross-sectional study that identified factors contributing to attrition in nursing and healthcare students at nine institutions in the North West of England. Briefly, an online survey was made available to all undergraduate students studying on healthcare education programmes (n = ca. 20,000) at nine participating institutions; full details are reported in Hamshire et al. (2013, 2017). Ethical approval was obtained from the Manchester Metropolitan University Research and Ethics Committee. The second survey demonstrated that there had been little change in students' perceptions of their learning experiences despite considerable investment in a range of interventions by both higher education institutions and regional funding bodies (Hamshire et al., 2017). Reasons for considering withdrawal from their courses in the later survey were thematically analysed following the approach of Ritchie and Spencer (2002). Over 42% (735) of the students reported that they had considered leaving their programme, and 712 of these added comments regarding their thoughts about early withdrawal.
Thematic analysis of the student comments related to the 'Have you considered leaving?' question identified three key themes as shown in Table 2, with examples of student comments.
Understanding the complexity of causative factors that impact on an individual student's learning experiences and thus engagement can be challenging (Currie et al., 2014). With a more traditional approach to problem-solving, many of the challenges could appear as peripheral or outside the sphere of influence of the academic manager charged with addressing student retention. While one body is responsible for student funding, another may be responsible for a provider's approach to placement students, and another responsible for timetabling student placements and assignment deadlines.
Evaluation of a funded project tends only to consider intended consequences and may not identify wider problems which the project has generated. For example, a course team may decide to provide early clinical placement experience to motivate students, but without considering the additional workload pressures for clinical staff in supervision of students without underpinning knowledge. This can lead to the somewhat fatalistic analyses described by Hawick et al. (2017); there seems to be no way to 'get off the carousel', when repeated attempts at enhancement seem to reinforce a sociocultural situation that already existed.
In the case of student attrition, the overall intended outcome of proposed enhancements is that the proportion of students progressing successfully to the end of their programme of study increases. If this overall goal is slightly reformulated, for example to say that more individuals are able to progress to the end of their programme of study, we can shift the focus to those individuals rather than thinking about the group as a whole. Whitehead (2017, p. 283) frames this kind of approach as 'a change from putting the curriculum at the centre of attention (in a way that makes it the primary object of focus) to contextualizing the curriculum'. Using a similar approach to that proposed by DeFries and Nagendra (2017), the survey data have been used to propose a framework to inform consideration of wicked problems (Table 3; Figure 1).
If the overall goal is to make it possible for each student to progress, then the data presented here, along with those in many other studies of student attrition, can be used to identify unnecessary barriers to this goal and ensure that both intended and unintended consequences are considered in evaluation. This process involves setting incremental goals and reviewing them regularly; this is captured in the model shown in Figure 1.

TABLE 2 Exemplar student comments for the three identified themes in relation to early withdrawal

Factor influencing attrition: Example data

Concerns due to personal circumstances:
"We have less money to live off than other students over the year when we have an extra 2 months to fund. Watching other people live the university lifestyle while we're supposed to feel like uni students is like rubbing it in our faces"
"It's not a normal sort of degree. In regards to the intensity and hours. I have not been able to participate in sport the way I would have liked. Holidays and paid work in the summer is very limited as we only have a month off"
"I have felt overwhelmed several times on the course, there is very little consideration given for those who have children with little childcare support especially regarding placement hours. When I approached a member of staff about this I was told that was just the way it is, which was very discouraging and unhelpful"

Workload pressure:
"I have considered leaving the course due to the overlap of work placements and assignments. Throughout this academic year, all assignments have had a deadline that is when we are on placement, therefore it has been very difficult to complete assignments to the best of my abilities and I have been extremely tired on placement due to having late nights to complete assignments"
"Stresses of having so much academic work to do whilst being on placement - sometimes feeling like I'm working a full time job for zero pay and getting so stressed out that my own mental health suffered. I was close to quitting when I became ill from all the stress but I'm determined not to waste the rest of my life in rubbish jobs"

Clinical placement culture:
"When you're on a placement and the staff treat you as a healthcare assistant it can really get you down, as staff constantly see you as an extra pair of hands. I honestly feel that student nurses are used to bridge the gap in the staffing shortages on most wards. And this problem needs to be addressed as it affects the student learning experience. And if student nurses try to broach this subject with staff we are often thrown back with the saying that we are to 'posh to wash', which is a complete lie"
"Nursing is fascinating, however the politics that go on in placement, the lack of doors open to be able to progress from nursing into higher positions and the lack of pay and expectations of nurses makes me think every single day I am on placement about leaving the course. The only thing keeping me on is that I have one year left and I would like to try and have a go at progressing from nursing. But it really scares me that I am going to get stuck as a band 5 [entry level job] like everybody else"
"The way that nurses get treated by hospitals and the community from day one (first day as a student nurse) puts me off being in the profession as it doesn't seem to improve over time or as we qualify"
Diverse stakeholders may have different worldviews and competing understandings (Jha & Lexa, 2014), and potential solutions to mitigate the impacts of the wicked problem must be ongoing (Harris et al., 2009) and may be non-linear or problematic. Actions that are successful in one context may have a different impact within another that appears on the surface to look similar, and 'waves of consequence' may be far-reaching and irreversible (Hannigan & Coffey, 2011, p. 221).
| IMPLICATIONS
The three themes from the study presented here highlight the social and cultural complexity of the factors influencing students' experiences. Acknowledging these as representing wickidity, and exploring student attrition using the wicked problem framework, enables us to focus on how these social factors contribute to workplace/placement culture and impact on students' learning and personal circumstances.
Many interrelated factors determine whether an individual student withdraws from a course, and within subject areas, such as healthcare, there is variation between different areas (within and between institutions), both in terms of their retention rates and the composition of their typical learners, with large-scale variations across institutional types and geographical location. The starting point for a wicked problem analysis, in the context of healthcare student attrition, is to consider the three systems which may influence students' experiences: the students' personal system, the university education system and the clinical education system. These are all in turn influenced by both local and national policy and clearly interact and influence each other. From a university perspective, this means re-conceptualising student attrition as a systems problem, avoiding the repetition of modest solutions and considering a more holistic approach to the problem. This reframing allows us to step back from making assumptions about blame, causality and linkages, and move from small-scale, one-shot, simple models to considering attrition as a complex problem derived from the interactions of factors. Without recognition of the 'wicked' nature of the problem, investment in interventions will continue to result in poor results and little change. It probably seems obvious that placement culture needs to be addressed in the workplace, but a multi-system approach could also look at how students are prepared for placement and for coping with potential challenges, and how they report challenges back to university staff. It could also show how placements are integrated explicitly with other aspects of the curriculum, so that students and workplace mentors understand how they work together, and how deadlines may affect students' activities and pressures at different times. A multi-system approach would also anticipate challenging personal circumstances and provide flexible options for responding to these.
The question of evaluating the effectiveness of a wicked problem framework is challenging for many reasons. Notably, it can be difficult to accept the combined effect of the complex and unique nature of any particular wicked problem, and the existence of a no stopping rule, whereby even if significant improvements are made, the problem does not disappear. For example, if an institution experiences low withdrawal in a particular cohort of healthcare students, this does not mean that successful practices in that year are a guarantee of future success with student persistence on the course; the problem should be seen as dynamic and ongoing. Stakeholders need to work together to acknowledge historical issues and the impact and influence of earlier interventions, in order to move beyond previous ineffective investment in the problem of healthcare student attrition.
We have provided a framework to conceptualise healthcare student attrition based on prior research evidence. Using this as a basis for actions with diverse stakeholders should result in contextually effective approaches to the design of healthcare programmes and ultimately reduce the number of students withdrawing from these courses.
ACKNOWLEDGEMENTS
We would like to acknowledge the contribution of the anonymous peer reviewers who gave feedback on the first version of this paper; their comments have greatly improved it. | 2019-05-07T13:03:07.274Z | 2019-05-06T00:00:00.000 | {
"year": 2019,
"sha1": "3a81e3918d1546f6d6eca5caed20b8e036e80152",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/nin.12294",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4fb3e8e2c2bb2b00a13a1d94b5114fb3107b533",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
210136301 | pes2o/s2orc | v3-fos-license | Parents’ experiences of abuse by their adult children with drug problems
Aims: To examine parents’ experiences of abuse directed at them by their adult children with drug problems. Material and Method: The material consists of 32 qualitative interviews on child-to-parent abuse with 24 mothers and eight fathers. The interviewees had experienced verbal abuse (insults), emotional abuse (threats), financial abuse (damage to property and possessions) and physical abuse (physical violence). Findings: In the parents’ narratives, the parent-child interaction is dominated by the child’s destructive drug use, which the parents are trying to stop. This gives rise to conflicts and ambivalence. The parents’ accounts seem to function as explaining and justifying their children’s disruptive behavior in view of the drug use. The fact that an external factor - drugs - is blamed seems to make it easier to repair the parent-child bonds. The parents differentiate between the child who is sober and the child who is under the influence of drugs, that is, between the genuine child and the fake, unreal child. The sober child is a person that the parent likes and makes an effort for. The child who is on drugs is erratic, at times aggressive and self-destructive. Conclusions: The interviewed parents’ well-being is perceived as directly related to how their children’s lives turn out. The single most important factor in improving the parents’ situation is to find a way for their adult child to live their lives without drug problems.
Introduction
The abuse committed by children against their parents has received considerably less attention than child abuse and intimate partner violence. While a growing body of research on parent abuse has emerged in recent years, most of it has focused on adolescent-to-parent violence (Holt, 2013; Routt & Anderson, 2011; Simmons, McEwan, Purcell, & Ogloff, 2018). A category of abuse victims which is seldom discussed is parents of adult children who have problematic use of drugs. These victims are the focus of the present study.
We make use of the concept "child-to-parent abuse" (CPA) to capture the range of abuse and draw on the division laid out by the English criminologist Amanda Holt of four main categories of abuse (Holt, 2013):

Verbal abuse (yelling, screaming at the parent, using insulting names, swearing, criticising the parents' appearance, intelligence and parenting ability)

Financial abuse (damaging property, furniture and possessions, theft of money or possessions, demanding money or goods, incurring debts that parents are responsible for)

Physical abuse (hitting, punching, kicking, slapping, pushing, spitting or throwing objects at parents)

Emotional abuse (intimidating the parent so self-esteem is undermined, attempting to make the parent feel unstable, threats to harm parent or themselves)
The various types of abuse can occur at the same time: the categories thus overlap to a certain extent, especially those of verbal abuse and emotional abuse (Holt, 2013). All forms of abuse have a strong emotional association.
The aim of this article is to highlight parents' narratives on the abuse that adult children perpetrate against their parents in direct interactive conflict situations. We focus on insults (verbal abuse), threats (emotional abuse), violence (physical abuse) and damage to property and possessions (financial abuse). We have also examined and analysed the parents' reactions to and explanations of these incidents. The article reports an analysis of interviews with parents who suffer from drug use-related child-to-parent abuse.
Prevalence of child-to-parent abuse
There are no data available about the prevalence of various forms of child-to-parent abuse (CPA) in Sweden. The Swedish National Council for Crime Prevention (BRÅ) conducts annual nationwide surveys on the extent of crime, but CPA is not included as a distinct crime category. Also, comparable international figures are hard to find, as definitions of CPA differ. In a recent review, estimates of the prevalence of physical, emotional and psychological CPA vary between 33% and 93% in community studies, depending on the definition used. The conclusion is that CPA, whether recognised or not, is a common phenomenon in industrialised countries (Simmons et al., 2018).
Background factors related to child-to-parent abuse
There are no significant differences in rates of perpetration of CPA between males and females (Simmons et al., 2018). While both boys and girls engage in all types of CPA, it is more common for boys to resort to physical violence and for girls to commit emotional abuse (Lyons, Bell, Fréchette, & Romano, 2015). Children who display antisocial behaviour outside of the family are overrepresented among CPA perpetrators (Otto & Douglas, 2011;Simmons et al., 2018).
Children's drug use increases the risk for CPA (Kennair & Mellor, 2007;Simmons et al., 2018), but the existing studies do not make a distinction between different levels of drug use or whether different drugs have different impacts on CPA. It is difficult to ascertain whether there is a causal relation between drug use and CPA, or whether the drug use is a part of an overall pattern of antisocial behaviour (Contreras & Cano, 2015;Ibabe, Arnoso, & Elgorriaga, 2014;Ibabe & Jaureguizar, 2010;Simmons et al., 2018). The risk for CPA increases if the child has a neuropsychiatric disorder, especially ADHD (Contreras & Cano, 2014;Simmons et al., 2018).
A Swedish survey (n = 687) among parents of children with drug problems showed that the child's ongoing drug problems clearly increased the parents' risk of falling victim to property crime. The parents tended to explain these crimes by their children's need for money and being on drugs or by the children's drug problems as a whole (Johnson, Richert, & Svensson, 2018).
According to the survey, the child's life situation and the severity of the drug problems have an impact on the extent to which the parents' lives are negatively affected. Factors that suggest severe problems (daily drug use, current mental problems, repeated treatment episodes) are linked to more negative parental experiences. The parents' problems typically escalate during those phases when the adult children live with their parents; there are more conflicts and greater financial strain (Richert, Johnson, & Svensson, 2017).
Situational antecedents of CPA
There is little research into the situational contexts in which CPA occurs, but general aggression research has found that when aggression occurs in dyads, the behaviour of the other party can trigger aggression, when the behaviour is perceived as hostile, provocative or rejecting (Hamby & Grych, 2013). Verbal aggression between parent and child often precedes physical CPA (Kethineni, 2004;Purcell, Baksheev, & Mullen, 2014;Stewart, Wilkes, Jackson, & Mannix, 2006). Common topics of conflict include child substance use (Pagani et al., 2004;Purcell et al., 2014;Stewart et al., 2006), house rules, lack of respect, money, and denial of privileges (Kethineni, 2004;Jackson, 2003;Purcell et al., 2014;Stewart et al., 2006).
Causes of crime according to the parents
When they seek explanations for their child's aggressiveness and CPA, the parents find themselves treading on tricky terrain, as both lay and scientific discourses often locate causes in the perpetrators' childhood. This risks laying the blame on the parents, Amanda Holt argues (2013, p. 73). According to Holt, the most common explanations advocated by the parents are (1) mental illness and psychological problems, such as being diagnosed with ADHD, (2) drug problems (and the measures taken by the parents to stop the children from using drugs), (3) emulating the behaviour of an abusive father, (4) impact of a separation/divorce, (5) peer influence and (6) gendered power imbalances (which manifest both within and outside of the family and send a powerful message to the child).
Processing one's sense of guilt and shame has a central role in the parents' attempts to live a viable life (Richert et al., 2017). A psychiatric diagnosis, such as that of ADHD, can help the parents process the fact that their child has a drug problem: a diagnosis can explain why these problems have emerged. That the child is diagnosed can secure better support from society, access to school resources, and can also mean that the parents and, to a certain extent, the children are released from feelings of guilt (Clarke, 2015).
Method
We have conducted a total of 32 qualitative interviews; 24 with mothers and eight with fathers. Most (18) interviews were with mothers who had a son with drug problems, while six interviews were conducted with mothers who had a daughter with drug problems. Of the interviews with fathers, seven had a son and one had a stepdaughter. In one case, we interviewed both parents. Two parents had two children with drug problems; the other parents talked about one child.
The parents were aged 46-70 years, and the children were 18-47 years old. The age difference between the interviewed parent and the child was 17-37 years.
Interviewees were recruited through The National Swedish Parents Anti-narcotics Association (FMN), by a call on our project website and via various Facebook groups. The inclusion criterion was that the person was a parent or a stepparent to an adult child with a present or former drug problem. Almost half of the group, 15 persons, are or have been active members in the FMN, which is the predominant Swedish organisation for parents who have children with drug problems. 1 Because the interviewees were mainly recruited through support groups, their experiences and situations may differ from other parents of adult children with drug problems. The problems experienced might have been particularly difficult, which could have led them to seek this kind of support, and our results cannot be generalised to all parents of adult children with drug problems.
The interviewees come from all parts of Sweden. There are more women than men among the interviewees, which reflects the fact that mothers tend to be more actively involved in parent associations and on forums for parents with children who have drug problems.
Two of the parents we interviewed (a woman and a man) said that they had at some point in their lives had substance use problems of their own, one with amphetamine and the other with alcohol. Seven mothers reported that the child's father had had alcohol problems, while one of the mothers said that the father of the child had had problems with cannabis.
The interviews are based on an interview guide with broad topics (experiences of different forms of CPA, the interviewee's social, economic and mental situation, the child's mental health history, use of drugs/alcohol, the child's experiences of treatment, the relation between parent and child over time). Our aim was to give the interviewees an opportunity to elaborate on how their life situations had been shaped by their children's drug use.
The narratives follow a structure in which the events are described in detail, from how they start to their escalation. The parents depict the child's emotional state and degree of intoxication at the time of the conflict. They also portray their emotional reaction at the time as well as their efforts to deal with the situation when it occurs. Each story has an immediate outcome of some sort, which is also described. This is often followed by the parents drawing a conclusion about what to do next. Given that we have conducted a detailed survey with similar questions, it is not primarily quantitative data on "how much?" or "how often?" that we are interested in. We have, rather, focused on the parents' experiences and feelings.
Of the interviews, 15 were conducted face to face, and 17 were telephone interviews. The benefits of telephone interviews are mainly cost-related. Such interviews save time and money on travel, and therefore enable a wider geographical spread among the respondents. Telephone interviews have proved to be useful in qualitative studies on sensitive subjects (Cachia & Millward, 2011;Holt, 2010;Stephens, 2007). Our impression of the interviews, further enhanced by having reviewed the transcripts, is that there are no important differences between the two interviewing modes as regards the interviewees' engagement or willingness to share painful experiences or in terms of the level of detail in the responses. The telephone interviews were on average somewhat longer, 95 minutes as compared to the average length of 87 minutes with face-to-face interviews.
The interviews have been transcribed verbatim. We have examined the narratives through qualitative text-based analysis in three steps (Kvale & Brinkman, 2014). As a first step, we read all the interviews carefully, summing them up under specific codes based on the themes of the interview guide. The encoding was laid out in easy-to-manage tables with a column for each respondent to help us get an overview of the data. In the second step the focus lay on the main questions of this article. We marked illustrative quotations and categorised the passages based on the codes so as to identify patterns in each interview. This made it possible to pick up similarities, differences and nuances in our source material. In the third step, we interpreted the categorisations and quotations based on the theoretical premises of the study. The quotations that appear in this article have been chosen to highlight the research questions and the complexity of the responses.
The project was conducted in accordance with the Swedish Ethical Review Act (SFS 2004:460). The design and execution of the project was reviewed and approved by the Regional Ethical Review Board at Lund University (dnr: 2015/215). The parents and children have been given pseudonyms to protect their identities.
Theoretical premises
Our analysis of the parents' narratives builds on the concept of accounts as introduced by the sociologists Marvin B. Scott and Stanford M. Lyman: an account is "a statement made by a social actor to explain unanticipated or untoward behaviour -whether that behaviour is his own or that of others, and whether the proximate cause for the statement arises from the actor himself or from someone else" (Scott & Lyman, 1968, p. 46). Accounts justify or excuse what has happened. "Justifications are accounts in which a person accepts his/her responsibility for the act in question, but denies the pejorative quality that is associated with it." (ibid., p. 47). When that person accepts that his/her behaviour was out of line but refuses to take full or any responsibility for it, he/she excuses that behaviour. As a part of this, the person may blame his/her intoxication for what has happened. Accounts require an identifiable speaker and an audience (Scott & Lyman, 1968). In an interview situation the interviewer becomes the audience.
Repeated experiences of ending up as a victim of crime in intimate or family relationships often lead to feelings of guilt and self-accusations on the part of the victim (Lindgren, Pettersson, & Hägglund, 2001). Here, we follow Thomas Scheff's (1990, 1997) relational theory of shame, pride and social bonds. Scheff argues that shame is the most basic of emotions, the most dominating of all feelings. A central element of morality, shame also indicates that important social bonds are threatened. Shame plays a central role in regulating the expression of emotions on a general level; according to Scheff, if one feels shame over other emotions, such as anger, guilt and love, one represses them. In order to understand how parents as victims of crime act and react, we need to examine how the parents - at different stages - see their emotional situations, and how their social bonds and relations to the child have evolved.
Results
Because we focus on the parents' experiences rather than those of their children, we will only briefly summarise the kinds of drug problems that the children have. In general, the children have (or have had) a problematic drug use mostly of amphetamine or heroin, but cannabis and benzodiazepines have also been listed as main drugs. Of the 33 children, 14 have been in compulsory care for their drug problems. Compulsory care has been discussed in a further eight cases. This, too, indicates the severity of the children's problems. In 18 cases, the children have been diagnosed with ADHD (12) or show signs of having ADHD (6), according to the parents. 2 The children who have been diagnosed with ADHD are hyperactive and hard to raise. These children are more prone to using drugs than their siblings who do not have this diagnosis. The drug use began when the children were teenagers. The first illegal drug was typically cannabis, but it is possible to see a subsequent link between ADHD and self-medication with amphetamine in several parental narratives.
We will discuss insults, damage to property and possessions, threats and physical violence as perpetrated by children against parents. These concrete actions express and escalate the conflicts between the two parties. The abuse generally takes place face to face and is charged with powerful emotions. The actions can also pile up on one another: for example, threatening behaviour is aggravated by damage to property and possessions, and an argument escalates from verbal abuse to threats and physical assault. As we will show, even one single incident of abusive behaviour can have an important impact on the relationship because it shows the abusive potential of the child. In this article it is the parents' rather than the children's version that is presented.
Verbal abuse: insults and demeaning comments
An insult is defined as something perceived as such by the interviewed parent. The insults commonly take place face to face, but also via telephone or text messaging. 3 Bodil, a single mother, describes the emotionally loaded relationship between herself and her son, an only child. They are close to one another. When on drugs, her son can quickly alternate between friendliness and anger; nuances fade away from his verbal and emotional communication.
It has kept changing back and forth. We can sit down together and talk, and then all of a sudden he can go crazy. He can be really angry and say awful things about me. That my business will never take off and stuff. He is provocative and creates a bad atmosphere. Then he explodes, and when he makes me cry he says that "I'm having a hard time as well."

Her son, now 22, uses cannabis and amphetamine, and blames his mother for him not being well. She explains the conflicts as emerging from her son's drug use. The conflicts take place when he is on drugs or is suffering from withdrawal symptoms, and deal with her attempts to tackle his drug problem. He claims that she is interfering and violating his personal integrity, that she is disloyal to him. Once, Bodil felt so threatened that she called the police, who came and took her son into custody. After this, the son severed ties with his mother and moved in with his father.
In a number of interviews, the mothers raise the point that their sons accuse them of being mentally ill. Inga, whose 22-year-old son has problems with cannabis and anabolic steroids, recounts: He just said "you're schizophrenic, you're out of your mind." But that's his way. I know that's how he defends himself. I let it pass. I don't let it get to me any longer. I used to think it was all true, that there was something wrong with me. I used to soak it up. But that's his . . . I know how unwell he really is.
In this account, the child attacks the parent's personality, instead of focusing on her way of communication. It can be seen as an attempt to undermine the parent's authority, but to the mother the accusation that she was mentally ill appeared as an affront.
Initially, Inga was very hurt by her son's words, but she then developed a counterstrategy and chose not hold him accountable for his actions. She received the help she needed when she began attending the open meetings of the self-help group Narcotics Anonymous and came into contact with people struggling with drug problems. Inga was advised not to take the accusations seriously but to regard them as a result of her son's drug problem.
Bodil and Inga both keep their sons' harsh words at a distance by explaining them as druginduced. In practice, these mothers make a distinction between two persons -one fake and unreal (the intoxicated son) and the other a genuine human being (the sober son) -and the insults are linked to the "fake" son on drugs. The mothers highlight the drug use in explaining their sons' behaviour.
The major role ascribed to the influence of drugs entails that when they meet their children or talk to them on the phone, the interviewed parents regularly assess whether the child is on drugs or not. They have an underlying concern over the risks that the drug use poses to the child's health and social situation. It is difficult to have a positive interaction when the child is under the influence of drugs, according to the parents.
Sylvia told us how her daughter changes when she is on drugs, mainly amphetamine.
But when she also takes drugs she's utterly mean. It's a whole different person. And I can hear it in her voice, or when I talked to her on the phone I could hear that she'd taken something. Or then I was bombarded with text messages, mean and disgusting. She becomes a terrible person.
Tina says that her daughter has offended her on several occasions after using drugs.
She stands here, I have painkillers for my aches, she stands here, yelling at me in front of the neighbours. They must've heard everything from my balcony, and things like "I should keep quiet, I'm a bloody junkie" and stuff. I've never ever taken drugs in my life. I have painkillers prescribed by the doctor for fibromyalgia, that's the only medication I have. I've never touched anything, I don't even drink alcohol. I'm teetotal. But because she has nothing on me, she has to come up with something.
Her daughter's outburst made Tina very unhappy, partly because she felt it was unjustified, partly because it happened so the neighbours could hear it all. And yet, when her daughter has been drug-free, she has said that Tina is "the best mom in the world".
Verbal abuse creates a sense of shame in the recipient, an emotion which threatens the interpersonal social bonds. At the same time, the children's positive comments to their parents, of which there are many examples in the interviews, can induce a feeling of pride and help repair social bonds (Scheff, 1990, 1997).
Emotional abuse: threats
Threats of hurting the parent or oneself are a form of emotional abuse. In 18 of the 32 interviews, the parents talk about the child threatening either parent, which is why we have concentrated on this particular form of emotional abuse.
Cecilia talks about her son's intimidating behaviour, which he has exhibited on several occasions. Underlying this situation is the disagreement between mother and son about what she is supposed to do when her son absconds from compulsory care.
If he, like, came here and fell asleep, when he was on the run, I phoned the police and they came and got him, which was really tough, and it took me a long time to do it, because it was so hard. But he knew this when he came and he could be really intimidating toward me and warn me that "you won't do this again!" but he came here all the same even if he knew that I'd done it many times before. But it could turn really nasty in those situations. I've left the apartment many times as I've felt so badly threatened, even if he hasn't assaulted me. But he destroys things and is very threatening . . . with black eyes, showing that he is the one in charge.
In this account, the son finds his way to the apartment that he shares with his mother. While on the run from compulsory care, he is aware of the risk that his mother might call the police. The son tries to put pressure on his mother in this stressful situation. By intimidating her, he tries to secure himself a chance to stay at home without police involvement. He is typically on drugs on these occasions, making his mother more afraid than when he is sober.
Cecilia has been a member of FMN, where she has been encouraged to call the police if her son comes home while being on the run.
It's so tough when you're a parent, but I didn't want him to be out there either and use drugs, but it took a long time. And I don't know . . . He came home so many times before I . . . in the beginning, I just let him in and . . . he left again. And all the time I begged him to go back.
It took a long while for Cecilia to decide to call the police in such a situation. What finally prompted her was changing her mind about what was best for her son in the long run. She now sees that running away from treatment means that her son will go back to the world of narcotics, which she cannot accept. She calls the police so that her son might be returned onto the right track and get the treatment that he needs.
Many parents told us in the interviews that they would try in different ways to make it harder for their children to use drugs. Several parents said that they would call the police to take action against the child for possession of drugs. This is most often done discreetly, but sometimes the parents are open about getting in touch with the police, which can lead to a conflict. "I'll kill you if you tell the cops!" was what Margita's daughter told her. Margita did not take her words seriously, but rather felt that it was the drugs talking; her daughter was not herself at the time.
Monika told us about a situation when her son behaved menacingly, but said that this was a one-off related to his being under the influence of drugs. She had driven to her son's rental house, afraid of a relapse.
Once when I drove to his house in the summer . . . I knew that he'd been out partying, and I drove there in the morning. He was all junked-up when I got there. And then he, like, came toward me, out there on the porch and he pushed me against the railing. And he was being really mean and he . . . then he threatened me with . . . I think it was a log of wood or maybe it was his fist. /Ummmm/ Then he pushed me against the railing out there on the porch. I was so afraid that time because he was so dazed. And I just knew that I should keep quiet and not say a word. /Ummmm/

Several accounts refer to situations where the child is unbalanced or has a conflict with his/her partner or is upset after receiving bad news from the authorities on an important matter. In the following example, the social services had decided not to let Agnes's son see his child, as his urine sample had tested positive for benzodiazepines. Agnes, who is usually present in the meetings with the authorities, took it upon herself to tell her son the news, because the social worker was too afraid.
So I went home to tell him that he couldn't have D for the weekend. And then he flew into such a rage that he rushed to the kitchen and got a knife. First he threatened to kill himself and then he threatened me, too, with the knife.
He was so hysterical that I couldn't get through to him at all, he was drugged up by benzos, he must've been. A normal person doesn't behave like that . . . I didn't know if I was going to get out of there alive. He pressed the knife against himself and said "I'm gonna kill myself." Then he turned toward me and said "And you can come with me" and pressed the knife on my throat. I just stood there and thought that I can't leave him, I can't. So, either I get him to put the knife down or we both die.
In the end I got him to drop the knife by speaking to him all calm and collected. I didn't show him that I was terrified.
Here, too, the role of the villain belongs to drugs. The mother does not view the dramatic events as her son really wanting to hurt her. Rather, what happened is in her eyes a manifestation of her son's unmet need for help with drug problems. This emerges from the rest of the interview.
In relation to insults, threats directed at the opposite party in a conflict situation represent an escalation. Threats can relate to physical violence or damage to property and possessions, and can be interpreted as more or less based in reality. None of the interviewed parents had reported the threats to the police after the event, but on a couple of occasions the parent (always the mother) had called the police for protection in the actual situation. The parents also blame the drugs in this type of conflict: the drugs are the culprit, and the parents do not end up breaking contact with their children. In three of the four cases outlined above, the child is believed to have ADHD, but the threatening incidents are not blamed on the child's underlying mental condition. The child behaved in a threatening manner because he/she was under the influence of drugs.
Financial abuse: damage to property and possessions
Damage to property and possessions is a form of financial abuse. 4 This means that somebody purposely destroys or damages another person's home, for example, or possessions. The parents reported such damage in 11 of the 32 interviews. In some cases the child has repeatedly caused damage over a long period of time as a result of venting him/herself. Pieces of furniture have been knocked over, paintings torn down, walls have been damaged as a result of major rows. In other cases the damage has occurred on a single occasion.
Yes, we have plenty of damaged items and walls at home. (Britt)

He's never attacked me as such, but he's damaged a lot of things. Yes. Like a whole glass cabinet comes tumbling down, or mirrors and paintings that he just tears down, and the bathroom, bathroom mirrors, he's ripped apart showers, shower railings, anything that's near him. (Cecilia)

These incidents have not been reported to the police. As each individual incident has been relatively minor on its own, the parent has not felt that the situation has been serious enough to merit a call to the police. Nor has the parent felt the need to call the police after the incident, as the damages have only led to minor financial consequences. A concern over the child's potentially destructive behaviour after being reported to the police is also a factor, as is the feeling of shame at the thought of reporting one's own child to the police.
When the damage takes place in front of the parent, the accounts portray it as a part of a longer dynamic process. It is used to underline verbal communication in a row when the child is angry and upset. Bodil recounts pleading with her son not to destroy an item which had great value for her. It was then that he stopped.
There's been many occasions that I've felt threatened when he throws things around or has grabbed me. He's punched at the walls and hit holes in doors. Then he could indicate that it's me he'd like to punch.
But he hasn't in fact damaged my possessions. I told him once when he was threatening to smash something that "Are you going to destroy everything, this is all I have. These came from my grandmother, and now you're going to smash them." He didn't cross that line.
The interview extract shows that the son uses violence against the interior as a substitute object, as a marker that he could just as easily have used the violence against his mother instead. It is a way of making the frightening situation even more menacing. But as Bodil says, her son respects certain boundaries even when enraged. He will not attack his mother, nor does he break truly valuable items.
One of the fathers, Per, told us that his 18-year-old son has smashed things in the family home.
We have a house, where he's caused damage both in the kitchen and the garden, he's smashed the roof of the greenhouse and parasols, and pots and pans and stuff like that, knocked holes through the doors and stuff.
After such incidents, the son has shown remorse, which has made it easier for his parents to forgive him for the damage and to move on. They have linked his anger and the damage caused either to his being under the influence of cannabis or benzodiazepines or suffering from withdrawal symptoms. The mother and the father work together, and the son, with no income of his own, is clearly dependent on them.
There is a further element involved in stepfamilies, the emotional imbalance that may arise from the fact that one of the spouses is not the child's biological parent. The biological parent is more closely attached to the child than the stepparent.
The situation is especially emotional for Doris in the following example, trying to cope with her son's causing damage in the house while seeking to minimise conflicts between her son and his stepfather.

I've been afraid . . . of Jack when he's come home on drugs, he's tall and strong, he yells at you and breaks stuff. I knew that if he broke stuff in the house, the father of my youngest children would get mad, and we'd fight. In that situation I often tried to hide that he'd smashed up things so there wouldn't be a row. He did kick the door in once and . . .
In the parents' narratives, such causing of damage is almost always linked to a face-to-face confrontation. Causing damage is a way for the children to punish their parents and heighten their own sense of outrage. But the parents have found a reason for the behaviour of their children. This behaviour is always coupled with conflicts about the child's drug use. Furthermore, the child is usually on drugs, withdrawing from or craving drugs, when the damage to property occurs, according to the parents' narratives.
Physical abuse: physical violence
Stories of physical violence are rare in the interviews. When physical violence does appear, it is of a less severe kind and generally takes place as isolated incidents, in conjunction with an argument between parent and child. For example, Aina was pushed out of the way when she blocked the doorway and tried to prevent her daughter from going out. There was a car waiting for her, with people inside who were drug users.
Doris shares a story about something that happened 20 years ago. Her son was 17 at the time and was into hash and pills.
He was a high school drop-out, and we argued a lot. But I confronted him once. Both of my sons have turned really angry and aggressive when they've been doing drugs. And he became so mad that he pushed me so hard that I fell on the floor and my glasses got broken.

At the time of the interview, Bodil's son was 24 years old. The concern that her son might hurt her made Bodil in the end insist that he move out. All contact between mother and son ceased almost a year before the interview. The son wants nothing to do with his mother.
Inga describes the situation which led her to ask that her then 19-year-old son move in with his father, where he has stayed for a number of years. The son had crossed the line, and Inga no longer wanted him to live with her. The father had to step in and assume greater responsibility, as she could not accept the son's behaviour toward her. Even though the violence was an isolated incident, Inga considered it abuse that she could not accept.
One of the interviewed fathers told us about a violent domestic incident. It happened when he refused to give his son, still living at home, the key to the basement locker, because he was afraid the son would hide drugs or stolen goods there. In the ensuing row, father and son shoved each other in a scuffle, but it stopped at that.
The interviewees, both the mothers and fathers, talk about a relationship where child-to-parent violence appears to be something of a taboo, a boundary that the children do not cross. Still, isolated acts of minor violence have been committed by children who have been under the influence of drugs or been craving them. The violence has been an escalation of an argument about the child's drug use, their criminal activities or demands for money. Threats of violence, discussed earlier in this article, are much more common.
Discussion
Despite the conflicts, the adult children keep coming back to their parents time and again in many of the discussed cases. The picture emerging from the interviews is that the children are in precarious situations, as they do not have money and sometimes also lack a place of their own. Help from social services is either insufficient, associated with unreasonable demands, or non-existent. When other options have been exhausted, the parental home is a last resort. It is hard for the parents to say no to their child. Even if the parent might be short of money, they usually have more than the child. All the interviewed parents also had their own place to live. As many of the children are homeless, a parent's offer of a bed in their home can keep the child from walking the streets at night or sleeping in a doorway. But the offer comes at a price: the children will face warnings and criticism about their drug use, which leads to new conflicts.
In the parents' narratives, the interaction with the child deals to a great extent with the child's destructive drug use. This gives rise to more or less severe conflicts.
An Australian research group has illustrated the emergence of intimate partner violence through a chain of events with a background of historical preconditions (history of violence, relationship breakdown, stressors) which is also influenced by "situational preconditions" (intoxication, heightened emotions, prior acts of violence). The course of events consists of (1) contact made with the victim, (2) conflict, (3) tipping point, (4) violence against victim, (5) de-escalation of violence, and, (6) end of contact with victim (Boxall, Boyd, Dowling, & Morgan, 2018). We found a similar course of events in our interviews also when the conflicts involved insults, threats, damage to property and possessions, and violence.
Background of events
While the narratives portray powerful emotional closeness to and love for the child, they also speak of underlying conflicts which have an impact on the relationship between parent and child. Close relationships of this kind are characterised by what Hamby and Grych describe as a "high level of emotional investment and interdependence." When conflicts, frustration and perceptions of criticism and rejection occur between parents and children, strong emotions are awakened that can lead to aggressive behaviour (Hamby & Grych, 2013, p. 33).
The parents support their children financially, help them in their dealings with the authorities, give them a place to stay and show in words and deeds that they care about their children. But there is a master conflict between them that has to do with the children's drug use -which the parents are hostile toward and worried about. The parents have various control strategies to determine whether or not the child is on drugs, making an assessment of this in face-to-face meetings, keeping their ears open to any changes in the child's voice on the telephone and paying the child a house call to see for themselves what the drug situation is. Some parents try to get the social services to act and have the child sent to compulsory care. We have also talked to several parents who have called the police to get them to take action against the child for possession of drugs.
According to the interviews with parents, the children respond most clearly to police intervention and also to the parents' attempts to restrict their freedom of action and agency. Included in this is the assertion by the child that he/she has the right to go on using drugs. Self-medication is often raised as a central argument here. By ignoring the parents' warnings and requests, the children are able to show their power and independence. This minefield, a struggle for power, gives rise to various forms of conflict and confrontation between children and parents.
Specific incidents
The two parties disagree about something that, according to the parents, is of importance for both. The result is a row between the child and one or both of the parents. They can fight about, for example, money, current or historical parenting conflicts, the parents' views on the child's plans or about broken promises. The common denominator, however, is most often the child's drug use. In an interview study with parents whose children used heroin, lack of trust in the drug user was the issue raised most often in the interviews, as the parents have been "lied to, deceived and stolen from" (Butler & Bauld, 2005).
According to our interviews, insults often fly in both directions in a loaded situation, and the emotional temperature rises. What started as an exchange of views escalates to a row. Sesha Kethineni, who has examined court cases of youth-on-parent violence, found that "verbal arguments were a common step in the continuum that resulted in violence or threat of violence by their children" (Kethineni, 2004, p. 387). How the conflict is dealt with by the parties depends on, among other things, their state of mind. The child's state of mind may be linked to his/her being on drugs, with various effects, but also to withdrawal symptoms and craving for drugs, as the parents' accounts suggest. Tiredness, weariness, stress and many other factors can affect both parties' frame of mind. Leonard Berkowitz (1993) has suggested that any negative affect (anxiety, irritability, low mood) can serve as a motivator of aggression. When a person is under the influence of alcohol or drugs, the risk of aggressive acts increases, as behavioural inhibition is reduced and judgment impaired (Hamby & Grych, 2013).
Escalation of rows or confrontations
At a certain stage of an argument, one party crosses a line, and the dispute turns abusive. Because we have focused on child-to-parent abuse, the accounts emphasise the role of the child as having crossed that line, but it is entirely possible that the parent has also breached the child's boundaries between civil and abusive behaviour. The escalation can also entail that the child causes damage to things, doors, windows or walls. The parent is unable, too afraid, or lacking the time needed to prevent the damage, or is not present when the damage is caused. The row can also lead to physical violence against the parent by the child. In the parents' accounts, physical violence is the most severe measure that the child can subject parents to. If this line is crossed, there is a risk that the social bonds between child and parents are broken, at least temporarily.
Boxall and colleagues discuss the "tipping point" as the moment which in many cases of domestic violence signifies that the conflict crosses a border and reaches a new stage (Boxall et al., 2018). This "tipping point" is identifiable in many of our accounts, although it is rare for an argument to transition to physical violence. A preventing factor is the "moral belief system"; the standards, norms, values and morals, which are associated with children's relationships with their parents and which include the child's reflected appraisals and notions of fairness (Walters, 2015). We can appreciate that violence also occupies a special place in the parents' moral belief systems: some of the interviewed mothers had demanded that their children move out after the child had perpetrated violence against them.
End of conflict
Sooner or later the confrontation will end. Perhaps the child or the parent leaves the scene, or maybe an outsider intervenes or somebody calls the police. The child and the parent may be too exhausted to go on arguing. They need to calm down, to descend from the emotional heights to a more manageable level and let the argument be. In Boxall's study, which is based on police reports on domestic violence, the violence stops when the police arrive or when the victim calls the police and the perpetrator chooses to leave the scene (Boxall et al., 2018). In our study, the parents rarely called the police.
How the parents process and explain the events
Once the emergency situation is over, parents start to process what has happened. Judging by the interviews, the processing begins fairly soon after the actual events. The parents need to decide what to do if the incident has been grave enough to be reported to the police. The parents may insist that the child move out or not visit them again. A recurring phenomenon in our study is the decisive role ascribed to the child's being on drugs as explaining why the situation got out of hand. The drugs are already being blamed in the post-event processing phase, and their role is further rehearsed in the interviews where the parents are encouraged to reflect on why the conflict arose. The child's guilt, and thereby also the guilt experienced by the parents, can be decreased if the confrontation is blamed on the child's being under the influence of drugs. Notably, the child's ADHD diagnosis or other psychiatric diagnoses, if any, are not suggested as causing the confrontation in any of the accounts. But where they have existed, the child's psychiatric problems have had an effect on the parental role in other ways, and according to the parents have contributed to the fact that the child started using drugs.
The interviewed parents' narratives can be seen as accounts (Scott & Lyman, 1968), which explain and justify the children's disruptive behaviour by their having been on drugs. When the blame is put on an external phenomenon, drugs, the parents will find it easier to repair the social bonds (Scheff, 1990, 1997) with the child. The parent differentiates between the child who is sober and the child who is under the influence of drugs; that is, between the genuine child and the fake child. The sober child is a person that the parents like and make an effort for. The child who is on drugs is erratic, at times aggressive and self-destructive. The ADHD diagnosis, which many of these children have, is a part of the genuine child. It affects the child's personality and behaviour positively and negatively but is not a reason to reject him/her. The parents' accounts, or explanations, make it understandable that the abuse committed by the child is linked to drugs, which exist outside of the child, rather than being associated with the ADHD diagnosis which is a part of the child.
By distinguishing between the genuine person and the fake person on drugs, the parents are able to decrease the child's culpability for his/her bad behaviour, to the interviewer, other outsiders and also in their own eyes. The child is really a good person, but turns into somebody else through the noxious influence of drugs. This act of excusing can be used to diminish the parents' feelings of guilt for the child's inappropriate behaviour and to weaken their sense of shame in the eyes of the outside world, here represented by the researcher (Scheff, 1990, 1997). The parent uses an account as an excuse.
Long-term parental coping strategies
The children are considered as irresponsible and sick, as incapable of making decisions about their own lives because of the power that drugs have over them. The parents therefore step in to try and influence the children's choices by any available means. Many have found their way to parents' associations for guidance on what to do. The largest such organisation in Sweden, FMN, seeks to support the parents' parental role by encouraging them to set limits on their children's demands. A few parents told us that their children are no longer in touch with them, because they have found the parents much too assertive. Orford and colleagues have constructed a typology of coping positions, with three poles classified as "engaged" (try to change the user's behaviour by confronting it and being emotional and controlling), "tolerant" (to be more or less inactive, accepting and supportive), and "withdrawn" (to withdraw from interaction with the user) (Orford et al., 1998). These positions are based upon the ways in which relatives respond to the stresses that they feel, arising from the drug problems of the family member. Parents alternate between these poles in their relations to the child. Our interviewees are mostly found in the "engaged" group, but some end up among the "withdrawn", if the child terminates the relationship. In the interviews, none of "our" parents were "tolerant" of their child's drug use.

Debra Jackson notes that violence between mothers and children "occurs within a context of intensely intimate and longstanding emotional, familial and caring bonds" (2003, p. 327). In her study, breaking the relationship is not an option. Rather, the mothers are committed to restoring and retaining loving, positive relationships with their children (Jackson, 2003). We found a similar approach among the interviewed parents. Even when the bonds with the child are severed after serious conflicts, the parents still hope that the contact will be rebuilt.
Conclusions
All of our interviewed parents have or have had adult children with drug problems. The drug use has been so constant and extensive that the parents have experienced it as a threat to the children's health and their ability to take care of such social responsibilities as studies, work, child maintenance and contact with the family. The parents have in vain tried to convince the child to abandon drugs. The child-to-parent abuse occurs in a specific emotionally loaded situation, typically preceded by a verbal argument of some kind. During this verbal argument, the child grows ever more agitated and finally crosses the line between an ordinary argument and abusive action against the parent. The interviews portray insults (verbal abuse) and threats (emotional/psychological abuse) as recurring elements in the interaction with the adult child, whereas causing damage (financial abuse) and, in particular, violence (physical abuse) are less common. Such incidents are in retrospect excused by the fact that the child was on drugs. The parents' accounts make a clear distinction between the child as under the influence of drugs and the child as sober. On drugs, the child is self-destructive, unreliable and irritable, a fake human being, whereas the sober child is a genuine person that the parents love and are ready to make great sacrifices for.
The interviews convey a consistent message: the parents are torn by the abuse and crimes committed by the children, but they can cope by explaining the abusive events by the impact of drugs. What remains after all the harsh words, damage to property and possessions, and the violence is the concern over the risks related to the child's drug use. It overshadows much of the parents' existence. The single most important intervention in improving the parents' situations is for the adult children to find a way to live their lives without drug problems. The parents' well-being is directly related to how their children's lives turn out.
Parents who are subjected to abuse by their adult children with drug problems are crime victims who have received inadequate help and attention. It is important that the authorities better identify this group and do what they can to offer them help and support.
Limitations
The conflicts between parents and children that we have outlined are portrayed by one party, that is, the parents. They have not been mirrored by the children's accounts of the events. Rather, the focus lies on the parents' experiences of the situations which have been retrospectively recounted to an outsider who lacks any personal knowledge about what has happened.
Overrepresented among the interviewees are women, persons born in Sweden and persons with no history of drug problems of their own. This reduces the possibilities to generalise to other constellations of parents and children. Almost half of the parents were members of FMN, the leading Swedish parent support organisation, which might have influenced their understanding and conceptualisation of the conflicts with their children.
ORCID iD
Bengt Svensson https://orcid.org/0000-0002-8248-8825

Notes

1. Established in 1968, the National Swedish Parents Anti-narcotics Association (Föräldraföreningen mot narkotika, FMN) now has about 25 local branches around Sweden. According to the FMN website (www.fmn.se), the association's main goals are to provide advice, support and assistance to families where drug abuse occurs. The parents are advised to set clear limits toward their child with drug problems, for example, by not helping the child with money or letting the child stay at home during periods of ongoing drug abuse. This is to protect parents from severe consequences and not to allow parents to facilitate the child's continued drug abuse.
2. For example, a neuropsychiatric examination has been planned but has not been conducted, because the child has not cooperated or because the procedure has been halted.
3. It is impossible to objectively determine where the boundaries lie between criticism, accusations and expressions that cross the line into insults (verbal abuse). Language use is individual in that each of us chooses which style and phrasing to use, but these are also influenced by our surroundings. What appears as a grave insult to an outsider can be seen in a more benevolent light by those involved in an argument, and vice versa.
4. We will discuss thefts in another article. | 2019-11-14T17:08:58.489Z | 2019-11-11T00:00:00.000 | {
"year": 2019,
"sha1": "141a771f33df4dea854eb2faf7f97ff079b8117a",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1455072519883464",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d26942e8259117b4715c7c5a17f661327afdd355",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
251478309 | pes2o/s2orc | v3-fos-license | Swin Transformer Improves the IDH Mutation Status Prediction of Gliomas Free of MRI-Based Tumor Segmentation
Background: Deep learning (DL) can predict isocitrate dehydrogenase (IDH) mutation status from MRIs. Yet, previous work focused on CNNs with refined tumor segmentation. To bridge the gap, this study aimed to evaluate the feasibility of developing a Transformer-based network to predict the IDH mutation status free of refined tumor segmentation. Methods: A total of 493 glioma patients were recruited from two independent institutions for model development (TCIA; N = 259) and external test (AHXZ; N = 234). IDH mutation status was predicted directly from T2 images with a Swin Transformer and a conventional ResNet. Furthermore, to investigate the necessity of refined tumor segmentation, seven strategies for the model input image were explored: (i) whole tumor slice; (ii–iii) tumor mask with and without edema; (iv–vii) tumor bounding box of 0.8, 1.0, 1.2, 1.5 times. Performance was compared among the networks of different architectures and different image input strategies, using the area under the curve (AUC) and accuracy (ACC). Finally, to further boost the performance, a hybrid model was built by combining the images with clinical features. Results: With the seven proposed input strategies, seven Swin Transformer models and seven ResNet models were built, respectively. Based on the seven Swin Transformer models, an averaged AUC of 0.965 (internal test) and 0.842 (external test) was achieved, outperforming the 0.922 and 0.805 obtained by the seven ResNet models, respectively. When a bounding box of 1.0 times was used, the Swin Transformer (AUC = 0.868, ACC = 80.7%) outperformed the model that used refined tumor segmentation (Tumor + Edema; AUC = 0.862, ACC = 78.5%). The hybrid model that integrated age and location features with the images yielded improved performance (AUC = 0.878, ACC = 82.0%) over the model that used images only. Conclusions: The Swin Transformer outperforms the CNN-based ResNet in IDH prediction. Using bounding box input images benefits the DL networks in IDH prediction and makes IDH prediction free of refined glioma segmentation feasible.
Introduction
Glioma is one of the most refractory cancers with a wide range of prognosis, showing a median survival of 14 months for glioblastomas (grade IV) [1] and of more than 7 years for lower grade gliomas (grades II and III) [2]. To evaluate the prognosis and guide individualized treatment, genetic mutation, especially the isocitrate dehydrogenase (IDH) mutation status, is recommended as the most important marker for glioma diagnostic decisions [3]. The new 2021 WHO guidelines even recommend that the first diagnostic delineation rely on IDH mutation [4]. Clinical studies have found that lower grade gliomas with wild-type IDH were similar to glioblastomas in terms of prognosis [5]. At present, IDH mutation status can only be definitively identified using immunohistochemistry (IHC) or gene sequencing on a tissue specimen, acquired through biopsy or surgical resection. However, three problems hinder broad and accurate access to IDH mutation identification: the inaccessibility of biopsy or resection before the treatment decision, the unavailability of tumor resection, and the sampling bias of the biopsy tissue [6]. Moreover, IDH mutation status is not static during cancer progression and/or therapy stages. In other words, pathological examinations may become outdated over time, and dynamic monitoring is urgently needed. Therefore, a highly efficient, noninvasive, and instant approach for preoperative IDH mutation status prediction is in high demand.
Magnetic Resonance Imaging (MRI) plays a leading role in the non-invasive glioma diagnosis and treatment planning. Vast efforts have been devoted to invasively and preoperatively determine the IDH mutation status from MRI radiographic features [7][8][9][10]. Specifically, indistinct margins and T2-FLAIR mismatch have been verified to be useful in the IDH mutant and IDH wild type differentiation [9]. However, these radiographic features rely on subjective visual assessment of MRI images. It is difficult for the radiologist to distinguish glioma genotypes based on these radiographic features in clinical practice. Fortunately, leveraging the recent advances in machine learning approaches, such as deep learning (DL), SVM, decision tree, etc., IDH mutation status prediction from MRI can be operated accurately and objectively [11][12][13][14][15][16][17]. Among them, DL approaches have received the most notable attention for the reason of their outstanding performance in the molecular biomarker prediction from high-dimensional numeric information or image signal intensities [18][19][20]. Besides IDH prediction [11][12][13][14][15][16][17], DL is also widely applied to 1p/19q [21,22], MGMT [23,24] prediction, etc.
Most of the previous DL studies comprise two stages. Firstly, the glioma region is manually or automatically segmented along the lesion edge. Subsequently, another classifier is trained to discover abstract task-specific features from the lesion region and predict IDH mutation from these features [18]. However, manual segmentation of the glioma is subjective and time consuming. Training an automatic glioma segmentation network also requires extra manual annotation, and the network performance is highly vulnerable to image quality, which restricts efficient implementation in a real oncology workflow. Additionally, it has been shown that peritumoral tissue provides helpful information for diagnosis and prognosis prediction [25][26][27][28]. Therefore, this study hypothesizes that precisely segmenting the glioma lesion on the MRI is not compulsory for IDH prediction using deep learning. Moreover, almost all previous DL studies use classical convolutional neural networks (CNNs) to predict IDH mutation status, such as ResNet [11,13,15,17], the most widely used CNN in IDH prediction. However, newer DL architectures, such as the Transformer, have seldom been applied to IDH prediction. The Transformer, a novel neural architecture whose empirical performance significantly outperforms conventional CNNs, can effectively capture long-range contextual relations between image pixels and has become a state-of-the-art approach for medical image representation [29][30][31][32]. Until now, only one study has applied this framework to IDH mutation status prediction using the TCIA dataset [32], and more research is needed to demonstrate its generalization and to compare it with CNNs.
Thus, this study endeavors to build a Transformer-based model to predict the IDH mutation status free of refined tumor segmentation. The following experiments are conducted: (i) Transformer-based and CNN-based models are established, respectively. (ii) To evaluate the feasibility of IDH mutation status prediction free of refined tumor segmentation, seven different kinds of image inputs are defined, with different rectangle sizes and different amounts of peritumoral tissue. (iii) Clinical information relevant to the predictions is added to optimize the model performance. Only T2 images are used for model building in this study, as they are acquired routinely and showed the best performance in IDH genotyping [12].
Materials and Methods
This retrospective study received approval from the ethical review board of Affiliated Hospital of Xuzhou Medical University (AHXZ), Xuzhou, China. The data were anonymous and the requirement for informed consent was waived.
Patients
The data curated from The Cancer Imaging Archive (TCIA, https://www.cancerimagingarchive.net/, accessed on 5 March 2021) was used for model development and internal testing. The patients met the following criteria: (i) pathologically confirmed glioma; (ii) known IDH protein expression; (iii) inclusive preoperative T2 MRI images; (iv) age ≥ 18 years. Corresponding molecular genetic information was obtained from The Cancer Genome Atlas (TCGA) and referred to the previous studies [11,12,21,24]. The list of enrolled patients from TCIA is elaborated in Supplementary Materials.
The external test set was curated from AHXZ, a total of 488 patients who were diagnosed as gliomas (grades II-IV) from January 2015 to December 2020 at AHXZ were considered for inclusion, as shown in Figure 1A. The inclusion criteria were in accordance with the TCIA set and the exclusion criteria were as follows: (i) the absence of IDH protein expression (N = 152); (ii) missing preoperative axial T2 images (N = 72); (iii) history of brain tumor treatment (N = 30).
In a nutshell, the dataset (N = 493) used for this study included a cohort from TCIA for model development and internal test (N = 259) and another cohort from AHXZ for external test (N = 234), as shown in Figure 1B. TCIA IDH mutation status was determined by Sanger sequenced DNA methods and exome sequencing of whole-genome amplified DNA. The AHXZ IDH expression was detected by immunohistochemistry. Additional clinical data of gliomas, including gender, age, and grade distributions, were also collected.
Study Design
The overall study design is summarized in Figure 2. Five key steps are described, including tumor delineation, image processing and augmentation, image input definition, network development using different network architectures, and hybrid model development. Ultimately, 16 models were considered for comparison: 7 Swin Transformer models with different image input strategies, 7 ResNet models with different image input strategies, and another 2 hybrid models that integrate images with clinical features.
Figure 2. Overview of this study design. This study includes five key steps: tumor delineation, image preprocessing and augmentation, image inputs definition, network development, and hybrid model development.
Tumor Delineation

Using InferScholar (an online research platform available at https://research.infervision.com/, Beijing, China), the tumor was outlined on the T2 weighted images. Two regions were contoured for each patient from the T2 weighted images. The tumor region was masked if it contained necrosis, cyst, or hemorrhage, and the edema region that surrounded the tumor region (if present; note that some patients do not have an edema region) was masked separately. Illustrative examples of the annotated image are shown in Figure 3, where the tumor region is marked in red and edema in cyan. Tumor masks for all the subjects were manually drawn by one neuroradiologist and independently validated by another senior neuroradiologist with more than 10 years of experience in neuroradiology.

Meanwhile, the lesion location information, including location features and hemisphere distribution, was recorded and confirmed. The location features were reviewed on the T2 weighted images by the neuroradiologist based on six pre-defined location options, namely frontal lobe, temporal lobe, occipital lobe, parietal lobe, others (insula, basal ganglia, thalamus, cerebellum, brainstem), and multiple lobes [33,34]. Spurred by the location features, this research defined one more clinical feature, i.e., hemisphere distribution. Hemisphere distribution includes four categories: left side, right side, both sides, and others (cerebellum and brain stem), which probes whether the hemispheric location of a glioma is related to its IDH mutation status.
Imaging Preprocessing and Augmentation
The most commonly used T2 image acquisition parameters are summarized in Supplementary Figure S1, including MagneticFieldStrength (T), SliceThickness (mm), Manufacturer, and PixelSpacing (mm). MagneticFieldStrength: 3T in the TCIA (42.9%) and in the AHXZ (93.2%); 1.5T in the TCIA (46.1%) and in the AHXZ (6.8%). SliceThickness: 5 mm in the TCIA (61.0%) and 6 mm in the AHXZ (91.1%). Manufacturer: GE in the TCIA (53.7%) and in the AHXZ (86.0%); Philips in the TCIA (11.6%) and in the AHXZ (8.1%); and SIEMENS.
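As an aside for readers who want to reproduce such a parameter summary, the sketch below shows how the four reported DICOM fields could be collected with pydicom; the cohort directory layout and the summary loop are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: reading the four acquisition parameters reported in
# Supplementary Figure S1 from DICOM headers with pydicom.
from collections import Counter
from pathlib import Path

import pydicom

def read_acquisition_params(dicom_file: str) -> dict:
    """Return the four header fields summarized above for one DICOM file."""
    ds = pydicom.dcmread(dicom_file, stop_before_pixels=True)
    return {
        "MagneticFieldStrength": getattr(ds, "MagneticFieldStrength", None),  # Tesla
        "SliceThickness": getattr(ds, "SliceThickness", None),                # mm
        "Manufacturer": getattr(ds, "Manufacturer", None),
        "PixelSpacing": tuple(ds.PixelSpacing) if "PixelSpacing" in ds else None,  # mm
    }

if __name__ == "__main__":
    # Tally one field across an assumed cohort directory of .dcm files.
    counts = Counter()
    for f in Path("cohort_dicoms").rglob("*.dcm"):
        counts[read_acquisition_params(str(f))["Manufacturer"]] += 1
    print(counts)
```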
All T2 images were preprocessed sequentially: (i) N4 bias-field correction; (ii) intensity normalization to zero mean and unit variance; (iii) selection of the slices that involved the tumor region, discarding the first and the last such slice of each case to prevent interference from edge slices; (iv) resampling to a size of 256 × 256 and expanding to three channels by simply repeating the first channel. We leveraged data augmentations, including geometric transformations and intensity transformations, to improve the model generalization ability. Extra hyperparameters involved in preprocessing and augmentation are detailed in the Supplementary Materials.
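A minimal sketch of this four-step chain is given below, assuming SimpleITK for the N4 bias-field correction and NumPy/OpenCV for the remaining steps; the per-case list of tumor-bearing slice indices is assumed to come from the annotations, and the helper name is ours, not the authors'.

```python
# Sketch of the described preprocessing chain (steps i-iv), under the
# assumptions stated above.
import numpy as np
import SimpleITK as sitk
import cv2

def preprocess_t2_volume(nifti_path: str, tumor_slices: list[int]) -> np.ndarray:
    # (i) N4 bias-field correction
    img = sitk.Cast(sitk.ReadImage(nifti_path), sitk.sitkFloat32)
    corrector = sitk.N4BiasFieldCorrectionImageFilter()
    img = corrector.Execute(img)
    vol = sitk.GetArrayFromImage(img)              # shape: (slices, H, W)

    # (ii) intensity normalization to zero mean, unit variance
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)

    # (iii) keep tumor-bearing slices, dropping the first and last of the case
    kept = sorted(tumor_slices)[1:-1]

    # (iv) resample to 256 x 256 and repeat to three channels
    out = []
    for idx in kept:
        sl = cv2.resize(vol[idx], (256, 256), interpolation=cv2.INTER_LINEAR)
        out.append(np.stack([sl, sl, sl], axis=0))  # (3, 256, 256)
    return np.asarray(out, dtype=np.float32)
```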
Image Inputs Definition
To probe the feasibility of IDH mutation status prediction without refined tumor segmentation, seven input image strategies were proposed, differing in how much information about the tumor region they used, as depicted in Table 1 and Figure 3.
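To make the bounding-box strategies (iv)–(vii) concrete, the following sketch derives a tumor bounding box from the binary mask and scales it about its center by the chosen factor (0.8, 1.0, 1.2, or 1.5); the function names are illustrative, not taken from the paper.

```python
# Illustrative sketch of the scaled tumor bounding box used as model input.
import numpy as np

def scaled_tumor_bbox(mask: np.ndarray, scale: float) -> tuple[int, int, int, int]:
    """Box around the mask's foreground, scaled about its center, clipped to image."""
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
    hh, hw = (y1 - y0) / 2.0 * scale, (x1 - x0) / 2.0 * scale
    h, w = mask.shape
    return (max(int(cy - hh), 0), min(int(cy + hh) + 1, h),
            max(int(cx - hw), 0), min(int(cx + hw) + 1, w))

def crop_input(image: np.ndarray, mask: np.ndarray, scale: float = 1.0) -> np.ndarray:
    y0, y1, x0, x1 = scaled_tumor_bbox(mask, scale)
    return image[y0:y1, x0:x1]   # cropped region is later resized to 256 x 256
```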
Network Development Using Different Network Architectures
To investigate the superiority of the Transformer network in IDH genotyping, we developed IDH status prediction models using the Swin Transformer and the CNN-based ResNet architecture, respectively.
The Swin Transformer, a hierarchical vision Transformer using shifted windows, has become one of the most popular architectures for tackling computer vision tasks [35]. Since the Swin Transformer had never been used in IDH genotyping, or more generally in biomarker prediction from MRIs, this study employed it to bridge this gap. Since ResNet has been the most widely used network in previous studies and has performed well in IDH mutation status prediction [11,13,15,17], this study used only ResNet to build the CNN-based model.
(1) The Swin Transformer network development

The entire classification process and the Swin Transformer architecture are illustrated in Figure 4. MRI image inputs (matrix: 256 × 256) were subdivided into non-overlapping 4 × 4 patches, which were then converted into sequences by flattening. Then, linear image embedding was conducted in stage 1 to preserve positional information about the images, and their features were extracted with a Swin Transformer block. In stage 2, a downsampling process was performed in the patch merging layer to merge adjacent 2 × 2 patches into one patch. As the network deepens, hierarchical representations, as in a CNN, can be extracted by the Swin Transformer blocks. A total of four stages were used to generate the final representation. A global average pooling layer was applied to the output feature map of the last stage (i.e., the class token) in the classification head, and a linear classifier output the prediction. The Swin Transformer block is also displayed in Figure 4 and detailed in the Supplementary Materials.

(2) The CNN-based ResNet network development

The conventional CNN-based network was derived from the well-known 101-layer ResNet architecture (i.e., ResNet-101) [36] and initialized using the ImageNet pretrained weights. The ResNet block is displayed in Supplementary Figure S2.
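A hedged sketch of the two backbones is shown below using torchvision with ImageNet weights and a two-class head (IDH-mutant vs. IDH-wild). ResNet-101 follows the text, but the specific Swin variant (swin_t) is our assumption, as the paper does not state which configuration was used.

```python
# Sketch of the two compared backbones; not the authors' exact code.
import torch.nn as nn
from torchvision import models

def build_swin(num_classes: int = 2) -> nn.Module:
    # torchvision's Swin pads feature maps to window multiples, so it also
    # accepts the 256 x 256 inputs described above.
    net = models.swin_t(weights=models.Swin_T_Weights.IMAGENET1K_V1)
    net.head = nn.Linear(net.head.in_features, num_classes)
    return net

def build_resnet101(num_classes: int = 2) -> nn.Module:
    net = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net
```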
Hybrid Model Development
Additional learnable fully connected layers were added to the top-performing Swin Transformer and ResNet, respectively, to build the hybrid networks, which used the additional numeric inputs as a complement to the image inputs. Only clinical features showing a significant difference between the IDH-mutant and IDH-wild groups were used for hybrid model building.
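One plausible realization of this hybrid design is sketched below: the backbone's classifier is replaced with an identity so that image features can be concatenated with the numeric clinical inputs (e.g., age and one-hot location) before the added fully connected layers. The hidden size of 128 is an assumption; the paper does not specify the added layers' widths.

```python
# Hedged sketch of the hybrid image + clinical-feature network.
import torch
import torch.nn as nn

class HybridIDHNet(nn.Module):
    def __init__(self, backbone: nn.Module, img_feat_dim: int, clin_dim: int):
        super().__init__()
        self.backbone = backbone           # backbone with its classifier removed
        self.fc = nn.Sequential(
            nn.Linear(img_feat_dim + clin_dim, 128),  # hidden width assumed
            nn.ReLU(inplace=True),
            nn.Linear(128, 2),             # IDH-mutant vs. IDH-wild logits
        )

    def forward(self, image: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(image)                      # (B, img_feat_dim)
        return self.fc(torch.cat([feats, clinical], 1))
```

For instance, with the torchvision swin_t backbone sketched earlier, one would set net.head = nn.Identity() and pass img_feat_dim = 768, the final stage width of that variant.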
Network Implementation
The slices per patient were considered as individual samples in model development and testing, which means that each slice had its own diagnostic probability. For the per-patient probability, the mean probability of all the slices was used. Through this slice-level strategy, the prediction at case level is less affected by abnormal slice samples, achieving better model generalization. Accordingly, we split the TCIA dataset into a training set and an internal test set.

All the models, which were trained with T2 images, were implemented with PyTorch on an Ubuntu 16.04 server using four NVIDIA GeForce RTX 3090 GPU devices. For the Swin Transformer, the initial learning rate was set to 1 × 10−5 with a batch size of 32 and a maximum of 300 iterations. For the ResNet model, the initial learning rate was set to 1 × 10−4 with a batch size of 32 and a maximum of 300 iterations. An early-stopping strategy was used to speed up the training stage, i.e., training was stopped when the loss on the training set did not decrease within five epochs. We employed StepLR with default parameters as the learning rate scheduler. The Adam optimizer was used for network optimization with β1 = 0.9 and β2 = 0.99.
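The stated configuration could be wired up as in the sketch below. The StepLR step size and decay are assumptions (the paper only says "default parameters", and StepLR has no default step size), train_one_epoch is a user-supplied placeholder, and the 300 "iterations" are treated here as an upper bound on training epochs.

```python
# Sketch of the described training setup, under the assumptions stated above.
import torch

model = build_swin()  # from the earlier backbone sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.99))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

best_loss, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(300):                            # upper bound of 300 rounds
    epoch_loss = train_one_epoch(model, optimizer)  # placeholder training loop
    scheduler.step()
    if epoch_loss < best_loss:
        best_loss, bad_epochs = epoch_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                  # stop after 5 stagnant epochs
            break
```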
Statistical Analysis
The statistical analysis was performed in SPSS 26.0, with p < 0.05 considered significant. Continuous variables were expressed as means with corresponding standard deviations, and categorical variables were described as proportions. Non-normally distributed continuous variables were compared using the Mann-Whitney U test, and differences in categorical variables were assessed by the chi-squared test or Fisher's exact test, both between the train set and the test set and between the IDH-mutant and IDH-wild groups.
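Although the authors used SPSS 26.0, the same comparisons can be illustrated with SciPy, as in the sketch below; the group data here are synthetic placeholders, not values from the study.

```python
# Illustrative group comparisons: Mann-Whitney U for a continuous variable
# (e.g., age) and chi-squared for a categorical one (e.g., gender).
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)
age_mutant = rng.normal(40, 10, 120)        # placeholder ages, IDH-mutant group
age_wild = rng.normal(55, 12, 100)          # placeholder ages, IDH-wild group
sex_table = np.array([[70, 50], [55, 45]])  # placeholder 2x2 gender counts

u_stat, p_age = mannwhitneyu(age_mutant, age_wild)
chi2, p_sex, dof, expected = chi2_contingency(sex_table)
print(f"age: p = {p_age:.3g}; gender: p = {p_sex:.3g}")
```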
Receiver operating characteristic (ROC) curve analysis was performed to obtain the area under the curve (AUC). The probability threshold for the accuracy (ACC) calculation was set to 0.5; thus, a predicted probability of ≥0.5 was classified as IDH-mutant, and other values were classified as IDH-wild. The diagnostic probability per patient was measured as the mean probability over all involved slices.
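The patient-level evaluation described here amounts to averaging slice probabilities per patient before computing AUC and thresholding at 0.5, as in the following sketch; the data structures are illustrative.

```python
# Sketch of patient-level AUC/ACC from per-slice probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

def patient_level_metrics(slice_probs: dict[str, list[float]],
                          labels: dict[str, int]) -> tuple[float, float]:
    """slice_probs maps patient id -> slice probabilities; labels: 1 = IDH-mutant."""
    pids = sorted(slice_probs)
    y_true = np.array([labels[p] for p in pids])
    y_prob = np.array([np.mean(slice_probs[p]) for p in pids])  # per-patient mean
    auc = roc_auc_score(y_true, y_prob)
    acc = accuracy_score(y_true, (y_prob >= 0.5).astype(int))   # 0.5 threshold
    return auc, acc
```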
Patient Data
As shown in Table 2, in terms of IDH status, gender, age, and WHO grade, the AHXZ set showed no difference compared with the TCIA set (p = 0.053, p = 0.277, p = 0.678, and p = 0.059, respectively), while significant differences were found in location features (p < 0.05) and hemisphere distribution (p = 0.01). According to the statistical comparisons between the IDH-mutant and IDH-wild groups in the two datasets, the age of the IDH-wild group was significantly higher than that of the IDH-mutant group in both the TCIA set (p < 0.05) and the AHXZ set (p < 0.05). Location features (p < 0.05, p = 0.002) and WHO grade (p < 0.05, p < 0.05) also showed significant differences in the TCIA and AHXZ sets, respectively. No significant difference was found in gender or hemisphere distribution.
Lastly, only age and location features were retained for hybrid model development. We did not use the WHO grade in the hybrid model building because it remains unknown prior to surgery.
Performance of the Models with Different Architectures and Input Image Strategies
The results of the Swin Transformer and ResNet models on both the TCIA internal test set and the AHXZ external test set are summarized in Table 3. Only the patient-level results are displayed in Table 3; the corresponding slice-level results are provided in Supplementary Table S2. With the seven proposed image input strategies, seven Swin Transformer and seven ResNet models were built, respectively. The seven Swin Transformer models obtained an average internal test AUC, internal test ACC, external test AUC, and external test ACC of 0.965, 92.3%, 0.842, and 76.6%, respectively, while those of the ResNet models were 0.922, 89.3%, 0.805, and 74.9%, respectively (Table 3). Despite the difference in image inputs, all the Transformer models consistently achieved higher AUCs than the corresponding ResNets (Figure 5a).
As shown in Figure 5b, the highest AUC (0.984) and ACC (96.2%) for the Swin Transformer in the internal test were obtained using 1.5× Tumor Bbox as inputs, followed by
Performance of the Hybrid Model
According to the above results, we built the hybrid model with the 1.0× Tumor Bbox as image input. Besides the image input, age and location information was also used as input in the hybrid model. Compared to the image-based models, the hybrid model achieved similar results with both the Swin Transformer (AUC = 0.975, ACC = 96.2%) and the ResNet network (AUC = 0.960, ACC = 93.2%) in the internal test set, while in the external test set, better results were obtained with both the hybrid Swin Transformer (AUC = 0.878, ACC = 82%) and the hybrid ResNet (AUC = 0.833, ACC = 78.1%), as shown in Figure 6.
Discussion
IDH mutation status has great clinical significance and potentially improves the glioma treatment selection. DL approaches built based on MR images are expected to be an efficient alternative to standard invasive biopsy approaches for the IDH status determination and are robust computer-aided diagnostic tools that can be used to assist radiologists. Thus, this study leverages the Swin Transformer as the backbone to tackle three problems in IDH prediction: (1) IDH mutation status forecasting using Transformer backbones rather than CNN. (2) Free of glioma segmentation and consideration of peritumoral tissue.
(3) Important clinical information relevant to IDH mutation predictions. Empirically, the Swin Transformer consistently outperformed conventional ResNet models. When the 1.0× Tumor Bbox input was used, the Swin Transformer achieved better performance and generalization, compared with the model that used refined tumor segmentation (Tumor + Edema). Similar results were observed with ResNet. Furthermore, the hybrid model that combined images and clinical features (age, location feature) as inputs demonstrated a performance improvement on the external dataset. To our knowledge, this is the first study to use the Swin Transformer network and a tumor bounding box to predict IDH mutation status and to test them on an external dataset.
Compared to previous studies, our top-performing image-based Swin Transformer model achieved robust results in both the internal test set (AUC = 0.975, ACC = 96.2%) and the external test set (AUC = 0.868, ACC = 80.7%) for IDH prediction. Two early image-based CNN models obtained a comparably high accuracy of 0.94-0.97 on the internal public dataset [12,14] without performing external testing on a separate dataset. There were also two image-based studies that performed external testing [11,15]. Choi [11] achieved IDH mutation prediction with AUC = 0.81 and ACC = 73.5% in external testing, using multimodal images as inputs rather than single T2 images. In Ken's study [15], T2 image-based external testing achieved AUC = 0.73 and ACC = 67.3%, which was inferior to our results. Besides the CNN-based studies, only one study has introduced the Transformer to IDH genotyping [32], without external testing, achieving an internal TCIA test AUC of 91.04% and ACC of 90%, both lower than our internal test results. Our Swin Transformer network with bounding box inputs showed great potential in IDH mutation prediction.
The Swin Transformer yielded overall better performance than ResNet, consistently across the same image input strategies. Since IDH expression shows no distinctive signs on conventional MR images, improving the feature-learning efficiency of the DL model is a great challenge. Three structures contributed to the Swin Transformer's superiority in feature learning and classification: (i) Multi-head self-attention, which provides good noise suppression. Specifically, owing to the inherent tumor heterogeneity of glioma and the diffuse lesion boundary, considerable noise is mixed with the information relevant to IDH genotyping; compared with CNNs, the Transformer network is less sensitive to such signal noise [37][38][39]. (ii) The hierarchical architecture, inspired by the translation-invariance advantage of CNNs, had the flexibility to model at various scales. Although the image inputs had a high diversity in size, the hierarchical architecture enabled the model to capture distinct phenotypic differences on regional patches as well as the whole lesion. ResNet is good at deep feature representation, but still has limitations in modeling explicit global context due to the intrinsic locality of convolutional operations [40]. (iii) Shifted windows ensured global information interaction. The Swin Transformer can effectively capture long-range contextual relations between image pixels while maintaining low-level feature extraction [35]. In general, the Swin Transformer has a promising future for accurate and robust imaging-based molecular prediction [40].
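To illustrate point (iii), the toy fragment below shows the core window mechanics: feature maps are split into non-overlapping windows for local self-attention and cyclically shifted between consecutive blocks so that information crosses window borders. This is a didactic sketch, not the paper's implementation [35].

```python
# Didactic sketch of (shifted-)window partitioning in the Swin Transformer.
import torch

def window_partition(x: torch.Tensor, ws: int) -> torch.Tensor:
    """(B, H, W, C) -> (num_windows*B, ws*ws, C); assumes H and W divisible by ws."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

def cyclic_shift(x: torch.Tensor, ws: int) -> torch.Tensor:
    """Shift by half a window so the next block's windows straddle old borders."""
    return torch.roll(x, shifts=(-ws // 2, -ws // 2), dims=(1, 2))
```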
To the best of our knowledge, all previous studies obtained their IDH mutation prediction results with tumor segmentation as inputs. Different from previous studies, our model pioneered the tumor bounding box as input and achieved outstanding performance compared to models using the tumor mask or larger boxes. Several merits of bounding box inputs deserve discussion. Firstly, it is paradoxical to delineate the infiltrating margins of diffuse glioma in a refined manner, while a bounding box has higher fault tolerance and reproducibility [25,26]. Secondly, the rectangular frame not only involved the glioma lesion but also contained the peritumoral area, where the tumor microenvironment might provide additional information contributing to diagnosis [28]. Thirdly, bounding box drawing is also friendly to clinical practice: compared with elaborate margin drawing, box drawing relies only on the rough lesion position and largely reduces the labor cost of labeling. Moreover, the bounding box of 1.0 times best captured the IDH mutation status in our results, and we might deduce that resection of this region could confer a survival benefit to the patient [41].
Pretreatment age and location features can be easily obtained and correlate well with IDH mutation status [11,15,33,34]. Our study demonstrated that patients with IDH-mutant gliomas were significantly younger than the IDH-wild groups in both the TCIA and AHXZ datasets. The location feature results in this study were also in line with previous studies, in which IDH-mutant gliomas more frequently occupied a single frontal lobe, whereas IDH-wild gliomas were predominantly located in multiple lobes [33]. Importing the age and location features might therefore be a viable option to improve the model performance. However, compared to the image-based model results, our hybrid model obtained little performance improvement in the internal test and only slightly better performance in the external test. Two reasons might account for this result. Firstly, the image-based DL model performance on the internal test was already good and may have reached the model's ceiling. Secondly, the location feature distribution differed significantly between TCIA and AHXZ, which weakened its efficacy in the external testing. More research is needed to probe the necessity of importing clinical features into the Transformer-based image model.
Limitations
Several limitations merit discussion. Firstly, this study only focused on T2 images as model inputs, without considering other MR image modalities such as T1 contrast images and diffusion-weighted images (DWI). Given that our goal was to establish a clinically feasible model with the most widely available T2 images, using multi-modality images might limit the model's feasibility. Moreover, previous studies indicated that models constructed from T2 images showed better performance than the multi-modality network [12]. Secondly, as a representative of real-world clinical experience, the TCIA data set, with multiparametric MR images from multiple institutions, lends itself to training a model of good robustness. Although the TCIA data set was applied to train the models in this study, only one external test cohort was used, and model generalization to more external datasets needs to be assessed. Thirdly, this is a preliminary study using the Swin Transformer for IDH genotyping; we therefore look forward to investigating its further clinical use by optimizing its structure to enhance model efficiency. Moreover, compared to radiomics [42], the interpretability of the Swin Transformer remains a challenge and needs further investigation.
Conclusions
In this research, we developed a robust IDH mutation status prediction model based on T2-weighted images: (i) the Swin Transformer outperformed the ResNet in predicting IDH mutation status; (ii) bounding box input images benefited the Swin Transformer in IDH prediction and made IDH prediction feasible without refined glioma segmentation. The Swin Transformer with bounding box input images might have a promising future in clinical practice, facilitating individualized treatment planning.
Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/jcm11154625/s1, Figure S1: Image Acquisition Parameters; Figure S2: ResNet architecture; Figure S3: ROCs of all the image-based models; Table S1: List of the enrolled patients from TCIA set; Table S2: Slice-level diagnostic performance of the models for the IDH status prediction. | 2022-08-11T15:20:34.759Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "e8762193f50d977ad7768097bfb1f726014b84a7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/15/4625/pdf?version=1659954731",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cd170a1b796fbbc363a2cedd4333a465f432f7a0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
9043489 | pes2o/s2orc | v3-fos-license | Low high‐density lipoprotein cholesterol level is a significant risk factor for development of type 2 diabetes: Data from the Hawaii–Los Angeles–Hiroshima study
Abstract Aims/Introduction A low level of high‐density lipoprotein cholesterol (HDLC) is a common feature of metabolic syndrome. We have reported that Japanese–Americans who share a virtually identical genetic makeup with native Japanese, but who have lived Westernized lifestyles for decades, have lower HDLC levels and a high prevalence of type 2 diabetes compared with native Japanese. However, the impact of low HDLC level on type 2 diabetes is unclear. The aims of the present study were to evaluate whether serum HDLC level was associated with development of type 2 diabetes and if the effect might be modified by lifestyle. Materials and Methods We examined 1,133 non‐diabetic Japanese–Americans and 1,072 non‐diabetic Japanese, who underwent the 75‐g oral glucose tolerance test (OGTT) and were followed for an average of 8.8 and 7.0 years, respectively. We analyzed whether serum HDLC level is a risk factor for development of type 2 diabetes based on the Cox proportional hazards model. Results After adjustment for age and sex, hazard ratios for development of type 2 diabetes per unit of serum HDLC level (mmol/L) were 0.292 (95% confidence interval [CI] 0.186–0.458, P < 0.0001) among Japanese–Americans and 0.551 (95% CI 0.375–0.88, P = 0.0023) among native Japanese. Comparable hazard ratios after further adjustment for category of OGTT and body mass index were 0.981 (95% CI 0.970–0.993, P = 0.0018) and 0.991 (95% CI 0.980–1.002, P = 0.112), respectively. Conclusions HDLC level was associated with development of type 2 diabetes in both Japanese–Americans and native Japanese. However, these results suggest that the impact of high‐density lipoprotein on glucose metabolism might be affected by lifestyle.
INTRODUCTION
Epidemiological studies have shown that low levels of high-density lipoprotein (HDL) cholesterol (HDLC) are associated with cardiovascular disease risk 1,2 . A recent report showed that HDL protects against cardiovascular disease in both males and females, independent of age, smoking status, systolic blood pressure and total cholesterol 3 . In addition, considering the global epidemics of type 2 diabetes and metabolic syndrome, the impact of low HDLC level as a risk factor for cardiovascular disease is likely to increase rapidly in the future 4,5 .
HDL exerts anti-atherogenic actions through its intrinsic anti-oxidative and anti-inflammatory properties 6 . In addition, increased reactive oxygen species levels are thought to be an important trigger of insulin resistance 7 , a common feature of type 2 diabetes. Accordingly, low HDL level might be associated with impaired glucose tolerance (IGT) and development of type 2 diabetes. Japanese-Americans who share a virtually identical genetic makeup with native Japanese currently living in Japan have lived Westernized lifestyles for decades 8,9 . We have reported that the prevalence of metabolic syndrome among Japanese-Americans is significantly higher, and serum HDLC levels are significantly lower, than among native Japanese 10 . In addition, we have reported that the prevalences of type 2 diabetes and cardiovascular disease among Japanese-Americans are significantly higher than among native Japanese 8 . The purpose of the present study was to investigate the impact of serum HDLC level on the development of type 2 diabetes, and to investigate whether its effect was modified by Westernized lifestyle based on a comparison between Japanese-Americans living in Hawaii and Los Angeles, and Japanese living in Japan.
Study Participants and Methods
The Hawaii-Los Angeles-Hiroshima study, initiated in 1970, is part of a long-term epidemiological study of risk factors for diabetes and cardiovascular disease in which subjects living in Hawaii and Los Angeles, California, were limited to a population genetically identical to the Japanese population. This epidemiological study was previously described in detail elsewhere [10][11][12][13] . The Hiroshima Atomic Bomb Casualty Council, Health Management and Promotion Center provides health management services to approximately 110,000 atomic bomb survivors living primarily in Hiroshima, Japan 14 .
Study participants were Japanese-Americans consisting of 487 men and 646 women who were enrolled in medical surveys carried out from 1988 to 2010, and native Japanese consisting of 438 men and 634 women, matched on age and sex to the Japanese-Americans, who were enrolled in medical surveys carried out from 1963 to 2012. Participants were free from diabetes at start of follow up, as ascertained by the 75-g oral glucose tolerance test (OGTT), and were examined at least twice during the study periods.
Participants underwent physical examinations and provided blood samples after an overnight fast. The Japanese-American participants underwent the OGTT during each follow-up examination. The Japanese participants underwent the OGTT a few days later if their plasma glucose was ≥5.55 mmol/L at fasting, ≥7.21 mmol/L within 1.5 h after eating, ≥6.66 mmol/L between 1.5 and 2.5 h after eating, or ≥6.10 mmol/L beyond 2.5 h after eating, and if they showed glycosuria in the course of a screening health examination at the Hiroshima Atomic Bomb Casualty Council, Health Management and Promotion Center 14 . All incident diabetes cases were diagnosed on the basis of the OGTT according to the 1997 American Diabetes Association criteria (fasting glucose ≥7.0 mmol/L or 2-h glucose ≥11.1 mmol/L after an OGTT) 15 .
Participants were free of infectious symptoms, autoimmune diseases and other acute conditions, as assessed by medical interview. Written informed consent was obtained. The study was approved by the ethics committees of Hiroshima University, the Council of Hiroshima Kenjin-Kai Association in Hawaii and Los Angeles, and the Hiroshima Atomic Bomb Casualty Council, Health Management and Promotion Center.
Statistical Analysis
Data are described as mean ± standard deviation. Because the triglyceride and body mass index (BMI) variables did not conform to normal distributions, they were analyzed after logarithmic transformation. Continuous variables were compared by analysis of covariance. Differences in frequency between the Japanese-Americans and native Japanese were tested by the χ2-test. To test the significance of HDLC level as a predictor of incidence of type 2 diabetes, HDLC concentration was divided into quartiles based on population values (<1.11, 1.11-1.37, 1.38-1.60 and >1.60 mmol/L in Japanese-Americans; and <1.34, 1.34-1.60, 1.61-1.89, and >1.89 mmol/L in native Japanese); quartile-specific hazard ratios were estimated with the Cox proportional hazards model. With respect to potential confounders, adjustment was made for continuous age and BMI, as well as categorical sex and OGTT (normal glucose tolerance [NGT] and IGT). Hazard ratios were estimated after adjustment by two sets of potential confounders: the first set comprised age and sex only, and the second set comprised age, sex, category of OGTT, and BMI. The proportional hazards assumption was verified by inspection of log-log survival curves, and by examination of Schoenfeld partial residuals 16 . The SAS software package version 8.2 (SAS Institute, Cary, NC, USA) was used for analyses.
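To make the modeling step concrete, the sketch below reproduces the described analysis shape with the `lifelines` package: HDLC is cut into quartiles, and a Cox proportional hazards model is fitted with the second adjustment set (age, sex, OGTT category, log-BMI). The simulated data frame and all column names are assumptions for illustration; the original analysis was performed in SAS 8.2.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500  # simulated cohort; the actual study followed 1,133 and 1,072 participants

df = pd.DataFrame({
    "age": rng.normal(61, 9, n),
    "male": rng.integers(0, 2, n),
    "igt": rng.integers(0, 2, n),                      # OGTT category: IGT (1) vs NGT (0)
    "log_bmi": np.log(rng.normal(24, 3, n).clip(15)),  # BMI was log-transformed
    "hdlc_mmol_l": rng.normal(1.4, 0.3, n).clip(0.5),
})
# simulate a protective HDLC effect, matching the study's direction of association
hazard = np.exp(-1.0 * df["hdlc_mmol_l"].to_numpy())
time = rng.exponential(1.0 / hazard)
df["years_followed"] = np.minimum(time, 12.0)          # administrative censoring
df["developed_t2d"] = (time < 12.0).astype(int)

# quartile indicator variables, first quartile as reference
df["hdlc_q"] = pd.qcut(df["hdlc_mmol_l"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
design = pd.get_dummies(df.drop(columns=["hdlc_mmol_l"]),
                        columns=["hdlc_q"], drop_first=True, dtype=int)

cph = CoxPHFitter()
cph.fit(design, duration_col="years_followed", event_col="developed_t2d")
cph.print_summary()            # exp(coef) gives the quartile-specific hazard ratios
cph.check_assumptions(design)  # Schoenfeld-residual check, as done in the paper
```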
RESULTS
Japanese-American participants were followed for an average of 8.75 ± 5.27 years, and the mean age at the time of follow-up initiation was 61.3 ± 10.8 years. Native Japanese participants were followed for an average of 7.00 ± 4.39 years, with the mean age at the time of follow-up initiation being 61.9 ± 7.1 years. Baseline clinical characteristics of the participants are shown in Table 1. A total of 181 and 175 participants developed diabetes during the follow-up period among Japanese-Americans and native Japanese, respectively. The proportion of IGT participants among the native Japanese was greater than among Japanese-Americans (P < 0.0001). The Japanese-Americans had significantly higher systolic blood pressure (SBP; P < 0.0001), BMI (P = 0.013), triglycerides (P < 0.0001) and non-HDL cholesterol (P < 0.0001) compared with the native Japanese. The Japanese-Americans had significantly lower fasting glucose (P < 0.0001), 2-h glucose (P < 0.0001) and HDLC level (P < 0.0001) compared with the native Japanese.
Clinical characteristics of participants at baseline, divided by quartiles of HDLC after adjustment for age, sex and category of OGTT, are shown in Tables 2 and 3. Among Japanese-Americans, the participants in the third and fourth quartiles were significantly older, and had lower 2-h glucose compared with participants in the first quartile (P < 0.05). Participants in the second, third and fourth quartiles had significantly lower BMI, lower fasting glucose, lower triglycerides, and lower non-HDL cholesterol compared with participants in the first quartile (P < 0.05). Participants in the fourth quartile had significantly higher total cholesterol compared with participants in the first quartile (P < 0.05; Table 2). Among the native Japanese, participants in the second and fourth quartiles had significantly lower diastolic blood pressure (DBP) compared with participants in the first quartile (P < 0.05). Participants in the third and fourth quartiles had significantly lower BMI, lower fasting glucose, lower 2-h glucose and lower non-HDL cholesterol compared with participants in the first quartile (P < 0.05). Participants in the second, third, and fourth quartiles had significantly higher total cholesterol and lower triglycerides compared with participants in the first quartile (P < 0.05; Table 3).
DISCUSSION
The main finding of the present study is that a relationship exists between serum HDLC level and the development of type 2 diabetes both in Japanese-Americans and native Japanese. In addition, low serum HDLC level is strongly indicative of development of diabetes in Japanese-Americans compared with native Japanese. This result suggests that low HDLC level should be recognized as a risk factor for diabetes, especially among highly Westernized subjects.
In the present study, we assumed that serum HDLC level serves as an indicator of HDL. In other words, a high HDLC level indicates a high level of HDL anti-atherosclerosis and antidiabetes properties, such as anti-inflammatory and antioxidative effects of paraoxonase and apolipoprotein A-I (apoA-I) activity 17 . Accordingly, low HDL level was associated with development of type 2 diabetes after simple adjustment for age and sex in both Japanese-Americans and native Japanese. In addition, recent studies have shown that HDL level could be linked to the pathogenesis of type 2 diabetes because of the capacity of HDL to enhance pancreatic β-cell function and glucose uptake by skeletal muscle through adenosine monophosphate-activated protein kinase 18,19 . HDL also protects against stress-induced β-cell apoptosis and islet inflammation 20,21 . Consequently, individuals with low HDLC level might have insufficient insulin secretion and inadequate glucose uptake in skeletal muscle. In contrast, higher HDLC level, as well as higher HDLC/apoA-I and HDLC/apoA-II ratios, are reported to lower the risk of future development of type 2 diabetes 22 . In mice, a global deletion of apoA-I resulted in impaired glucose tolerance 23 , whereas apoA-I overexpression increased insulin sensitivity 24 . ApoA-I stimulates the adenosine monophosphate-activated protein kinase pathway in myocytes in vitro 23 . Therefore, HDL and apoA-I could increase insulin sensitivity and decrease insulin resistance. This raises the possibility that differences in apoA-I concentration might be related to the varied HDL effects on type 2 diabetes between Japanese-Americans and native Japanese.
We showed that, although trend analysis of the effect of HDLC on the development of type 2 diabetes after adjustment for age and sex was statistically significant both in Japanese-Americans and native Japanese, trend analysis after further adjustment for category of OGTT and BMI was statistically significant only in Japanese-Americans. With respect to category of HDLC level in the present study, the first and second quartiles among Japanese-Americans corresponded approximately to the first quartile among native Japanese, and the third quartile among Japanese-Americans overlapped almost completely with the second quartile among native Japanese. Therefore, it is possible that the trend in effect of HDLC level on development of type 2 diabetes among Japanese-Americans provides evidence of the same effect in the first and second quartiles among native Japanese. It suggests that, although HDLC level plays a protective role in prevention of type 2 diabetes in Japanese-Americans and native Japanese, it might be rate limiting, especially when HDLC level is very low.
The present study had several limitations. First, although participants did not have diabetes at baseline, medications for other medical conditions might have affected the study's findings. However, as far as we could ascertain in our investigations, medication use did not differ among the four quartiles of HDLC participants (data not shown). Second, for the native Japanese participants only, we had no data regarding family history of diabetes. Therefore, we were unable to use family history of diabetes as an adjustment factor. Third, HDLC level is generally known to be higher among women than among men, but we analyzed both sexes together. However, the numbers of men and women were almost the same in both populations of Japanese-Americans and native Japanese, and we used sex as an adjustment factor in all analyses, which provides a more powerful analysis than subset analyses separately for each sex as long as the sex adjustment is valid (i.e., there was no interaction between sex and other factors, as such interactions were not included in our models). Fourth, in the present study, fasting glucose and 2-h glucose were significantly higher in native Japanese than in Japanese-Americans, because the criteria for undergoing OGTT differed between Japanese-Americans and native Japanese. Furthermore, the number of IGT participants was larger among native Japanese than among Japanese-Americans, although the number of participants who developed type 2 diabetes was lower among native Japanese than among Japanese-Americans, which might have affected the results. Finally, the present study was observational. Hence, whether low HDLC is a cause of diabetes development is unclear. Further examination will be required.
In summary, we provide evidence that low HDLC level might be a risk factor for development of type 2 diabetes. This finding has the potential to add a new dimension to understanding the clinical relationship between glucose metabolism and HDLC level. The present study also suggests that HDL could exert a beneficial metabolic effect for prevention not only of cardiovascular disease, but also of diabetes, especially in Japanese-American subjects with low HDLC level.
Figure 1 | (a) Adjusted hazard ratios of type 2 diabetes among Japanese-Americans according to baseline serum HDLC concentrations from a Cox proportional hazards model. Bars represent 95% confidence intervals. Statistical significance of trend analysis: P < 0.0001 adjusted for age and sex (left side); P = 0.012 adjusted for age, sex, category of oral glucose tolerance test (OGTT) and body mass index (BMI) (right side). (b) Corresponding hazard ratios among native Japanese participants. Bars represent 95% confidence intervals. Statistical significance of trend analysis: P = 0.038 adjusted for age and sex (left side); P = 0.936 adjusted for age, sex, category of OGTT and BMI (right side). *P < 0.05 compared with the first quartile. | 2018-04-03T02:20:53.251Z | 2013-12-01T00:00:00.000 | {
"year": 2013,
"sha1": "3b232cf929d6ad29b1d5fff65233440606ae35c3",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jdi.12170",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3b232cf929d6ad29b1d5fff65233440606ae35c3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256221380 | pes2o/s2orc | v3-fos-license | Practical Recommendations for a Selection of Inhaled Corticosteroids in COPD: A Composite ICO Chart
The use of inhaled corticosteroids (ICS) added to maintenance bronchodilator treatment in patients with chronic obstructive pulmonary disease (COPD) is controversial. While some patients achieve clinical benefits, such as fewer exacerbations and improved symptoms, others do not, and some experience undesired side effects, such as pneumonia. Thus, we reviewed the evidence related to predictors of the response to ICS therapy in patients with COPD. The first-priority clinical markers when considering the efficacy of ICS are type 2 inflammatory biomarkers, followed by a history of suspected asthma and recurrent exacerbations. It is also necessary to consider any potential infection risk associated with ICS, and several risk factors for pneumonia when using ICS have been clarified in recent years. In this article, based on the evidence supporting the selection of ICS for COPD, we propose a composite ICS in COPD (ICO) chart for use in clinical practice. The chart divides the type 2 biomarkers into three ranges and provides recommendations (recommend, consider, or against) by combining the history of suspected asthma, history of exacerbations, and risk of infection.
Introduction
Chronic obstructive pulmonary disease (COPD) is a preventable and treatable disease; however, it presents a growing social and economic burden worldwide in terms of both disease prevalence and mortality [1]. The goals of COPD management include relieving symptoms, improving quality of life (QOL), maintaining or improving exercise tolerance and physical activity, preventing exacerbations and disease progression, and reducing premature mortality [2,3]. Both pharmacologic therapies and nonpharmacologic treatments, such as smoking cessation and pulmonary rehabilitation, are important in achieving management goals. Bronchodilator therapy with long-acting muscarinic antagonists (LAMA), long-acting beta-agonists (LABA), or combinations of both, is considered by the various guidelines as the main pharmacotherapy of COPD [2][3][4][5][6]. Long-acting bronchodilators can reduce exacerbation and improve lung function, exercise capacity, symptoms, and QOL in patients with COPD [7]. On the other hand, many patients have residual symptoms and repeated exacerbations despite optimal bronchodilator therapy.
The addition of inhaled corticosteroids (ICS) to regular bronchodilator treatment in patients with COPD has long been debated. ICS recommendations have changed over time in the Global Initiative for Chronic Obstructive Lung Disease (GOLD) reports and in guidelines for different countries. The GOLD 2023 report recommends the use of ICS as step-up pharmacologic therapy for COPD patients with frequent exacerbations despite regular treatment with bronchodilators and evidence of eosinophilic inflammation (blood eosinophil count of >300 cells/µL) [2]. Treatment with ICS added to bronchodilators has been reported to reduce exacerbations and improve symptoms in patients with uncontrolled COPD [8][9][10][11][12][13][14]. For COPD patients who have frequent exacerbations even with dual bronchodilators, recent studies have demonstrated that the addition of ICS not only improves exacerbations and symptoms but also reduces mortality [15,16]. Subgroup analyses of many clinical trials have shown that type 2 inflammation and concomitant asthma are useful indicators to predict ICS-treatment response [15,17,18]. On the other hand, there is concern about the risk of increased respiratory infections as a side effect of ICS. In recent years, several risk factors for pneumonia during ICS use have been clarified [19][20][21]. Despite the GOLD recommendations, evidence from real-world studies suggests that ICS is being over-prescribed in COPD, irrespective of disease presentation and underlying inflammation [22][23][24].
Therefore, a combined evaluation of patient-specific predictors of response and risk factors, such as (1) type 2 inflammatory biomarkers, (2) history of suspected asthma, (3) history of exacerbations, and (4) risk of infection, is required to select ICS more safely and effectively in the management of COPD. In this review, we propose a composite ICS in COPD (ICO) chart that can be practically applied, based on the evidence for ICS selection in COPD.
Type 2 Inflammatory Biomarker in COPD
COPD has heterogeneous patterns in the inflammatory process. Representative inflammatory cells of the airway in COPD are neutrophils, which reflect type 1 inflammation. There are also phenotypes of COPD in which eosinophilic inflammation of the airway is predominant under circumstances such as exacerbation or asthma overlap. In short, eosinophilic inflammation in the disease is a promising therapeutic target because sputum eosinophilia is a predictor of clinical outcomes [25]. Eosinophilic inflammation in the airway is thought to reflect type 2 inflammation caused by T helper 2 (Th2) lymphocytes from the adaptive immune system (allergic eosinophilic airway inflammation) and group 2 innate lymphoid cells (ILC2) from the innate system (non-allergic eosinophilic airway inflammation) [26]. However, it is not easy for clinicians to measure the inflammatory status of the airways using a sputum examination. Instead, in a clinical setting, type 2 inflammatory biomarkers, such as blood eosinophil counts and fractional exhaled nitric oxide (FeNO), are measured as surrogate markers of eosinophilic inflammation of the airway because these biomarkers are easily accessible and useful indicators to predict the exacerbation risk and treatment response [27][28][29].
Relationship between Type 2 Biomarkers and Clinical Outcomes of COPD
Adaptive or innate immune system dysregulation overproduces type 2 cytokines, such as interleukin (IL)-5, IL-4/13, granulocyte-macrophage colony-stimulating factor (GM-CSF), IL-33, and thymic stromal lymphopoietin (TSLP) [30,31]. As a result, the elevation of type 2 biomarkers, such as eosinophils and FeNO, persists. Both type 2 biomarkers have been reported to predict symptom burden, pulmonary function decline, and exacerbation risk in COPD [32]. Moreover, they have increasingly been highlighted by recent evidence because they seem to be very promising tools to identify which patients with COPD are most likely to benefit from ICS [27]. Hereafter, the relationship of blood eosinophils and FeNO to the ICS response in clinical outcomes is described.
ICS Effect on Symptom Burden
Although ICS is effective for symptom relief in COPD at higher blood eosinophil counts (≥310 cells/µL), it is not effective at lower counts (<90 cells/µL) [17]. Additionally, high FeNO levels (≥25 ppb) in patients with COPD could be better predictors of the ICS/LABA effect on symptomatic relief compared with low FeNO levels [33]. In the Destress study, while the patients with COPD with FeNO > 35 ppb had improved symptoms in response to ICS, those with FeNO < 20 ppb did not improve [29]. Moreover, when the patients with COPD were divided into three groups according to FeNO levels, the low group (<20 ppb) had few responders, while the intermediate (20-35 ppb) and high groups (≥35 ppb) had more responders, in that order. This was also the case when the patients with COPD were divided into three groups according to blood eosinophil counts (Figure 1).
Figure 1. Response rate to ICS therapy in patients with COPD stratified by type 2 biomarkers. Abbreviations: B-Eos, blood eosinophil counts; FeNO, fractional exhaled nitric oxide.
ICS Effect on Pulmonary Function Decline
Patients with COPD with higher blood eosinophil counts (≥220 cells/µL) showed a stronger bronchodilator effect of ICS compared to those with lower blood eosinophil counts. In particular, patients with COPD with blood eosinophil counts >270 cells/µL showed clinically important treatment differences in lung function (FEV 1 ≥ 50 mL) [18]. In addition, ICS users with COPD with higher blood eosinophil levels (≥2%) showed a slower FEV 1 decline [34]. Moreover, Kerkhof et al. showed that patients with COPD with high blood eosinophil counts (≥350 cells/µL) and at least one instance of exacerbation had a significantly greater FEV 1 decline if they were not treated with ICS. This suggests that ICS is an important strategy for preventing the rapid loss of lung function, which is caused by eosinophilic exacerbations in patients with COPD [35]. In contrast, a higher FeNO value (>35 ppb) is a good predictor of increased pulmonary function (FEV 1 ) by ICS, while poor bronchodilator responsiveness after ICS use is predictable by a lower FeNO value (<20 ppb) [28,29].
ICS Effect on Exacerbation Risk
ICS is possibly beneficial for reducing exacerbations in patients with COPD with elevated blood eosinophil counts (>150 cells/µL) [15]. During severe exacerbations of COPD, ICS effectiveness is associated with the absolute number of blood eosinophils (≥200 cells/µL) [36]. At a low eosinophil count (<90 cells/µL), the moderate/severe exacerbation risk with once-daily single-inhaler triple therapy (SITT; ICS/LABA/LAMA) was not reduced compared with that with LAMA/LABA (95% confidence interval: 0.88 [0.74, 1.04]), while the exacerbation rate ratio for triple therapy was significantly suppressed at high blood eosinophil counts (≥290 cells/µL) [17]. Patients with COPD with blood eosinophil counts >150 cells/µL are more likely to benefit, in terms of exacerbation risk, from triple therapy [15]. Therefore, high blood eosinophil levels (>300 cells/µL) can be a predictor for the exacerbation risk or better response to ICS in COPD [15,18,[37][38][39][40][41]. Based on this recent evidence, GOLD also recommends thresholds of blood eosinophils as a guide for ICS treatment in patients with COPD according to the exacerbation pattern [2]. Eosinophil counts ≥100 cells/µL, accompanied by a high exacerbation risk despite treatment with LAMA/LABA, appear to be a useful index for the proper use of triple therapy, including ICS [42]. However, ICS is not recommended for patients with COPD with blood eosinophil counts <100 cells/µL. Persistently high FeNO levels (≥20 ppb) seem to be a valuable indicator of an acute exacerbation in patients with stable COPD [43]. However, it remains uncertain whether FeNO could be a biomarker used to detect ICS responders for exacerbation risk.
ICS Effect on Mortality
The benefits for COPD mortality with type 2 biomarker-targeted ICS treatment remain unclear. Only one study to date examined this point. The ETHOS Trial showed that the benefit of ICS/LABA/LAMA versus LABA/LAMA in reducing mortality generally increased with blood eosinophil count [44]. Future studies will be necessary to clarify this issue.
Future Directions
Future directions include novel type 2 biomarkers and genetic studies. Emerging type 2 biomarkers, such as eosinophil cationic protein and eosinophil-derived neurotoxin (EDN), might be useful for guiding ICS treatment in patients with COPD. Although there are no reports on the effectiveness of these biomarkers in discriminating ICS treatment response in COPD patients, a previous study reported that EDN was significantly higher in asthma-COPD overlap (ACO) than in asthma or COPD [32]. Several genome-wide association studies in COPD patients investigated potential genetic predictors of interindividual responses to ICS. In Chinese COPD patients, the single nucleotide polymorphism (SNP) rs37973 may be linked to decreased ICS efficacy [45]. Another study revealed that the SNP rs111720447 was associated with lung function decline in COPD patients receiving ICS [46]. Although clinical implementation remains distant, these novel biomarkers and genetic studies are an area of great promise.
Brief Summary
The blood eosinophil count and FeNO could become surrogate markers for eosinophilic airway inflammation and are easily accessible in COPD. Moreover, they can predict the ICS response to symptom burden, pulmonary function decline, exacerbation risks, and death in COPD. These biomarkers could be a useful indicator to identify which patients with COPD would most likely benefit from ICS. Therefore, type 2 biomarker-targeted ICS therapy could contribute to the progress of medical efficiency in COPD.
History of Suspected Asthma
ICS is considered the mainstay of treatment for patients with asthma [47]. However, no studies have examined the response to ICS in patients having COPD with a history of suspected asthma. Some reports have studied the proportion of patients having COPD with a history of suspected asthma and the association between medical history and type 2 inflammation. The coexistence rate of asthma in patients with COPD varies according to the definition and population. However, a previous systematic review reported a coexistence rate of 27% [48]. In this review, most of the articles included the history of asthma in the definition of asthma-related complications. Annangi et al. studied approximately 3.11 million Americans with COPD who were aged ≥40 years. They found that 14.6% of these patients had a history of asthma [49]. They also reported that 35.8% of the patients having COPD with a history of asthma had elevated blood eosinophil counts (≥300 cells/µL), and 84.4% had elevated FeNO levels (≥25 ppb). Thus, many patients having COPD with a history of asthma are believed to have type 2 airway inflammation. In contrast, in the same report, approximately 35.6% of the patients having COPD without a history of asthma had elevated blood eosinophil counts, and 15.2% had elevated FeNO levels. In other words, not all patients having COPD with type 2 airway inflammation had a history of asthma. Thus, this study suggests that a history of previously diagnosed asthma is an important finding that indicates comorbid asthma with type 2 airway inflammation.
The Japanese Respiratory Society published diagnostic criteria for ACO in 2018 [50]. These criteria use subjective information, such as variability of symptoms and a history of asthma before the age of 40 years, as well as objective information, such as FeNO level, blood eosinophil count, and airway reversibility, to arrive at a diagnosis. These criteria are used to distinguish between the pathophysiologies of asthma and COPD while making a diagnosis of ACO. In a multicenter prospective cohort study of approximately 400 patients, the prevalence of ACO among patients with COPD was reported to be 25.5% based on these criteria [51]. In this study, 27.3% of the patients with ACO had a history of asthma before the age of 40, and 85.7% of the patients were reported to have variable or paroxysmal respiratory symptoms. In addition, 68.6% of the patients with ACO had elevated FeNO levels (≥35 ppb), and 76.6% of the patients had allergic rhinitis or airway reversibility, elevated blood eosinophil counts (≥300 cells/µL), or high IgE levels. Type 2 inflammation is likely to be present in many patients with ACO who meet these diagnostic criteria. Therefore, a history of suspected asthma is thought to be a predictive biomarker of type 2 airway inflammation, which may be expected to respond to ICS.
In this section, we have described how variable or paroxysmal respiratory symptoms and a history of asthma before the age of 40 years are associated with the presence of type 2 inflammation. As discussed in a previous section, the presence of type 2 inflammation is associated with responsiveness to ICS in patients with COPD. Therefore, a history of suspected asthma constitutes a useful guide for the use of ICS in combination with the evaluation of blood eosinophil count and FeNO.
Importance of Reducing COPD Exacerbations
Exacerbations in COPD are defined as an acute worsening of a patient's condition that involves respiratory symptoms and necessitates a change in regular medication [52,53]. Exacerbation is associated with declines in lung function [54,55] and quality of life [56], and with poor prognosis in COPD patients [57]. A previous exacerbation in the past 12 months was the strongest risk factor for further exacerbation in COPD patients (odds ratio, 4.30; 95% confidence interval [CI], 3.58 to 5.17) [58]. This result has also been reported in many other studies [59]. Similarly, previous exacerbations increase subsequent severe exacerbations and mortality [60]. A recent study revealed that 36% of patients with no exacerbations at baseline will experience an exacerbation within the next three years [61]. Moreover, the importance of a single exacerbation is highlighted by the fact that even one moderate or severe exacerbation is a significant risk factor for all-cause mortality and re-exacerbation [62]. Therefore, it is important to reduce COPD exacerbations and keep them at zero. ICSs are expected to be useful for this purpose.
Usefulness of ICS in Reducing Exacerbations
In 2000, the ISOLDE trial revealed that the exacerbation rate of COPD was reduced by ICS (fluticasone propionate) compared to placebo [63]. After this trial, similar results were reported, suggesting that ICS significantly suppresses exacerbations in some COPD patients with or without an exacerbation history [64][65][66]. Furthermore, ICS has been found to be more reliable when used in combination with a long-acting β-agonist (LABA) compared with LABA alone for COPD patients with a history of at least one previous exacerbation [8][9][10][11][12][13][14][67]. Therefore, some have recommended that ICS be added to LABA for COPD patients with a high exacerbation risk, such as those with frequent exacerbations [68].
Conversely, long-acting muscarinic antagonists (LAMA) have also been reported to reduce exacerbations in COPD patients regardless of exacerbation history [69], and in those with at least one exacerbation per year [70]. However, the INSPIRE study showed no difference in the frequency of exacerbations between LAMA and ICS/LABA in COPD patients with an exacerbation history [71]. These findings suggest that if LAMA is used to treat COPD, ICS may not be necessary to prevent subsequent exacerbations.
Moreover, the WISDOM study showed that withdrawal of ICS did not increase exacerbation risk (hazard ratio, 1.06; 95% CI, 0.94 to 1.19) in COPD patients who received triple therapy (defined as treatment with LAMA, LABA, and ICS), with a history of at least one exacerbation in the past year [72]. Similar results were shown in moderate-to-severe COPD patients with no exacerbation history [73] and low exacerbation risk (FEV1 >50% of predicted and less than two exacerbations in the past year) [74]. In contrast, LAMA/LABA was reported to be more effective at preventing exacerbations than ICS/LABA in COPD patients with a history of at least one exacerbation in the previous year (the FLAME study) [75], and a meta-analysis that included FLAME and other studies showed the same result (LAMA/LABA vs. ICS/LABA, hazard ratio 0.82; 95% CI, 0.70 to 0.96) [76]. However, the WISDOM study showed that both a high blood eosinophil count (≥300 cells/µL) [40] and frequent exacerbations (two or more per year) are risk factors for exacerbation when ICS is withdrawn [41]. Thus, ICS should be considered when asthma coexists and/or exacerbations are frequent (two or more per year).
Benefits and Problems of Triple Therapy with LAMA, LABA, and ICS
The effect of triple therapy with a single inhaler has recently been reported. First, the TRIBUTE trial showed that a single-inhaler triple therapy (beclomethasone/formoterol/glycopyrronium) decreased moderate-to-severe exacerbations compared with LAMA/LABA (glycopyrronium/indacaterol) (rate ratio, 0.848; 95% CI, 0.723-0.995) in COPD patients with FEV1 < 50% of predicted, moderate or severe exacerbation history, and without current asthma [77]. Second, the IMPACT trial, a large study, compared triple therapy with a single inhaler, LAMA/LABA, and ICS/LABA using the same ICS (fluticasone furoate), LABA (vilanterol), and LAMA (umeclidinium). This trial included COPD patients with FEV1 < 50% of the predicted value and a history of at least one moderate or severe exacerbation in the previous year, but those with asthma were explicitly excluded. Triple therapy resulted in a lower exacerbation rate than LAMA/LABA (rate ratio 0.75; 95% CI, 0.70 to 0.81) or ICS/LABA (rate ratio 0.85; 95% CI, 0.80 to 0.90) [16]. In addition to these studies, the ETHOS trial comparing triple therapy with LAMA/LABA or ICS/LABA in COPD patients with an exacerbation history also showed similar results, with triple therapy having a greater ability to reduce exacerbations than LAMA/LABA (rate ratio, 0.76; 95% CI, 0.69 to 0.83), using the same ICS (budesonide), LABA (formoterol), and LAMA (glycopyrrolate) [15].
However, some problems have been pointed out in these studies showing the effects of triple therapy [78]. These studies excluded patients with current asthma but allowed past asthma. The proportion of patients who had been using ICS before study entry was approximately 60% in TRIBUTE, 70% in IMPACT, and 80% in ETHOS. Therefore, it has been pointed out that some patients in the LAMA/LABA groups may have experienced an increased frequency of exacerbations because ICS, a necessary therapy for COPD with asthma-like features, was discontinued at study entry. Another article also reported that blood eosinophil counts were associated with the reduction in exacerbation rate in the IMPACT trial [17].
Conversely, in the subgroup analysis of the IMPACT study, the previous single moderate exacerbation group did not show a significant difference between triple therapy and LAMA (rate ratio, 0.92; 95% CI, 0.79 to 1.06); however, the frequent moderate exacerbation group and the severe exacerbation group, which included patients who required hospitalization, showed significant effects (frequent group: rate ratio, 0.80; 95% CI, 0.72 to 0.90; severe group: rate ratio, 0.81; 95% CI, 0.70 to 0.93) [79]. Additionally, in real-world clinical practice, Suissa et al. showed that the superiority of triple therapy over LAMA/LABA in preventing COPD exacerbation is exhibited at a blood eosinophil count >6% (hazard ratio, 0.83; 95% CI, 0.46 to 0.94) or with frequent exacerbations (hazard ratio, 0.83; 95% CI, 0.70 to 0.98) [80]. The same investigators also recently reported that triple therapy has a higher mortality rate than LAMA/LABA in patients with no prior asthma diagnosis or none/one exacerbation in the previous year, but not in those with a prior asthma diagnosis and two or more exacerbations [81].
In summary, an exacerbation history during the previous year alone is not considered sufficient evidence for adding ICS to LAMA/LABA in patients with COPD. However, when blood eosinophils are high and exacerbations are frequent, there is evidence for adding ICS to prevent further exacerbations. For patients with COPD and a history of frequent exacerbations but blood eosinophil counts <300 cells/µL, the decision remains controversial owing to a lack of evidence.
Risk Factors of Respiratory Infections with ICS Treatment for Patients with COPD
While ICS brings benefits to patients with COPD, such as a reduction in the frequency of exacerbations mainly caused by infectious mechanisms, some concerns have been reported — especially, paradoxically, an increased risk of other respiratory infections. Infection-induced exacerbation of COPD significantly worsens patients' prognosis and quality of life. Older age, lower body mass index (BMI), more severe airflow limitation, and use of high-dose ICS are generally associated with an increased risk of developing (and exacerbating) respiratory tract infections and pneumonia. Given these concerns, it is important to stratify risk by patient and make individual and judicious decisions for ICS indications. According to a previous study [82], ICS was associated with a dose-dependent increased risk of acquiring Haemophilus influenzae (H. influenzae), and the authors advised that high-dose ICS should be used with caution. H. influenzae is known to contribute to daily symptoms, exacerbations, and disease progression. Patients from whom H. influenzae was isolated also had lower BMI, lower FEV1, and more hospitalizations for previous exacerbations. Similarly, ICS dose-dependently increases the risk of Pseudomonas aeruginosa (P. aeruginosa) colonization [83]. P. aeruginosa was also more prevalent in patients with lower BMI, lower FEV1, and a higher rate of previously hospitalized exacerbations. ICS has also been linked to non-tuberculous mycobacteriosis (NTM). Patients with COPD on current ICS therapy had about four times the odds of NTM compared to patients with COPD who had never received ICS. The risk of ICS for NTM was dose-dependent, and fluticasone had a higher odds ratio (OR) than budesonide [84,85]. Castellana et al. reported an increased tuberculosis risk with ICS in a meta-analysis of nonrandomized studies [86].
The mechanism by which these types of bacteria colonize is unclear; however, ICS can reportedly alter the innate and adaptive immune systems, increase the bacterial load, and change the microbial composition in the airway, especially in patients with lower sputum or blood eosinophils [87]. Further possible mechanisms include increased susceptibility to viral infection and chronic respiratory tract infection due to suppressed production of type-1 IFN and cathelicidin, an antimicrobial peptide [88]. ICS may also be involved in a deficiency of mucosal-associated invariant T (MAIT) cells. MAIT cells are a subset of innate-like T lymphocytes accounting for up to 10% of T cells in blood and airway tissue and play an important role in protective immunity against bacterial or fungal infections [89,90].
How Do We Measure the Risk of ICS for Infections in Patients with COPD?
Chronic bronchiectasis infection (CBI) (detection of the same causative bacterium three or more times in four consecutive valid sputum samples) in COPD patients is associated with the risk of developing pneumonia regardless of the blood eosinophil count (≥100 or <100 cells/µL). A blood eosinophil count <100 cells/µL was the sole risk factor for pneumonia with or without CBI. ICS also increased the risk of developing pneumonia in patients with CBI and a blood eosinophil count <100 cells/µL. On the other hand, the use of ICS in patients lacking these risk factors did not significantly increase the incidence of pneumonia [21]. This means that regular sputum cultures, reference to past sputum culture results, and confirmation by peripheral blood tests are important. In an analysis of the TORCH study on pneumonia risk in COPD patients receiving ICS, the risk of developing pneumonia was associated with advanced age (≥55 years), %FEV1 < 50% (namely, GOLD stage III or higher), exacerbation within one year, and lower BMI (<25) [19]. Gender differences were not detected in this study. Moreover, Crim et al. showed that current smoking, a previous history of pneumonia (within one year), lower BMI (<25), and more severe airflow limitation (%FEV1 < 50%) more than double the risk of developing pneumonia with ICS administration (fluticasone furoate + vilanterol vs. vilanterol alone) [20]. A tendency toward dose-dependent risk was observed, especially in male patients.
Based on the above, in general, elderly patients (especially males) with lower BMI and more severe airflow limitation (GOLD III or higher) have a high risk of respiratory tract infection (including CBI, NTM, and pneumonia) and exacerbation due to the use of ICS; therefore, ICS should be used with caution in these patients. Histories of recurrent hospitalizations for exacerbations may reflect these risks. ICS indications should be considered in patients with blood eosinophil counts >100 cells/µL and no continuous detection of pathogenic bacteria in past sputum tests. However, the infection risk of ICS is thought to increase in a dose-dependent manner; therefore, aimless continuous administration of high doses should be avoided.
Discussion
According to the evidence discussed above, some patients with COPD may benefit from the addition of ICS to their bronchodilator treatment, while others may not. As a result, each patient's risk/benefit ratio for starting ICS therapy must be carefully considered. Notably, the challenge is to determine what characteristics can be practically used to help identify patients with COPD who can benefit most from using ICS while running the lowest risk of unfavorable side effects. Based on a review of the current literature, type 2 inflammation biomarkers should be considered the highest priority as clinical markers of potential ICS benefits, as shown in Figure 2. The next highest priority is a history of suspected asthma, followed by a history of COPD exacerbation. Furthermore, it is necessary to consider the potential infection risk of ICS. Therefore, we propose the following composite ICO chart to be considered when adding ICS treatment in combination with one or two long-acting bronchodilators (Table 1). The chart divides the type 2 inflammation biomarkers into three ranges and offers recommendations (recommend, consider, or against) by combining the history of suspected asthma, history of exacerbations, and risk of infection. For the type 2 biomarker-high groups, such as patients with a blood eosinophil count of ≥300 cells/µL and/or FeNO ≥ 35 ppb, the current evidence is sufficient to make a firm recommendation regarding the use of ICS if there is a history of suspected asthma complications and/or frequent COPD exacerbations [15,17,18,41,44,78,80]. However, if such patients also have characteristics that place them at a high risk of infection, careful follow-up is needed to check for pneumonia development after ICS use. On the other hand, recommendations for the use of ICS are slightly lower for patients with high type 2 biomarkers who do not have a history of suspected asthma complications or frequent COPD exacerbations, and this downgrading is more pronounced when the patient is at risk of infection [28,29,81]. For the type 2 biomarker-low groups, such as patients with a blood eosinophil count of <100 cells/µL and/or FeNO < 20 ppb, the use of ICS should normally be avoided [15,17,28,29,44]. However, it may be considered when there is a history of suspected asthma complications and little concern about infection.
For the type 2 biomarker-intermediate groups, such as patients with a blood eosinophil count of 100-300 cells/µL and/or FeNO 20-35 ppb, the current evidence is insufficient to make a firm recommendation. This is because most analyses are conducted for the low and high type 2 biomarker groups, and few analyses focus on intermediate groups. For example, a post-hoc analysis of the KRONOS study is described below [91]. This analysis evaluated lung function and exacerbations in patients with moderate-to-very severe COPD who did not have airway reversibility and had a blood eosinophil count of <300 cells/µL. The results showed that triple therapy did not significantly improve trough FEV1 compared with LABA/LAMA, but significantly reduced the rate of moderate-to-severe exacerbations. While these findings are important, it was unclear how many patients with blood eosinophil counts below 100 cells/µL were included; consequently, the results of the analysis in the 100-300 cells/µL population remain ambiguous. However, the results of the subgroup analyses of the IMPACT and ETHOS studies showed that even in the range of blood eosinophil counts from 100 to 300 cells/µL, the exacerbation-suppression effect of the addition of ICS gradually increased [15,17]. Moreover, the analysis in the Destress study showed no ICS responders in the low type 2 biomarker group, while ICS responders were present in the intermediate group. Based on these results, we determined that the ICS recommendations could be expanded in the intermediate group compared to the low group and, thus, decided on the recommendation level accordingly.
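To make the chart's decision logic concrete, the sketch below encodes one plausible reading of the ICO recommendations as a small rule function: a type 2 biomarker tier (low/intermediate/high, from blood eosinophils and FeNO), a history of suspected asthma, frequent exacerbations, and infection risk map to "recommend", "consider", or "against". The exact cell-by-cell rules of Table 1 are not reproduced in the text, so the rule combinations below are our own illustrative approximation, not the published chart.

```python
def biomarker_tier(eos_per_ul: float, feno_ppb: float) -> str:
    """Classify the type 2 biomarker tier using the thresholds cited in the text."""
    if eos_per_ul >= 300 or feno_ppb >= 35:
        return "high"
    if eos_per_ul < 100 and feno_ppb < 20:
        return "low"
    return "intermediate"

def ico_recommendation(eos_per_ul: float, feno_ppb: float,
                       suspected_asthma: bool, frequent_exacerbations: bool,
                       infection_risk: bool) -> str:
    """Illustrative approximation of the ICO chart; not the published Table 1."""
    tier = biomarker_tier(eos_per_ul, feno_ppb)
    if tier == "high":
        if suspected_asthma or frequent_exacerbations:
            # firm recommendation; infection risk mandates close follow-up
            return "recommend (monitor for pneumonia)" if infection_risk else "recommend"
        return "consider (weigh infection risk)" if infection_risk else "consider"
    if tier == "low":
        # ICS normally avoided; asthma history with little infection concern is the exception
        return "consider" if (suspected_asthma and not infection_risk) else "against"
    # intermediate tier: evidence is thinner, lean on history and infection risk
    if suspected_asthma or frequent_exacerbations:
        return "consider (weigh infection risk)" if infection_risk else "consider"
    return "against"

# example: eosinophils 350 cells/µL, FeNO 40 ppb, suspected asthma, no infection risk
print(ico_recommendation(350, 40, True, False, False))  # -> "recommend"
```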
A limitation of the ICO chart is that it does not provide recommendations when multiple factors are in conflict. The more indicators a patient has, the more difficult they become to combine and interpret. In most patients, the combinations are limited to two or three factors; even so, it is often necessary to clarify which specific element of the medical history should be prioritized for each patient. In the AERIS study, an analysis of COPD exacerbation phenotypes using a Markov chain model showed that bacterial and eosinophilic exacerbations were more likely to be repeated in subsequent exacerbations within a patient [92]. A recommendation chart that combines all of the effects and risk factors presented here would be ideal, but such a solution would require an artificial intelligence-based analysis that includes multiple factors. Thus, further validation and revision studies of the ICO chart are required. Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-01-25T16:17:40.062Z | 2023-01-22T00:00:00.000 | {
"year": 2023,
"sha1": "feaf64416c0f23bc469a0e1feca1c6508dead619",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/13/2/213/pdf?version=1674382534",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a7a5ea453c67a981e6af00800265ca0d98a3f4f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11326054 | pes2o/s2orc | v3-fos-license | Translation and validation of Autism Diagnostic Interview-Revised (ADI-R) for autism diagnosis in Brazil
The landmark paper in autism came in 1943, when Kanner described a behavioral syndrome in eleven children1. Autism is characterized by three core behavioral manifestations: qualitative deficits in social interaction and communication; repetitive and stereotyped behavior patterns; and restricted interests2. In the 1980s, questionnaires and scales were created in an attempt to standardize the diagnosis and evaluation of children with autism. The Autism Diagnostic Interview-Revised (ADI-R)3 is one of the most detailed instruments. Today, it is considered the gold standard for the diagnosis of autism worldwide4,5. The ADI-R is one of the most frequently used instruments in research and publications in the autism field6,7. Its diagnostic properties and validity are well documented4. The ADI-R is a useful diagnostic tool in distinguishing between children with autism and children with receptive language disorders8. The ADI-R diagnostic classification remains relatively stable over time in prospective studies9, although it does not rank the syndrome into mild, moderate or severe forms10.
In this study, the authors translated the ADI-R into Brazilian Portuguese and validated it as a diagnostic instrument for autism in Brazil. The preliminary validation properties are described.
METHODS
A case-control study was done in a convenience sample of children from the Hospital de Clínicas de Porto Alegre (HCPA). The inclusion criteria were: 7-18 years of age, diagnosis through the DSM-IV criteria for patients with autism, and through the Wechsler Intelligence Scale for the group of patients with moderate intellectual disability and no autism. The patients with autism were diagnosed by one of the authors (R.S.R.). In the group of patients with intellectual disability, autism was ruled out by the Autism Screening Questionnaire11. The study excluded patients with sensorial or physical impairment, as well as patients with syndrome-associated diseases. The groups were paired by age.
Informed consent was obtained. This study was approved by the Ethics and Research Committee (no. 06-539).
Study tool
The ADI-R is a standard semi-structured interview applied to parents and/or caregivers of individuals with possible autism12.
The ADI-R produces a scoring algorithm that parallels the diagnostic criteria of ICD-10 (World Health Organization, 1992) and DSM-IV3,4.
It comprises 93 items, 42 of which contribute to the following four scores, with the respective cutoff values for diagnostic purposes12: Score A: 10; Score B: verbal 8, nonverbal 7; Score C: 3; and Score D: 1. The answers are transcribed into the interview protocol3,10,12.
The instrument provides only three diagnoses: patient with autism, autistic signs without the classic form of the disease, and patient without autism.
Statistical analysis
Cronbach's α reliability coefficient was calculated to evaluate the internal consistency. The interobserver consistency was evaluated using Kappa statistics, and, for this purpose, the possible answers to the questions were grouped into two groups: 0 and 1 (no symptoms and mild symptoms); 2 and 3 (moderate symptoms and severe symptoms). For discriminant validity, Student's t-test was utilized for the interview and Fisher's exact test for the interview items. The criterion validity was evaluated with sensitivity and specificity measurements, having the DSM-IV diagnostic criteria as the gold standard.
The significance level established for this study was p<0.05.
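As a hedged illustration of these reliability computations (not code from the study), the sketch below shows one way to obtain Cronbach's α from a subjects-by-items score matrix and Cohen's κ from two raters' dichotomized codes; the toy data and function names are our own.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency for a (subjects x items) score matrix."""
    k = scores.shape[1]                         # number of items
    item_var = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

def cohen_kappa(r1, r2) -> float:
    """Chance-corrected agreement between two raters' binary codes."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = (r1 == r2).mean()                       # observed agreement
    pe = (r1.mean() * r2.mean()                  # chance both code 1
          + (1 - r1.mean()) * (1 - r2.mean()))   # chance both code 0
    return (po - pe) / (1 - pe)

# Toy example: 6 subjects x 4 items scored 0-3, plus two raters'
# dichotomized codes (0 = scores 0-1, 1 = scores 2-3).
scores = np.array([[2, 3, 2, 2], [0, 1, 0, 0], [3, 2, 3, 3],
                   [1, 0, 1, 0], [2, 2, 3, 2], [0, 0, 1, 0]])
print(round(cronbach_alpha(scores), 3))
rater1 = [1, 0, 1, 0, 1, 0]
rater2 = [1, 0, 1, 1, 1, 0]
print(round(cohen_kappa(rater1, rater2), 3))
```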
LOGISTICS
First, the project was evaluated and authorized by Western Psychological Services (WPS), the publisher that owns the copyrights of the ADI-R. Then, the purchase of the kit for the instrument translation, the royalty payment and the training of interviewers were arranged. It is important to note that the ADI-R is a diagnostic scale belonging to WPS and that its use requires prior training in a center recognized by the copyright holder. Its use in Brazil is still restricted to the research field.
The ADI-R was translated as earlier proposed 13,14. Two investigators fluent in English made independent translations of the instrument into Portuguese. After that, a final version was produced, which was translated back into English by a translator specialized in translation and retro-translation processes, and later sent to the interview authors for analysis. Then, the final version of the interview was applied to parents and/or caregivers.
The interview was made by one of the other three investigators, who had been previously trained and was blinded to the patient's diagnosis. They filled out the interview protocol and handed it to two of the investigators without the initial part of the protocol containing the identification and information that could affect the study blindness. Then, these two investigators, based on the behavior descriptions, assigned the scoring algorithm to each patient. Half of the interviews had the scoring algorithm filled out by both investigators for the interobserver consistency evaluation.
RESULTS
The study assessed 20 children and adolescents with autism and 20 with intellectual disability without autism. Both groups presented ages ranging from 8 to 16 years, with a mean age of 11, as well as a predominance of boys, especially in the group of patients with autism (80.0%; 52.9%). The mean interview duration was similar in both groups, with 2.69 hours of duration in the patients with autism and 2.77 hours in patients without autism, totaling over 100 hours of interviews. Total intelligence quotient (IQ) in the group of patients with moderate intellectual disability ranged from 40 to 55, with a median value of 51. The Cronbach's α reliability coefficient was 0.967 (95%CI 0.952-0.982). In the evaluation of criterion validity, the ADI-R correctly identified all autism cases diagnosed through the DSM-IV criteria, with a sensitivity of 100% (95%CI 83.2-100.0) and a specificity of 100% (95%CI 80.5-100.0). In the discriminant validity evaluation, the mean scores obtained in each of the three diagnostic domains of the instrument were significantly higher in the group of patients with autism, showing that this instrument can discriminate between both groups (Table 1). The discriminant validity of each of the 42 scoring items of the interview was also evaluated. A significantly higher number of scores of 2 in the group with autism and of scores of 0 and 1 in the group without autism was obtained, showing that the items are valid as they discriminate patients with autism from those with intellectual disability without autism (Table 2). Questions 2, 86, 87, 9 and 10, although part of the scoring algorithm, were not considered in the discriminant validity evaluation because they are not able to be discriminant.
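The exact 95% confidence limits quoted above for a 100% sensitivity follow from the Clopper-Pearson method; the sketch below reproduces that calculation for x = n = 20 correctly identified cases and is only an illustration of the method, not the study's software.

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) binomial confidence interval for x/n."""
    lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

# All 20 true autism cases identified (x = n = 20):
lo, hi = clopper_pearson(20, 20)
print(f"sensitivity 100% (95%CI {lo:.1%}-{hi:.1%})")  # 83.2%-100.0%
```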
In the group of children with intellectual disability without autism, 5 (29.4%) of them reached scoring values in 1 or 2 of the three autism diagnostic domains.
The external consistency was satisfactory in all scoring algorithm items, except for item 37, which did not allow the application of Kappa statistics (Table 3). The median Kappa value was 0.824.
DISCUSSION
In Brazil, in the last years, three autism assessment instruments have been translated into Portuguese and validated: the Scale for the Assessment of Autistic Behavior 15, the Childhood Autism Rating Scale 16 and the Autism Screening Questionnaire 11.
The ADI-R is considered one of the "gold standard" methods for autism diagnosis by the international literature 4,5 and is the most frequently used clinical instrument in studies and publications. Given its importance, it has been translated in several countries, such as Germany, Iceland, Japan, China, Italy and Spain, and is in the translation process in others, such as Holland, Norway, Hungary, Sweden, Korea and France.
Our study adopted the model of the translation and transcultural validation process described by Sperber 14, which is one of the most frequently utilized in the literature 6,17.
The predominance of male patients in the group with autism was not unexpected. According to some authors, the mean reported proportion varies from 3.5 to 4 boys to one girl, sometimes reaching 6 or more 18,19.
The ADI-R Brazilian Adaptation obtained from our sample showed initial validation properties. These results were similar to those of the original interview and its revised edition, as was the methodology utilized in the instrument validation, with interview application to two groups of patients 3,10.
Reliability, also known as consistency, refers to the reproducibility of a measurement, and it can be evaluated in several ways. This study evaluated the interobserver consistency, which is the measurement of agreement between two or more observers assessing the same individuals 20,21. It was measured in the original interview and in its revision by means of video-recorded interviews.
In this study, the investigators made the evaluation on written records, a more practical and cheaper method. According to Menezes and Nascimento 20, these records immediately show whether the appraisers know the adopted criteria and interpret the records the same way. Kappa statistics is the method most frequently utilized to assess the external consistency; it is a measurement of interobserver agreement, corrected for chance agreement. Kappa values over 0.61 are considered substantial, according to Menezes and Nascimento 20. In our study, high Kappa values were obtained in 40 of the 42 assessed items. In item 37, which assessed the presence of pronoun reversal, it was not possible to apply the statistical test, as it is an orthogonal matrix and not all cells were filled out. Only item 69 presented a non-satisfactory value of external consistency (k=0.471). This item assesses the stereotyped use of objects and interest in parts of objects. The authors did not find problems in the retro-translation process of this specific item and suggest that the training on the concepts and codes involved in the question might have been insufficient. The internal consistency of the ADI-R Brazilian Adaptation was fully satisfactory. According to Streiner and Norman 22, a minimum value of 0.70 for Cronbach's α coefficient is recommended to ensure that the items consistently assess the same construct.
This study assessed not only the discriminant validity of the instrument, but also each instrument item, as performed in the validation studies of the original ADI 10 and its subsequent revision 3, with satisfactory results, similar to those of the abovementioned studies. Thus, the instrument was shown to be able to differentiate patients with autism from patients without autism, even though the latter presented manifestations of the autism spectrum, a phenomenon already known in clinical practice and one that makes the differential diagnosis easier between patients with autism and patients with intellectual disability without autism 3,10,23.
Our study detected that approximately 30% of the patients with moderate intellectual disability fulfilled the autism diagnosis criteria in at least one of the three diagnostic domains, a percentage that is very similar to that found by Le Couteur et al. 10 when creating the ADI.
This study also showed that the Brazilian adaptation of the ADI-R presents criterion validity, as it identified all children with autism diagnosed through the DSM-IV criteria. Such criteria were chosen because they are the most frequently utilized in clinical practice and considered accurate, as suggested by Blacker and Endicott 24.
The purpose of this study was to translate the ADI-R into Brazilian Portuguese and validate it. Although the initial findings have been positive, caution should be exercised. This study was conducted with a small sample, using a case-control design that is known to overestimate the psychometric properties of behavioral scales. Finally, the sample was obtained in a restricted area of the country. Regional, social and cultural variations should be assessed in a broader way.
Table 1. Assessment of discriminant validity of the Autism Diagnostic Interview-Revised Brazilian Adaptation.
Table 2. Assessment of discriminant validity for each item of the Autism Diagnostic Interview-Revised Brazilian Adaptation.
Table 3. Interobserver consistency of the Autism Diagnostic Interview-Revised Brazilian Adaptation.
*Impossible to calculate k. | 2016-11-08T18:56:27.780Z | 2012-03-01T00:00:00.000 | {
"year": 2012,
"sha1": "1b1a8e780febeefc235c6140deb9256cee503d1f",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/anp/a/JZW6bkTYBV77ZsQLKVNxLnt/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1b1a8e780febeefc235c6140deb9256cee503d1f",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
254622810 | pes2o/s2orc | v3-fos-license | A Technology-Based Intervention to Support Older Adults in Living Independently: Protocol for a Cross-National Feasibility Pilot
Innovative technologies can support older adults with or without disabilities, allowing them to live independently in their environment whilst monitoring their health and safety conditions and thereby reducing the significant burden on caregivers, whether family or professional. This paper discusses the design of a study protocol to evaluate the acceptance, usability, and efficiency of the SAVE system, a custom-developed information technology-based elderly care system. The study will involve older adults (aged 65 or older), professional and lay caregivers, and care service decision-makers, representing all types of users in a care service scenario. The SAVE environmental sensors, smartwatches, smartphones, and Web service application will be evaluated in people's homes situated in Romania, Italy, and Hungary with a total of 165 users of the three types (carers, elderly, and admin). The study design follows the mixed method approach, using standardized tests and questionnaires with open-ended questions and logging all the data for evaluation. The trial is registered on the platform ClinicalTrials.gov with the registration number NCT05626556. This protocol not only guides the participating countries but can also serve as a feasibility protocol suitable for evaluating the usability and quality of similar systems.
Introduction
In 2050, there will be 1.5 billion people aged 65 or older worldwide [1]. The central policy response to this global trend is to foster the concept of Active and Healthy Ageing (AHA) [2,3]. Despite the influence of genetics, the health of the older population is associated mainly with the physical and social environments of their daily life, such as homes, neighborhoods, and communities, and with their individual characteristics, such as gender, ethnicity, and socioeconomic status [4,5].
Innovative technologies can support older adults, with or without disabilities, by enabling an independent life in their environment and monitoring their health status and safety, therefore reducing the significant burden of care for caregivers, whether family and/or professional [6][7][8]. Using smart artifacts can also be an effective way to overcome social isolation by connecting with the outside world, obtaining social support, engaging in activities of interest and strengthening self-confidence [9][10][11][12]. In addition, innovative technologies can improve the safety of older people through emergency solutions (e.g., fall detection, alert function, help call opportunities) by supporting therapy, observing physical parameters, and controlling the environmental ones [13][14][15][16][17]. Khosravi et al. [18] reported that key issues in aging care are the risk of falls, chronic diseases, dementia, social isolation, depression, poor well-being, and insufficient medication management. They raise the point that assistive technologies can improve the quality of life, especially among older adults, by tackling each of the issues previously outlined.
Although there is limited understanding of the digital landscape for aging care [19], several Smart Home technologies (e.g., web platforms, applications, sensors) that support older adults' quality of life, security, and growing sense of loneliness have been developed in recent years. The success of these technologies still depends on users' perception of personal privacy [20], so data security and confidentiality are a priority for the acceptance of a Smart Home system. To address these challenges, several projects have been developed. For example, after a 12-week Smart Home personalized technology program, participants' quality of life increased significantly [21], the Fik@ room web platform allowed social connection among older adults in a secure digital environment [22], and the SMART4MD mobile application [23] facilitated a sense of coherence for older persons with cognitive impairment.
The COVID-19 pandemic has had an enormous impact on the extent of social connectedness and the quality of relationships among individuals, and it will probably prompt a general rethinking of new methods and (digital) models of care [19]. Different approaches for involving older people in various activities and reducing social isolation have been proposed in the last two years [24]. The Pharaon Project [25] addressed the importance of alternative approaches to face-to-face methods using phone and online interviews, online questionnaires, and virtual and semi-virtual co-creation seminars. Moreover, telemedicine and e-Health services have proven essential to support care for older adults and family caregivers. For example, Internet-based technology has helped people at early stages of cognitive impairment through electronic reminders, daily activities, and cognitive stimulation therapy and games [26].
Based on this scenario, eight partners from three countries (Italy, Hungary, and Romania) joined their efforts in the European-funded SAVE (SAfety of elderly people and Vicinity Ensuring) project (EU Grant Agreement AAL-CP-2018-5-149). The SAVE system aims to offer technology-based support to older adults to stay in their familiar surroundings for as long as possible while feeling safe and optimally cared for. The SAVE technology has been designed and developed according to the User Centered Design (UCD) approach, which involves multiple interactions with users to understand their needs and preferences and involve them in the design process for creating a helpful and appreciated technological product [27]. Secondarily, it supports informal caregivers, such as relatives, in providing optimal care for their loved ones while maintaining their professional and private life.
Study Objectives
The general objective of this study is to test the usability and efficiency of the SAVE prototype by systematically and objectively identifying the strengths and weaknesses of the proposed solution for enabling older adults to keep their independent and active lives in their homes and maintain their social relationships for as long as possible.
Study Design
To thoroughly evaluate the SAVE prototype, a pre-post interventional study involving the use of the SAVE platform for 21 consecutive days was designed. It uses a mixed-methods approach, collecting both qualitative (open questions) and quantitative (standardized tests) data at three different measurements (T0, T1, T2) during the period of use of the system. The data collection card will therefore be divided into three different sections, which correspond to the three different moments of detection:
1. Prior to the start of the experimentation (T0);
2. Ten days into the intervention study, i.e., at the midterm of the trial (T1);
3. After 21 days, i.e., at the end of the trial (T2).
The users' data will be logged and continuously stored over the 21-day test period. The research will be managed by qualified personnel, and the researchers will supervise the tests and the interactions between the users and the system.
Study Setting
The study involves three different European institutions in charge of performing the feasibility assessment: the Istituto di Ricerca e Cura a Carattere Scientifico, Istituto Nazionale Ricovero e Cura Anziani (IRCCS INRCA), located in the city of Ancona in Italy; the National Institute for Medical Rehabilitation (NIMR) in the city of Budapest in Hungary; and the Transilvania University of Brașov (UNITBV), in collaboration with the Romanian Direction of Social Assistance (DAS), located in the Brașov City Council and the Timișoara City Council, and with the "Hand in Hand" Association from Brașov in Romania. This multicentric setting will allow for assessing the SAVE system in different social and cultural contexts. Overall, the cross-national approach will ensure a broad acceptance of the developed technology and prepare the possibility of its dissemination and the transferability of the research methods adopted by the SAVE study at the European level and well beyond the initial life cycle of the project.
Participants
The study will involve primary, secondary and tertiary users. According to the European Project SAVE activities, the primary users are expected to be 80 older adults: 30 participants will be enrolled in Romania, 25 in Hungary and the remaining 25 in Italy. Their caregivers (30 in Romania, 25 in Hungary and 25 in Italy) will be enrolled as secondary users, and at least 5 tertiary users per site will be enrolled.
Inclusion and exclusion criteria for each end user group are described in Table 1.
Recruitment
Potential participants will be selected based on free participation and effective adherence to the inclusion criteria. The staff involved in the study will contact their networks (associations, recreational centers, trade unions, etc.) by phone and/or email, which will provide the names of potential participants. The latter will be contacted by phone to verify the inclusion criteria and describe the objectives, methods, procedures and timing of the study. Once compliance with the inclusion and exclusion criteria of the study is verified and informed consent is obtained, the local team member will proceed with the baseline evaluation.
Trial status
User recruitment was started in (enter date here), which was completed by (enter method here). The study is currently underway, with data collection beginning in September 2022 and expected to end by March 2023.
The Intervention
The SAVE system will be implemented in end users' homes, which is achieved by installing all the kits in the appropriate rooms and by offering relevant training for using all the different devices to the end-users.
Flood sensors will be installed in the bathroom (preferably next to the washing machine) and kitchen (preferably next to the dishwasher), presence sensors in the living room and bedroom, and the contact sensor will be installed at the entrance door. The sensors are powered by button batteries with very low energy consumption; the producer of the sensors advertises an autonomy of 2 years with a standard CR2032 button battery. Only the sensors' hub and the SAVE Sensor Adapter are powered from a socket (the SAVE Sensor Adapter is powered by the USB connector on the sensors' hub). The other devices can be used as long as they are charged (the smartwatch and smartphone). Thus, the end user's responsibility is to charge their smartwatch and not to unplug the sensors' hub (and, in Hungary, the smartphone from the charger). It is planned to place the central unit in the bedroom, where the user can easily charge the smartwatch in the evening. For the best user experience, the required software (for the sensors kit and the smartwatch) will be installed on the users' smartphones in Romania and Italy. On the Hungarian side, users are provided with a separate mobile phone to receive data and transmit it to the cloud system.
The sensor set and the smartwatch software are only downloaded to the user's mobile phone at the user's request. The sensors have been located in places where they do not affect the daily activity of the users, these being easy to move according to preferences and small enough to blend into the background. Thus, at the end of the installation, users will have the Aqara Home System in their homes (5 sensors and a sensor hub), a Samsung smartwatch, and a SAVE Sensors Adapter, all of which are connected to a router with unlimited internet access. After installing the system, there will be a brief instruction training session on the use and purpose of these devices and the services they offer. This will ensure that the end users are encouraged to use them, performing some tests with them:
• Heart rate testing in association with the frequency indicated on the smartwatch;
• Testing the emergency system by pressing the power button 3 times;
• Testing of the flood and door sensors by visualizing the values received by the SAVE cloud app through the SAVE Web App;
• Calling a friend/relative from their smartwatch.
To test the pilot solution, the following step-by-step guide will be followed:
1. Kit creation (from the SAVE Admin Centre): a unique kit key will be generated;
2. Adding devices to the kit (1 x SAVE Sensor Adapter and 1 x Galaxy Watch 3);
3. Checking the internet connection (of the phone/home router in Romania/Italy and of the provided mobile phone in Hungary);
4. Verify that the user has a Gmail account; for those users who do not have an account, an account is created;
5. Filling in the user profile, including the Kit Key;
7. Installation of the Aqara Home app from the Play Store;
8. Installation and placement of the Aqara hub and sensors by following the instructions in the Aqara Home app (Figure 2);
9. Test the functioning of all sensors through the Aqara Home app;
10. Adding a remote control for the Sony projector from the Aqara Home app on the Aqara hub;
11. Adding the SAVE automations in the Aqara Home app by following the instructions in the Aqara Home app and the SAVE installation manual (…uration app, Figure 6);
19. Setting the smartwatch face to the SAVE watch face (Figure 7);
20. Configuration of the SOS and fall detection features from the Galaxy Wear app;
21. Test the emergency system. Furthermore, test the data sent from the watch together with the data sent from home (Figure 8);
22. Creation of caregivers' SAVE Web accounts;
23. Linking the caregivers' accounts to the end-user account via the SAVE Web app.
On the Hungarian side, steps 7 and 10-21 are completed just before the system is deployed.
The Outcomes
In accordance with the general objectives of this study, primary and secondary outcomes are described in the following subsections.
The Primary Outcomes
The primary outcomes are:
• Usability, which is understood as "the extent to which a product can be used by certain users to achieve certain goals with effectiveness, efficiency, and satisfaction in a given context of use". This result will be measured through the SUS scale [28] and the UEQ questionnaire [29].
• Learnability of the system, which is seen as a component of usability, is the degree to which an interface is intuitive and the user can immediately understand how to interact with the system. This result will be measured through the SUS scale [28].
• Acceptance, seen as the degree to which users come to accept and use a piece of technology. This result will be measured through the SUS scale [28] and the UEQ questionnaire [29].
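To make the SUS-based measurement concrete, a minimal scoring sketch follows; it implements the standard published SUS scoring rule (odd items contribute response - 1, even items contribute 5 - response, and the total is scaled by 2.5) and is not code from the SAVE project.

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items answered on a 1-5 Likert scale.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are scaled by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# One respondent's hypothetical answers to items 1..10:
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```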
The Secondary Outcomes
The following outcomes will be the secondary effects that we will measure as part of our intervention:
• Well-being is "a state of complete physical, mental and social well-being, and not simply the absence of disease". This result will be measured through the WHO-5 Index [30] and the EQ-5D-5L questionnaire [31].
• Self-efficacy is the set of beliefs we have about our ability to complete a certain task. This result will be measured through the short version of the GSE self-efficacy scale [32].
Data Collection
In line with the design of the study, three different data collection tools were developed. As reported in Table 2, the data collection sheet for primary users consists of four dimension sections, which include a series of scales, as follows:
(A) Health and Wellness Condition:
• Mini-Mental State Examination (MMSE) [33] is a neuropsychological test for the evaluation of disorders of intellectual efficiency and the presence of cognitive impairment. The test consists of 30 questions, which refer to various cognitive areas: orientation in time and space, recording of words, attention and calculation, re-enactment, language, and constructive praxis. The total score is between a minimum of 0 and a maximum of 30 points. A score of 26 to 30 is an indication of cognitive normality. The score is adjusted with the coefficient for age and schooling [34].
• Functional Ambulation Category (FAC) [35] is a scale that evaluates the ability to achieve autonomy in walking. The ambulatory capacity is evaluated with a score ranging from 0 to 5, where 0 indicates total dependence and 5 indicates complete independence. From the score obtained, the amount of support that the patient requires when walking, and on what kind of surfaces he or she is able to walk, can be deduced.
• The Barthel Index [36] is an objective and standardized tool for measuring functional status. The individual is scored in a number of areas depending upon the independence of performance. Total scores range from 0 (complete dependence) to 100 (complete independence).
• SF-12v2™ Health Survey [37] is a widely used instrument and is a 12-element subset of the SF-36v2™. It is a short and reliable measure of the general state of health. It is useful in health surveys of large populations and has been widely used as a screening tool.
• Five Well-Being (WHO-5) Index [30] is a short self-reported measure of current mental well-being.
• EuroQol-5 Dimension-5 Level (EQ-5D-5L) [31] is a self-report survey that measures the quality of life across 5 domains: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. Each dimension is scored on a 5-level severity ranking that ranges from 'No problems' through 'Extreme problems'.
(B) Self-efficacy:
• General Self-Efficacy Scale (GSE) [32]. Its abbreviated form of ten entries is a reliable and valid tool for assessing general self-efficacy.
(C) Usability and Acceptance:
• System Usability Scale (SUS) [28] is a reliable tool for measuring usability. It consists of a 10-item questionnaire with five response options for respondents, from 'Strongly agree' to 'Strongly disagree'. It allows for evaluating various products and services, including hardware, software, mobile devices, websites, and applications. It is easy to administer to participants, can be used on small sample sizes with reliable results, and can effectively differentiate between usable and unusable systems.
• User Experience Questionnaire (UEQ-S) [29]. This short version of the questionnaire measures the subjective impression of users towards the user experience of products. The UEQ is a semantic differential with 26 items. Both classical usability aspects (efficiency, perspicuity, dependability) and user experience aspects (originality, stimulation) are measured.
• Quebec User Evaluation of Satisfaction with assistive Technology (QUEST-Version 2.0) [38] is a 12-item outcome measure that assesses user satisfaction with two components, Device and Services.
(D) Privacy and Stigmatization:
• Open questions on the usefulness of the system and the reliability of the system, plus some free comments about satisfaction with the system.
The data collection sheet for tertiary users consists of a sequence of 5-point Likert scale questions on the following dimensions (Table 2):
• Impact of the system on the reduction in time spent in caregiving activities;
• Impact of the system on the reduction in the cost of caregiving activities;
• Impact of the system on the reduction in the workload.
Each instrument will be administered, if possible, in a face-to-face session in the presence of a trained interviewer, or otherwise remotely, and the interviewer will report the answers on a paper version of the data collection card.
During use, the following log data will be continuously recorded: usage time/used services (time/number), number of interactions, participation in social events (web conference, phone call, video call), tracked daily activities, number of errors, number of aids required for the tasks, recognized user commands, used input modality, captured touch screen data/activity, number of services used in the given period, and number of tasks solved by using the SAVE system (for example, did the user receive an answer to a question, did the user manage to call/inform the caregiver, was the user able to perform recreational activities with the help of the device, did the user manage to send the alarm to the caregiver, and was the caregiver informed when the user fell, could not sleep, showed decreased activity, deteriorated, etc.).
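One possible record layout for these continuously logged events is sketched below; every field name here is our own illustration and is not taken from the SAVE codebase.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class SaveLogEvent:
    """One illustrative usage-log record for the 21-day trial period."""
    user_id: str                          # pseudonymized participant code
    timestamp: datetime                   # when the interaction occurred
    service: str                          # e.g., "video_call", "sos_alarm"
    input_modality: str                   # e.g., "touch", "voice"
    duration_s: Optional[float] = None    # usage time, if applicable
    error_count: int = 0                  # errors during the interaction
    aids_required: int = 0                # number of helps needed for the task
    task_solved: Optional[bool] = None    # did the SAVE system solve the task?

event = SaveLogEvent(user_id="RO-017", timestamp=datetime(2022, 10, 3, 9, 41),
                     service="sos_alarm", input_modality="touch",
                     duration_s=12.0, task_solved=True)
```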
Data Analysis
The proposed study is a small-scale study, carried out as part of an innovation and research project. To calculate the sample size, the G*Power software (latest ver. 3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) [39] was used. The G*Power software supports sample size and power calculation for various statistical methods, including the dependent t-test (differences between two dependent means in matched pairs). A power calculation based on a dependent t-test shows that at least 34 end users per group are needed (alpha set at 0.05, beta at 0.2, and a medium effect size of 0.5). The study will conduct only a partial evaluation of the effectiveness of the services with respect to specific dimensions of the quality of life of the subjects involved. To investigate the primary and secondary objectives, the data will be analyzed using qualitative, quantitative, and mixed analysis methods.
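The G*Power estimate can be cross-checked with other tools; for example, the sketch below uses the statsmodels power module under the stated assumptions (alpha 0.05, power 0.8, medium effect size 0.5), treating the paired design as a one-sample t-test on the difference scores.

```python
from math import ceil
from statsmodels.stats.power import TTestPower

# Paired-samples design: power analysis on the within-pair differences,
# which reduces to a one-sample t-test on the difference scores.
analysis = TTestPower()
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                         alternative='two-sided')
print(ceil(n))  # -> 34, matching the G*Power estimate in the protocol
```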
The processing of questionnaire data will be performed through specific software, such as SPSS, Mplus (for the execution of the confirmatory factor analysis), and AMOS (for the execution of the exploratory factor analysis), depending on the needs. The quality of the data and its internal consistency will be evaluated using Cronbach's alpha and other specific tests. The questionnaires will first be verified manually to check the completeness of the compilation and any apparent inconsistencies. Later, automated routines will be used to detect outliers and dubious records. In such cases, the necessary data cleaning will be carried out.
An Analysis Plan will be defined, on the basis of which the analyses themselves will then be conducted. The first step of the analysis will be exploratory in nature. The descriptive analysis of the sample will be conducted through the classic techniques of uni- and bi-variate statistical analysis. Significant differences between outcomes and exposures will be compared using the chi-square test, Fisher's exact test, the t-test, or ANOVA. The characteristics of the subjects will be compared with those of the non-respondents to verify any distortions due to non-responses during data collection.
Discussion
In this study, we have described a feasibility protocol to evaluate the usability and acceptability of the SAVE system for enabling older adults to keep their independent and active lives in their homes and to maintain their social relationships for as long as possible. This is closely related to the concept of "aging in place", which represents the desire expressed by older people to continue living within the community, with some level of independence, rather than in residential care [40]. Housing and neighborhood satisfaction have been used as good indicators of environmental and overall well-being for older adults [41]. Technological devices should provide a sense of confidence and security for older adults, which would enable aging in place. In that sense, the SAVE intervention has the potential to assist older people to live in their own homes and facilitate their engagement in everyday tasks, helping to maintain their independence and increasing their control over the world around them. Several studies [42][43][44][45][46][47] have claimed that older people want to remain independent for as long as possible. The desire to stay independent stemmed from their wish to not be perceived as a burden to family, friends, or society. Therefore, the smart home solutions would be designed to help older people carry out everyday activities and lead healthier and more fulfilled lives by improving their physical safety and social communication. According to Lee and Kim [48], older adults' active participation in social activities and establishing their sense of belonging as social members have important effects on successful aging in place. The SAVE system would effectively overcome social isolation among older people by connecting to the outside world, gaining social support, engaging in activities of interest, and boosting self-confidence.
The study must be performed the same way in all participating countries to obtain comparable data from the three countries. However, there are also limitations in the applicability of the intervention. According to Quan-Haase et al. [49], age-related factors beyond income, education, and gender affect older adults, hindering their ability to take advantage of digital technology. Low digital literacy could hinder older people's use of digital media, perhaps because they did not grow up with them and had to learn and adapt in order to use them later in life [50]. A limited number of participants with a specific level of cognitive impairment is also a limitation, as is the fact that the recruitment of the subjects and the beginning of the trial will take place while the COVID-19 pandemic is still an ongoing threat. Despite this, in emergency situations, technology has proved to greatly help in mitigating the consequences of physical distancing and helping older people maintain relationships with their relatives and friends [51].
Ethics and Dissemination
The study was approved by the Ethical Committees of the three countries involved. Any protocol modifications will be notified to the same Ethics Committees and to other interested parties, such as researchers and participants. For the latter, changes will be communicated by email. The principles of the Declaration of Helsinki and Good Clinical Practice guidelines will be adhered to. Participants in this study will provide written informed consent.
Risk Management, Mitigation, and Possible Limitations for the Users
We do not expect any adverse effects on users' health related to testing the technology platform. The hardware devices used are commercial devices (Samsung smartwatches and AQARA-XIAOMI environmental sensors) and are CE-certified. Researchers will provide clear and detailed information on the terms of use of the technology platform and the services offered during the study. The proposed services aim to support older people in terms of personal and residential security by allowing them an independent and safe lifestyle, even when the first signs of disorientation could lead the individual and their family members to progressive isolation and social exclusion. They do not replace (in whole or in part) the support from professional services; rather, the proposed technologies support the well-being of the participants.
The users who take part in the study will not incur any direct or indirect costs related to the use of the technology platform. All the devices necessary for the trial (smartwatches, smartphones, sensor kits and internet connection) will be made available to users free of charge.
Even if we do not expect any adverse effects, the presence and use of technological devices (sensors and smartwatches) could be a source of discomfort, of anxiety about making mistakes or quickly forgetting the instructions, and of perceived stigma. Low digital literacy and the poor usability of the devices could hinder older people's use of technologies.
The SAVE technology has been designed and developed according to the UCD approach to counteract these possible limitations by matching the needs and capabilities of users, thereby improving the user experience [27]. The pre-post interventional study design will allow us to better analyze changing behaviors regarding usability, acceptance, and stigma.
The use of a specific manufacturer's ecosystem (the Galaxy Gear app) probably limits the range of users significantly, but the operating systems for smartwatches are still very fragmented, and the budget resources of the project are limited. Therefore, for our research, we chose a product of a large company with a significant market share. We aimed for a product with LTE connectivity (based on eSIM), which makes it independent of a smartphone and allows a better monitoring process. At the time of selection, there were not many products with LTE connectivity, and the eSIM technology was not yet supported by all service providers for all smartwatch producers. The architecture we adopted lends itself to the easy inclusion of devices from other manufacturers by adding a software element (an adapter), which is in our expansion plans.
Data Management
Personal data collected during the trial will be handled and stored following the General Data Protection Regulation (GDPR) 2018. The use of the study data will be controlled by the principal investigator. All data and documentation related to the trial will be stored in accordance with applicable regulatory requirements, and access to data will be restricted to authorized trial personnel. The trial will be run by principal investigators and co-investigators of the three countries involved.
Dissemination
The dissemination program will involve peer-reviewed scientific journals and national and international conferences. The results will be disseminated to all participants.
Conclusions
The SAVE system was developed using environmental sensors, smartwatches, smartphones, and a Web application to organize and provide improved service levels to older people, their caregivers, and care organizers. The main functions of the SAVE technology are fall detection, activity level monitoring, heart rate monitoring, door opening detection, water leakage detection, localization, enhanced touch and vocal communication, text messaging, and emergency calls. Data are stored in a cloud-based system that is securely accessible to older care recipients, their caregivers, and care managers.
The cross-national study was designed to evaluate the acceptance, usability, and efficiency of the SAVE system among 165 users of the three types. The exclusion and inclusion criteria for each user type were defined. The planned test environment is the elderly user's home and the caregivers' and care organizers' Web platforms through their applications. A convenience recruiting method will be applied with the help of retirement clubs and referrals through contacts. Training on using the smart devices takes place during the first face-to-face meeting at system installation. The task of the older adult is to wear the watch during the day, put it on the charger at night, and test the alarm and message-receiving functions. Caregivers and care organizers are responsible for reacting to any dispatched notification, regularly opening the SAVE interface, reviewing the information on the interface, and using the information found as much as possible. The use of a control group is not planned.
The study design follows the mixed method approach where standardized tests and questionnaires, open-ended questions, and log data are collected from the users and the cloud database. The described study design could serve as an inspiring study design framework for similar usability and effectiveness studies. | 2022-12-14T16:11:14.328Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "0540b04748a5a39fc6b23d38c1fda8125f6767bd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/24/16604/pdf?version=1670665440",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59e05d7544812ebc3629807695080ecd4c6237a6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216245080 | pes2o/s2orc | v3-fos-license | Intrusion Detection Based on Spatiotemporal Characterization of Cyberattacks
As attack techniques become more sophisticated, detecting new and advanced cyberattacks with traditional intrusion detection techniques based on signature and anomaly is becoming challenging. In signature-based detection, not only do attackers bypass known signatures, but they also exploit unknown vulnerabilities. As the number of new signatures is increasing daily, it is also challenging to scale the detection mechanisms without impacting performance. For anomaly detection, defining normal behaviors is challenging due to today's complex applications with dynamic features. These complex and dynamic characteristics cause many false positives with a simple outlier detection. In this work, we detect intrusion behaviors by looking at a number of computing elements together in time and space, whereas most existing intrusion detection systems focus on a single element. In order to define the spatiotemporal intrusion patterns, we look at fundamental behaviors of cyberattacks that should appear in any possible attack. We define these individual behaviors as basic cyberattack actions (BCAs) and develop a stochastic graph model to represent combinations of BCAs in time and space. In addition, we build an intrusion detection system to demonstrate the detection mechanism based on the graph model. We inject numerous known and possible unknown attacks comprising BCAs and show how the system detects these attacks and how to locate the root causes based on the spatiotemporal patterns. The characterization of attacks in spatiotemporal patterns with expected essential behaviors would present a new effective approach to intrusion detection.
Introduction
Cyberattacks are becoming increasingly more sophisticated. For example, zero-day attacks exploit undisclosed vulnerabilities, and advanced persistent threat (APT) attacks consist of multiple phases of attack over a long period of time. With traditional intrusion detection systems based on signature and anomaly, it is challenging to detect these sophisticated attacks.
Signature-based intrusion detection systems (S-IDS) depend on known signatures to detect cyberattacks. There are two issues with S-IDSs. First, new attacks cannot be detected because new signatures are only obtained through post-analysis of attack events [1,2]. Even variant attacks are hard to detect as attackers work around the known signatures. Second, as the number of signatures increases, it is challenging to scale the detection mechanisms without impacting performance [3].
Anomaly-based IDSs (A-IDS) detect cyberattacks by comparing the system behavior with predefined normal behavior [2,4]. A-IDS can be effective for unknown attacks as it does not rely on known signatures. The major issue with A-IDS is the large number of false positives generated [5]. In simple applications, it is easy to define a normal behavior of the system. However, it is challenging to define a normal behavior in today's complex applications running in an N-tier architecture with dynamic features [6]. These applications obfuscate normal behaviors and thus create many false positives with anomaly detection based on a simple outlier detection.
Machine learning (ML) techniques are being actively employed in anomaly detection as an alternative to these issues. ML-based anomaly detection trains historical datasets to define normal behaviors and detect outlier events as attacks [5,7]. Although processing of massive datasets would help to set a flexible threshold of detection, there are still issues with false positives due to overfitting and unoptimized hyperparameters [8,9].
Most existing S-IDSs and A-IDSs, including ML-based A-IDSs, focus on a single computing or network element, whereas we focus on multiple elements. We use the terms element or host interchangeably to denote the computing or network element. Focusing on multiple elements in time and space rather than on a single element would provide further evidence of an attack. Furthermore, this approach helps to locate root causes by tracking the spatiotemporal behaviors.
In order to define the spatiotemporal attack patterns, we develop fundamental and essential behaviors that should appear in any attack. We carefully study intrusion datasets as well as attack classifications, including CAPEC [10], and characterize the system and network features caused by intrusions. We define these behaviors of a single element as Basic Cyberattack Actions (BCAs).
BCAs allow the detection of novel and complex cyberattacks as long as the attacks show any combination of BCA patterns. Future attacks could also consist of many combinations of BCAs. We propose to look at a number of computing and network elements together in space (i.e., networked groups of hosts) and time rather than relying on an individual BCA of a single element. Combinations of BCAs describe the spatiotemporal characterization of an attack and would provide further insight into the attack. We also develop a stochastic graph model to represent the combination of BCAs.
In order to demonstrate our detection idea based on the spatiotemporal patterns, we develop an IDS in our production datacenter. We inject known and possible unknown attacks comprising BCAs and illustrate how the system detects these attacks and locates the root causes by tracking BCAs in time and space. The performance evaluation with extensive attacks comprising complex BCAs is not the focus of this paper and will be addressed in the forthcoming paper.
The remainder of this paper is organized as follows: We review related works in Section 2. Section 3 defines BCAs based on existing attack classifications. Section 4 defines a stochastic graph model to describe the behavior of BCA in time and space. In Section 5, we describe our BCA detection system. In Section 6, we evaluate our system with numerous attacks. Finally, the conclusions are presented in Section 7.
Related Work
S-IDSs detect signatures of known attacks. Kumar and Spafford [11] propose a pattern matching model for S-IDS based on Colored Petri Nets. Honeycomb [12] automatically generates attack signatures using a honeypot system and detects these signatures using pattern matching techniques. Josue et al. [13] propose a pattern matching algorithm to filter out the audit trail. Koral et al. [14] define a set of state transition signatures and detect an attack sequence of the transitions. Zhengbing et al. [15] employ data mining techniques to develop more accurate signatures. These systems use known signatures, and they are focused on improving the search and pattern matching speed. They do not consider unknown attacks without matching signatures.
A-IDSs define normal behaviors and detect outlier events as attacks. Although A-IDS is able to detect unknown attacks, it suffers from large numbers of false positives. Collaborative detection mechanisms are proposed to reduce false positives [16][17][18][19][20]. They aggregate and correlate a number of alerts generated by different IDSs. IDES [17] first proposes the IDS collaboration, and EMERALD [18] refines IDES. Cuppens and Miege [16] use an expert system to develop an aggregation and correlation module. Valdes and Skinner [19] employ a probability-based approach for similarity recognition. Yu et al. [20] develop a knowledge-based alert aggregation system. They collect a number of false alerts and process them based on correlation rules.
Numerous studies employ ML to identify legitimate behaviors. They define normal behavior patterns based on historical data from numerous system metrics. Bayesian networks, decision trees, and SVMs (Support Vector Machines) are widely used in intrusion detection systems based on ML techniques. Kruegel et al. [21] propose an event classification scheme based on Bayesian networks to mitigate false alarms. Bilge et al. [22,23] detect malicious domains by employing a passive DNS analysis based on a decision tree. Feng et al. [24], Kuang et al. [25], and Thaseen and Kumar [26] apply SVM for better performance in intrusion detection. There are numerous studies that employ deep learning, which belongs to ML. Khan et al. [27], Li et al. [28], Liu et al. [29] and Kim et al. [30] transform intrusion datasets into images and then detect attacks based on a convolutional neural network (CNN). Bontemps et al. [31] and Staudenmeyer and Omlin [32] suggest an IDS model based on a long short-term memory recurrent neural network (LSTM) using the KDD dataset [33]. There are further IDS studies that perform binary and multiclass classifications based on a recurrent neural network [30,[34][35][36].
In addition, Dokas et al. [37] and Hu and Panda [38] employ data mining techniques. Stephenson [39] combines forensics with intrusion detection and response. Ren and Jin [40] develop a framework for a real-time intrusion forensics system.
Although numerous studies on S-IDS and A-IDS have been addressed, most of the studies focus on a single element. Our focus is on the behaviors of multiple elements in time and space rather than those of a single element. As an existing study considering the concept of time and space, Chen et al. [41] identify spatiotemporal patterns of cyberattacks by analyzing victims' IP addresses collected by honeypots. The biggest difference from our work is that they define every packet arriving at honeypots as an attack and analyze the characteristics of attack traffic in order to predict cyberattacks, whereas our focus is on defining a novel method of detecting cyberattacks based on fundamental attack behaviors in time and space. They focus on the macroscopic characteristics of attack traffic and identify deterministic and stochastic patterns among a wide range of consecutive IP addresses. In addition, they only use IP addresses observed from the victim side, whereas we monitor not only the states of both attackers and victims but also their spatiotemporal relationships.
Basic Cyberattack Action (BCA)
In order to detect an attack by looking at a number of computing and network elements together, we carefully study existing attack classifications as well as intrusion datasets. We focus on the system and network characteristics caused by intrusions. We finally define BCAs, the fundamental behaviors of attacks. BCAs observed from multiple elements naturally lend themselves to being described in space and time.
CAPEC [10] organizes more than 500 attack patterns employed to exploit vulnerabilities. CAPEC contains a comprehensive list with detailed information about each pattern. By analyzing CAPEC, we find that all attack patterns can be described with 10 essential methods of attack (MA), as shown in Table 1. Every attack pattern in CAPEC consists of some combination of MAs. We define five types of BCAs associated with the relevant MAs. In this work, we do not include MA10 as it depends on the human trust behavior during an attack. For example, CAPEC-98 (phishing attacks) tricks people into offering access to their sensitive information. It deals with the human trust issue and does not manifest in a particular system behavior that can be attributed to particular BCAs. Table 1 also shows how each MA maps to BCAs. We analyze two types of intrusion datasets as well as CAPEC to find out the mapping. Table 1 lists possible attacks corresponding to the mapping. The first intrusion dataset is KDD, the most widely used dataset in intrusion detection. KDD classifies attacks into denial of service (DoS), remote-to-local (R2L), user-to-root (U2R) and probing for IDS evaluation in the 1998 DARPA project. Numerous attacks belonging to the four classifications have been injected for dataset generation. The other one is CSE-CIC-IDS 2018 [42], which has been actively used in recent intrusion studies. CSE-CIC-IDS 2018 was generated by injecting 6 types of attacks, such as brute force, DoS and botnet.
There are many proposed methods to detect MAs. In this work, we are mainly interested in BCAs, and we could use any of these methods for MA detection. Focusing on common and fundamental features of cyberattacks rather than the specific characteristics of each attack would become increasingly necessary to detect new and variant attacks.
• BCA-1. Sudden performance degradation
Known attacks such as … [48] and a single point of failure [46] belong to BCA-1. Detecting attacks based on a single MA may lead to many false positives. However, BCA detection that combines several essential attack behaviors would decrease false positives significantly.
• BCA-2. Iterative behavior
Many cyberattacks begin by obtaining access to a target element. The most common method to obtain access is the brute-force method of login trials with different passwords [43,46,48]. The resulting behavior is iterative access requests and corresponding responses. MA8 (Brute force) is based on a repetitive trial-and-error method. Known attacks such as login attempts [43] and authentication attacks [46,48] belong to BCA-2. This pattern manifests distinctively from common application requests and responses. Normal transactions in client-server systems do not exhibit this iterative behavior. Therefore, these iterative actions constitute an essential attack behavior of a computing element.
• BCA-3. Propagating behavior
Many attacks do not remain in a single target element. They tend to propagate to increase the number of infected hosts [43,50]. Attackers initially search for a vulnerable target. Once the target is infected by an attacker, the target becomes an attacker itself and starts propagating its search-and-infect tasks. This behavior is quite distinct from common application behavior. The resulting behavior is an increasing number of infected hosts as time increases, and such behavior translates to a spatiotemporal pattern of increasing infected elements. MA6 (Analysis) corresponds to the initial search, such as probing [47] and scanning [43]. Known attacks such as worms [43,48] and port scanning [10] belong to BCA-3.
• BCA-4. Sudden increase or decrease in ingress and egress traffic
In addition to performance degradation, the resulting behavior of attacks can be observed in either a sudden increase or a sudden decrease of both ingress and egress traffic at the same time [51][52][53][54]. Usually, performance degradation would decrease the egress traffic corresponding to the responses of a server, but the ingress traffic corresponding to the requests would remain the same. A decrease or increase in both ingress and egress traffic usually results from malicious operation in computing or network elements. DDoS attacks [43,48] and flooding attacks [49] belong to BCA-4. In addition, BCA-4 could occur in combination with BCA-1 because this type of attack can decrease server performance.
• BCA-5. Uncorrelated ingress and egress traffic In any server, we observe that the ingress traffic is highly correlated with the egress traffic: as the number of requests increases, we expect the number of responses to increase. This holds when the server is working within its desired operational range. As long as the server is capable of responding to all requests immediately, we expect the number of responses to closely track the number of requests. When the server is congested or malfunctioning, the ingress and egress traffic are not correlated. Many attacks manifest in this uncorrelated ingress and egress traffic behavior. For example, when attackers spoof their identities during an attack, they do not receive any responses while sending large numbers of requests [55]. Uncorrelated ingress and egress traffic thus describes an essential behavior of a computing element with a forged identity. In the existing works, masquerade [48] belongs to BCA-5. Figure 1 illustrates each BCA with the key behavioral features described above.
BCA Description and Composition
We now describe each BCA with its associated MAs and spatiotemporal patterns. We use a stochastic graph model to describe the behavior of each BCA in time and space, defined as follows. Table 2 shows the notation of the graph model.
Definition:
A stochastic graph G(t) represents the overall stochastic graph comprising all elements i at time t. Gi(t) represents the stochastic graph related only to element i, where i ∈ N (the set of all elements); thus G(t) = {Gi(t)}.
Vi(t) = {vi} is the state of element i: if the element detects MA3, for example, its state is MA3, and Null represents that no MA is detected and the element is operating normally. Ei(t) = {ei,j}, where i, j ∈ N, is the set of edges that represent the communication between nodes i and j. Λi(t) = {λi,j(t)} is the set of traffic volumes from node i to all nodes j, where λi,j(t) is the stochastic random variable associated with ei,j. Gi(t) does not contain any vertices not connected to element i. BCAs can now be modelled using the stochastic graph G as follows. Table 2. Notation of the graph model G.
Notation / Description
G(t): overall stochastic graph comprising multiple elements at time t
Gi(t): stochastic graph related to a single element i at time t
Vi(t): set of states of a single element i at time t
Ei(t): set of edges between a single element i and other elements at time t
Λi(t): set of traffic volumes from a single element i to other elements at time t
λi,j(t): stochastic random variable associated with the edge between elements i and j
• BCA-1. Sudden performance degradation
• BCA-2. Iterative behavior MA8 (Brute force) manifests in an iterative behavior. When host i repeats the same behavior, such as continuous login trials, it generates consistent traffic during the period of attack. Δ and W denote the time period between successive iterations and the window size for the traffic analysis, respectively. During a password attack, iterative access requests and responses between the attacker and the target server generate a consistent traffic volume. The password attack could target one or multiple servers; brute force attacks may target different victims, in which case the number of neighbors in the stochastic graph would increase in time. (A code sketch of the BCA-2, BCA-4 and BCA-5 conditions follows this list.)
• BCA-3. Propagating behavior In propagating attacks, an infected host i becomes the attacker and starts infecting other hosts; host i infects more and more hosts as time increases. Host i keeps scanning other hosts j to find vulnerable targets, and MA6 corresponds to this scanning behavior. The total traffic volume does not play a significant role here: traffic from i to all connected elements would usually increase, but this is not necessary for BCA-3 behavior.
• BCA-4. Sudden increase or decrease in ingress and egress traffic In DDoS attacks, the traffic volume of both attackers and targets suddenly increases exponentially. Host i under a DDoS attack satisfies dΛi(t)/dt > α: the traffic volume increases faster than the acceleration rate α. HTTP DoS attacks disrupt a web application server by depleting web resources, so the ingress and egress traffic of the server i suddenly decrease exponentially.
In this attack, dΛi(t)/dt < −α: the traffic volume decreases faster than the rate α. MA1 (Flooding) and MA2 (Protocol manipulation) are the essential methods for the DDoS and HTTP DoS attacks, respectively. Usually, multiple new hosts show up in a DDoS attack, but this is not strictly required.
• BCA-5. Uncorrelated ingress and egress traffic MA7 (Spoofing) belongs to BCA-5. In IP spoofing attacks, the attacker does not receive any responses while sending requests, which results in uncorrelated ingress and egress traffic.
When host i spoofs its identity, it satisfies Ri,j < γ, where Ri,j is the cross-correlation of the ingress and egress traffic of i and γ is a threshold coefficient with 0 ≤ γ ≤ 1. Host i is hidden from other elements due to its spoofed IP address; as host i is unknown to other elements during the attack, the number of its neighbors decreases.
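A minimal Python sketch of how the quantitative conditions above might be checked against a per-element snapshot Gi(t). The snapshot fields, the coefficient-of-variation test for BCA-2, and the default values standing in for W, α and γ are our illustrative assumptions rather than parameters prescribed by the paper:

import statistics

class GraphSnapshot:
    """Illustrative per-element snapshot G_i(t)."""
    def __init__(self, states, edges, ingress, egress):
        self.states = states    # V_i(t): set of detected MAs, e.g. {"MA8"}
        self.edges = edges      # E_i(t): set of neighbor ids
        self.ingress = ingress  # ingress volumes sampled over the window W
        self.egress = egress    # egress volumes sampled over the window W

def bca2_iterative(g, cv_max=0.1):
    """BCA-2: traffic stays consistent over W (low coefficient of variation)."""
    if len(g.egress) < 2 or statistics.mean(g.egress) == 0:
        return False
    cv = statistics.stdev(g.egress) / statistics.mean(g.egress)
    return "MA8" in g.states and cv < cv_max

def bca4_sudden_change(g, alpha=5.0):
    """BCA-4: total traffic volume changes faster than the rate alpha."""
    total = [i + e for i, e in zip(g.ingress, g.egress)]
    if len(total) < 2:
        return False
    slope = (total[-1] - total[0]) / (len(total) - 1)
    return bool({"MA1", "MA2"} & set(g.states)) and abs(slope) > alpha

def cross_correlation(x, y):
    """Plain correlation coefficient R_ij of two traffic series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def bca5_uncorrelated(g, gamma=0.5):
    """BCA-5: ingress and egress are uncorrelated, R_ij < gamma."""
    return "MA7" in g.states and cross_correlation(g.ingress, g.egress) < gamma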
BCA Detection System
Our system detects BCAs by monitoring spatiotemporal patterns according to the stochastic graph model. The spatiotemporal pattern describes the change of interactions among elements in time and space. There are many existing detection methods for MAs, and we can deploy any effective one of them. Periodically, we generate a graph Gi for element i; MAs are associated with Gi when they are detected for element i. We match Gi against the stochastic graph models of the BCAs to detect intrusions. We demonstrate the effectiveness of spatiotemporal patterns in detecting existing attacks as well as unknown attacks.
MA Detection
We apply existing mechanisms for MA detection in the host. Many of these mechanisms monitor system metrics and correlate them to detect a particular MA in a single computing or network element. We apply common MA detection mechanisms from the literature, as shown in Table 3, where Si denotes the system metrics used by the MA detection mechanisms. Our focus is not on the performance of particular MA detection mechanisms but on demonstrating the advantage of BCAs and their spatiotemporal patterns; any improvement in existing MA detection would improve our overall system. Again, MA detection is limited to a single element and tends to produce many false positives and false negatives.
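As a stand-in for "any existing MA detection mechanism", here is a minimal threshold detector over the system metrics Si. The thresholds for S4/S5 (500 kb/s) and S7 (more than 5 neighbors) echo values used in the experimental scenarios below; the S8 threshold and the metric dictionary format are our own illustrative assumptions, not a specification of Table 3:

def detect_mas(metrics, thresholds=None):
    """Map raw system metrics S_i to a set of detected MAs via thresholds.

    `metrics` is a dict such as {"S4": 620.0, "S7": 2}. Real deployments
    would substitute any effective MA detector for this sketch.
    """
    t = thresholds or {"S4": 500.0, "S5": 500.0, "S7": 5, "S8": 70.0}
    detected = set()
    if metrics.get("S4", 0) > t["S4"] or metrics.get("S5", 0) > t["S5"]:
        detected.add("MA1")  # Flooding: inbound/outbound traffic spike
    if metrics.get("S7", 0) > t["S7"]:
        detected.add("MA6")  # Analysis: unusually many new neighbors
    if metrics.get("S8", 0) > t["S8"]:
        detected.add("MA8")  # Brute force: abnormally high request rate
    return detected or {"Null"}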
BCA Detection
Each individual host i monitors any change in MAs, traffic volume, or the temporal and spatial relationships among elements. Periodically, its stochastic graph Gi is generated, and the spatiotemporal pattern of Gi is compared to the BCA models. When there is a match between Gi and any of the BCA models, we determine that there is an intrusion or cyberattack on host i and its associated elements.
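The per-host loop just described might look as follows; the polling period, the callable interfaces, and the alarm transport are illustrative assumptions, with the BCA predicates (such as those sketched earlier) passed in as a dictionary:

import time

def monitor(host, build_snapshot, bca_checks, send_alarm, period_s=10):
    """Periodically build G_i(t), match it against BCA models, raise alarms.

    `bca_checks` maps a BCA name to a predicate over a snapshot, and
    `send_alarm` ships G_i to the management server for combination.
    """
    while True:
        g = build_snapshot(host)  # collect V_i, E_i and the traffic window
        matches = [name for name, check in bca_checks.items() if check(g)]
        if matches:
            send_alarm(host, matches, g)
        time.sleep(period_s)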
Here is an example of BCA detection mechanism. Assume that host A generates GA(t) as shown in Figure 2. We then proceed to match with BCA graphs. Assume that t0 = t and t1 = t + ∆. BCA-3 matches GA(t1) in the example as shown in Figure 3.
Combination of BCA Detections
The stochastic graph G contains all elements with detected MAs. Each element carries out BCA detection by matching its own stochastic graph with the BCA graphs. We then consider all BCA-detected elements collectively: if any of these graphs are connected, meaning that there is a connecting edge between them, we assess the validity of the BCA detections reported by those elements. By considering multiple elements together, we reduce additional false positives by finding contradicting combinations of BCAs, and we further confirm the accuracy of the detection by examining multiple elements.
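One simple way to realize this collective check in code is to group BCA-reporting elements into connected components of G(t) and demote detections that no connected peer corroborates; the component traversal is standard, while the specific demotion policy is our simplification of the paper's reasoning:

from collections import defaultdict

def connected_components(edges):
    """Group an edge list of (i, j) pairs into connected components."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

def validate_detections(detections, edges):
    """Keep BCA detections corroborated by a connected, also-detecting peer.

    `detections` maps element id -> set of detected BCAs. An isolated
    single-element BCA-3 report, for example, is demoted as a probable
    false positive, because propagation implies multiple elements.
    """
    comps = connected_components(edges)
    confirmed = {}
    for elem, bcas in detections.items():
        comp = next((c for c in comps if elem in c), {elem})
        peers = [p for p in comp if p != elem and detections.get(p)]
        confirmed[elem] = bcas if peers else set()
    return confirmed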
Here are examples that illustrate the further reduction in false positives as well as the improved detection accuracy. Figure 4 shows a worm attack. Assume that hosts A-G detect BCA-3 at different times t1, t2 and t3. Each host generates Gi(t), where i ∈ {A, B, …, G}, according to BCA matching, as shown in Figure 4. Once a host is infected by the worm attack, it starts propagating the worm to other hosts continuously, so hosts A-G detect BCA-3 as the worm propagates in space and time. Now assume instead that only one host detects BCA-3 and the others do not detect any BCA patterns. There is no evidence of propagation, so we determine that this single BCA-3 detection must be a false positive. The combination of multiple Gi(t) thus helps us reduce false positives. On the other hand, if there are multiple connected elements detecting BCA-3, this confirms the propagating attack. Thus, G(t), comprising all Gi(t), gives an overall view of the elements and helps reduce false positives in many attack scenarios. Figure 5 shows another example of the advantage of having a more comprehensive G(t). Host A guesses B's password using the brute-force password attack: host A sends login requests continuously until it finds the correct password, while host B repeats the same behavior to authenticate the passwords. In the password attack, the iterative behavior of either host requires similar behavior from the other. If only A or B detects BCA-2, we cannot definitively determine whether it is a brute-force attack or a false positive; the combination of BCA-2 detected by both A and B increases the confidence in detecting the attack.
Root Cause Analysis
Another advantage of using BCA graphs is the ability to find the possible root cause and location of an attack. The BCA graphs contain the temporal and spatial relationships among elements, so it is possible to trace an attack pattern back to its originator. Figure 6 shows an example of locating the root cause. At t0, hosts A, B, C, D and E are running normally in a multi-tier application. When C detects BCA-1 due to performance degradation, EC(t1) − EC(t0) = {eF,C}: only eF,C shows up at t1, while the other edges appeared at t0. Host F is therefore the likely attacker disrupting host C by injection.
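The edge-difference rule EC(t1) − EC(t0) = {eF,C} reduces to a set subtraction; the sketch below, with host names matching the example in Figure 6, flags the peers whose edge to the victim appeared only when the BCA fired:

def new_edges(edges_t0, edges_t1):
    """E_i(t1) - E_i(t0): edges that appear only in the later snapshot."""
    return set(edges_t1) - set(edges_t0)

def probable_root_cause(victim, edges_t0, edges_t1):
    """Return peers whose edge to the victim appeared when the BCA fired."""
    fresh = new_edges(edges_t0, edges_t1)
    return {a if b == victim else b for a, b in fresh if victim in (a, b)}

if __name__ == "__main__":
    t0 = {("A", "C"), ("B", "C"), ("C", "D"), ("C", "E")}
    t1 = t0 | {("F", "C")}
    print(probable_root_cause("C", t0, t1))  # prints {'F'}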
Experimental Evaluation
We conduct several experiments in our datacenter with a controlled VM cluster. We evaluate our system's performance in detecting both known and unknown attacks, and compare it with systems relying only on existing MA detection. We demonstrate how BCAs reduce false positives in several scenarios, and show that the spatiotemporal characterization of attack patterns improves the accuracy and reliability of intrusion and cyberattack detection. More extensive performance evaluation is not the focus of this paper and will be addressed in a forthcoming paper.
Experimental Setup
We implement our system and deploy it in our production datacenter with a controlled VM cluster.
We run a small agent in each virtual machine (VM). Each agent runs MA detection and BCA detection using its own stochastic graph: the agent creates its stochastic graph periodically and matches it against the BCA graphs. When it finds a matching BCA, the agent sends an alarm along with its graph to the management server. The management server compiles graphs from all elements to generate and update the overall stochastic graph G, and then examines all connected graphs Gi to determine the attacks and possible root causes. The infrastructure for the experiments consists of the following components: • Physical servers: Fedora 21, QEMU 1.6.2 hypervisor • VM: Ubuntu 14.04 and Fedora 22, 1 vCPU, 1024 MB RAM • Cloud web application: Rubbos application [59] To ensure the reliability of the experimental evaluation, we deploy the Rubbos web application running in an N-tier architecture. During attacks, the web servers and database servers keep processing service requests from an average of 100 clients per second.
Known Attacks
As shown in Table 4, we inject four known attacks, selected by analyzing intrusion datasets as well as CAPEC as described in Section 3. We use released attack scripts as well as penetration software for attack injection. Both Scenarios 1 and 4 detect multiple BCAs, including BCA-4: Scenario 1 detects a sudden decrease in traffic, while Scenario 4 detects a sudden increase.
Scenario 1
The Slowloris attack is a DoS attack targeting the application layer. The attacker modifies HTTP headers with wrong termination characters and then sends the packets to a web application server. This attack disrupts the web server through a large number of incomplete open HTTP connections, by which the attacker consumes all connections on the server.
Existing HTTP DoS detection systems manually configure the web application parameters or set appropriate firewall rules to drop the suspicious packets [60]. Our system monitors the application metrics S8 (requests/s), S9 (responses/s) and S10 (ratio of responses to requests). Here we have four hosts A, B, C and D, as shown in Figure 7a. We deploy A (client), B (web server) and C (DB server) running a Rubbos application at t0, and inject the Slowloris attack into D using a released script [61]. Host D sends the modified HTTP requests (200 packets/s) to B.
• BCA detection S10 indicates the performance of B in processing HTTP requests. When B is operating normally, the value of S10 fluctuates between 1 and 4, as shown in Figure 7b. S10 suddenly decreases when the attack is injected, because S9 (responses/s) suddenly decreases due to the performance degradation, as shown in Figure 7c. Host B detects MA2 (Protocol manipulation) based on S9 and S10, and detects BCA-1 and BCA-4 based on GB(t1), as shown in Table 5: B satisfies the EB(t1) and ΛB(t1) conditions as well as VB(t1) for BCA-1 and BCA-4. Existing systems would also detect this attack by monitoring only MA2 using S8, S9 and S10.
Although our focus is not on detection methods for MAs, we analyze false positives in detecting MA2 for validation. Because our attack scenario has 100 clients in the cloud application, we deploy 50, 100 and 200 clients without attacks. Table 6 shows the false positive rate (FPR) for S8, S9 and S10, respectively.
Scenario 2
We inject a password attack that tries to guess a victim's password. In our experiment, we have two hosts, A and B, as shown in Figure 8a. We inject the attack into A using Metasploit, a penetration software suite [62]. Metasploit is open-source and allows us to inject a variety of attacks with our custom modules. Host A sends login requests more than 4000 times over 20 s, guessing B's password. Host B is a MySQL server.
• BCA detection Our system monitors S8 (requests/s) and S9 (responses/s). From A's perspective, S8 shows the number of password-guessing trials per second; S9 is the number of responses from the MySQL server, B. Hosts A and B detect MA8 (Brute force) according to the large values of S8 and S9, as shown in Figure 8b. Our system detects BCA-2 on both hosts from GA(t1) and GB(t2): hosts A and B have a new neighbor (EA(t1) and EB(t2)) and generate very consistent traffic (ΛA(t1) and ΛB(t2)) during the attack. GA(t1) and GB(t2) show that A and B satisfy all conditions for BCA-2, as shown in Table 7. Without MA detection (VA(t1) and VB(t2)), it could be either BCA-1 or BCA-2. Table 7. BCA-2 detection on hosts A and B at t1 and t2 in Scenario 2.
Associating MAs improves the detection capability of our system. BCA-2 requires similar BCA-2 behavior from connected elements, so the combination of BCA-2 detected by both A and B increases the confidence of correct detection. Existing systems that analyze elements independently could introduce many false positives.
For MA detection using S8 and S9, no false positives were found. This is because the number of requests to the database server is less than 70 per second for 50, 100 and 150 clients in the normal state. However, the FPR could increase if the application has many more than 150 clients.
• Root cause
According to G(t), comprising GA(t1) and GB(t2), the new edge between A and B appears at t1 when A detects BCA-2: EA(t1) − EA(t0) = {eA,B}. A is more likely to be the attacker sending the login requests to B, because A detects BCA-2 earlier than B.
Scenario 3
We inject a worm spreading over a local network. An attacker infects a target via SSH, usually using the known_hosts file to collect target addresses and bypass the authentication process. In our experiment, we deploy 10 hosts, A to J, which hold all of the other hosts' credentials. We first inject the worm into A using Metasploit, and host A repeatedly infects other hosts. Once a target host is infected by the worm, it becomes an attacker and starts infecting other hosts.
• BCA detection Our system monitors S7 (number of neighbors) to detect the worm. S7 reflects the number of infection trials made via SSH connections. Every host detects MA6 (Analysis) as S7 increases over time, as shown in Figure 9b; our system detects MA6 when the number of neighbors (S7) is greater than 5 (more than half of the hosts). Our system detects BCA-3 on every host based on Gi(t), where i ∈ {A, B, C, …, J}. Table 8 shows an example of the BCA detection from GA(t1): host A has new neighbors (EA(t1)), and its traffic volume (ΛA(t1)) increases as the infection propagates through elements, so GA(t1) matches all conditions of BCA-3. All hosts detect BCA-3 as time increases. The overall G(t), consisting of multiple BCA-3 elements, is consistent with the expected behavior of BCA-3 under propagating attacks. Again, the overall view of all related hosts increases the confidence of correct detection in this scenario.
To analyze false positives in MA detection based on S7, we monitor clients in our datacenter. Because the clients usually communicate with a web server, the number of neighbors is not proportional to the number of clients; in our experiment without attacks, the number of neighbors is less than 3 with a normal application running.
• Root cause According to G(t), A and B first detect BCA-3 at t1, while the other hosts detect BCA-3 between t2 and t4. Either A or B could therefore be the attacker that initiated the worm among the hosts.
Scenario 4
We inject a distributed SYN flooding attack with a spoofed IP address using Hping3 [63]. The attacker sends massive numbers of SYN packets to zombies with the victim's IP address; the zombies then send SYN-ACK packets to the victim, and the massive SYN-ACK traffic depletes the victim's bandwidth. In this experiment, we deploy 6 hosts (A-F), as shown in Figure 10a. A is the attacker: host A keeps sending SYN packets to B-E with F's IP address.
• BCA detection Our system monitors S4 (inbound traffic/s), S5 (outbound traffic/s) and S6 (ratio of inbound and outbound traffic). In this experiment, S4 and S5 are used for MA1 (Flooding) detection, and S6 is used to detect MA7 (Spoofing). Host A sends massive numbers of SYN packets but does not receive any responses during the attack. These SYN packets increase S5, as shown in Figure 10b, and S6 changes accordingly; host A detects MA1 and MA7 due to the values of S5 and S6, respectively. Host F receives massive SYN-ACK traffic from the four hosts (B-E) and detects MA1 due to the high value of S4, as shown in Figure 10c. In our experiment, the four hosts (B-E) do not detect MA1 because no single host meets the detection threshold (500 kb/s); their values of S4 and S5 range from 180 kb/s to 450 kb/s. The total amount of traffic going to F, however, exceeds the threshold, so host F detects MA1. Our system detects BCA-4 and BCA-5, as shown in Table 9. Host A detects BCA-5 as it has a low correlation between inbound and outbound traffic, and hosts A and F detect BCA-4 as their outbound and inbound traffic, respectively, suddenly increase. Both A and F match all conditions for BCA-4 and BCA-5. By combining BCA graphs, our system correctly detects not only the DDoS attack on F but also the spoofing attack from A. Table 9. BCA detection in Scenario 4 (BCA-5 detection on host A at t1; detection of BCA-1 and BCA-4 on hosts A and F at t2). For detection of MA1 and MA7, no false positives were found with up to 150 clients behaving normally. In the application, the values of S4 and S5 are less than 100 kb/s and 200 kb/s with 100 and 150 clients, respectively. In addition, S6 has a value of at least 0.8 with normal clients.
• Root cause
After host A detects BCA-5 at t1, both A and F detect BCA-4 at t2. Based on GA(t1) and GA(t2), we find that host A spoofs its identity and sends massive traffic. According to GF(t2), F has new edges to the four hosts B-E. We can therefore infer that A initiated the DDoS attack on F through B-E.
Unknown Attack
We create an unknown attack based on the bait-and-switch method. It consists of a bait attack and the intended attack: the bait attack is designed to distract the security manager's attention away from the intended attack, whose ultimate goal is to distribute malicious code. We deploy 3 malicious hosts (A, B, C), 4 clients (D, E, F, G), two web servers (H, I) and one DB server (J). Figure 11 shows the seven hosts (D-J) running normally in the multi-tier application at t0. The unknown attack consists of three attacks, as follows: Password attack (intended attack) at t1: This attack requires gaining access to the target server H. The attacker employs a slow password attack to find host H's password; a slow brute-force attack is harder to detect using existing brute-force detection mechanisms. We inject the slow password attack into host B, one of the malicious hosts. Host B repeatedly sends HTTP login requests to host H (web server) until it finds the correct password.
Flooding attack (bait attack) at t2: The attacker employs a flooding attack to distract the security manager's attention from the intended attack. We inject the flooding attack into malicious host A, which starts sending a large number of SYN packets to I in order to disrupt server I.
Redirection attack (intended attack) at t3: After host B gains access to host H through the slow password attack, host B controls host H. Host B changes server H's configuration to redirect all incoming requests to host C (a malicious host) instead of the intended DB server J. When C receives requests from H, C sends malicious code as a response to all clients.
• BCA detection Password attack (intended attack): Figure 12 shows the overall G(t) from our system when the unknown attack is injected. Host B's iterative login behavior is shown in Figure 13. BCA-2 requires similar BCA-2 behavior in the related host in traditional password attack detection, but in Figure 12 host H does not detect BCA-2, unlike host B: host H fails to detect the slow rate of login requests embedded among normal application requests. Our system detects host B's brute-force behavior, while existing systems fail to detect the attacks on both B and H. Figure 13. Iterative behavior of host B. Table 10. BCA-2 detection on host B at t1 during a password attack.
Flooding attack (bait attack): In the bait attack, hosts A and I detect MA1 (Flooding) due to high outbound (S5) and inbound (S4) traffic, respectively, at t2. According to GA(t2) and GI(t2) in Table 11, these hosts have a new edge between them and show a sudden increase in traffic, as shown in Figure 14. This flooding attack also results in a sudden decrease of traffic in host J, as host I is disrupted by the flooding. The security manager is thus distracted by host I being attacked by host A. Redirection attack (intended attack): According to GJ(t3) in Table 12, host J detects a removed edge between itself and host H at t3, which triggers the detection of BCA-5. The removed edge connects application elements, and in normal operation we do not expect any application element to be removed without prior notification; this further confirms the attack behavior. Host J also detects a low correlation between inbound and outbound traffic due to the flooding and redirection attacks. Table 12. BCA-5 detection on host J at t3 during a redirection attack. The overall G(t) graph indicates a high possibility of the redirection attack based on the other connected BCA detections. Our system correctly detects not only the bait attack but also the intended attack, which existing systems fail to detect.
Conclusions
We have presented a different perspective on ways to detect cyberattacks. Rather than relying on traditional signatures and anomaly patterns, we proposed an approach based on the fundamental and essential behaviors of cyberattacks. We defined these behaviors as Basic Cyberattack Actions (BCAs) and proposed five types: sudden performance degradation, iterative behavior, propagating behavior, sudden increase or decrease in ingress and egress traffic, and uncorrelated ingress and egress traffic. An individual BCA is detected by monitoring not only Methods of Attack (MAs) and the traffic volume of a single element, but also the spatiotemporal relationships among elements. To represent combinations of BCAs, we developed a stochastic graph model; the combination of BCAs describes the change of interactions among elements in time and space. By considering multiple elements together, we can reduce additional false positives by finding contradicting combinations of BCAs. We also implemented and deployed our spatiotemporal intrusion detection system in our datacenter for preliminary validation of the idea, and demonstrated the effectiveness of BCAs in numerous known and unknown attack scenarios. For known attacks, we injected a Slowloris attack, a password attack, an SSH worm attack and a Smurf attack, selected by analyzing intrusion datasets and CAPEC. Our experimental results showed that our system accurately detects all the known attacks comprising BCAs and locates the possible root causes as well. Furthermore, we built an example of an unknown attack based on a bait-and-switch method that combines three types of attacks: a password attack, a flooding attack and a redirection attack. The experimental results showed that such an unknown attack is effectively detected by our system, while existing detection mechanisms fail to detect the intended attack. Many existing systems may not be adequate for future unknown and advanced attacks, and today's complex applications may trigger a significant number of false positives. We believe that the characterization of attacks as spatiotemporal patterns with expected essential behaviors presents a new and effective approach to intrusion detection. Performance evaluation with both extensive attacks comprising complex BCAs and a variety of applications will be addressed in future work.
"year": 2020,
"sha1": "294f3197faf3d7e112c6b914c1aeba08d54f2152",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/9/3/460/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "486e95f000ce25c221a75dc097d4616c8b328654",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Determination of glucose exchange rates and permeability of erythrocyte membrane in preeclampsia and subsequent oxidative stress-related protein damage using dynamic-19F-NMR
The cause of the pregnancy condition preeclampsia (PE) is thought to be endothelial dysfunction caused by oxidative stress. As abnormal glucose tolerance has also been associated with PE, we use a fluorinated mimic of this metabolite to establish whether any oxidative damage to lipids and proteins in the erythrocyte membrane has increased cell membrane permeability. Data were acquired using 19F dynamic NMR (DNMR) to measure the exchange of 3-fluoro-3-deoxyglucose (3-FDG) across the membrane of erythrocytes from 10 pregnant women (5 healthy control women and 5 women suffering from PE). Magnetisation transfer was measured using the 1D selective inversion and 2D EXSY pulse sequences over a range of time delays. Integrated intensities from these experiments were used in matrix diagonalisation to estimate the values of the rate constants of exchange and the membrane permeability. No significant differences were observed in the rate of exchange of 3-FDG or the membrane permeability between healthy pregnant women and those suffering from PE, leading us to conclude that no oxidative damage had occurred at this carrier-protein site in the membrane.
Introduction
Preeclampsia (PE) is a human-specific hypertensive disorder of pregnancy which manifests itself after 20 weeks of gestation, developing in almost 10% of pregnancies (El Hassan et al. 2015;Kumru et al. 2006). The cause of PE is still not fully understood, though it is a major cause of maternal and foetal morbidity and mortality. PE affects the mother by vascular endothelial dysfunction and causes intrauterine growth restriction of the foetus (Hubel 1999;Poston et al. 2006). Delivery of the baby is the only method of reversing the syndrome.
Numerous etiologies have been associated with PE, though the attack of reactive oxygen species (ROS), and oxidative stress, is thought to be the most important factor contributing to the pathogenesis of PE, causing uncontrolled lipid peroxidation, protein modification and changes to cell membrane structure (Raijmakers et al. 2008;Ethordevic et al. 2008;Adiga et al. 2007;Mohan and Venkataramana 2007;Shoji et al. 2008;Shoji and Koletzko 2007). It has been suggested that polyunsaturated fatty acids are attacked by ROS and converted into lipid hydroperoxides as the initial factor leading to vascular endothelial dysfunction in PE (Howlander et al. 2007;Kaur et al. 2008;Hubel et al. 1989;Davidge et al. 1992;Mehendale et al. 2008;Patil et al. 2007).
Products of lipid peroxidation have the ability to cause further oxidative damage by attacking proteins present in the cells and tissue, which in turn causes lysis of erythrocytes (Salvi et al. 2001;Negre-Salvayre et al. 2008;Davies 1987;Esterbaur et al. 1991). Oxidative damage to membrane proteins can also occur by direct free radical attack; such protein modifications can cause structural changes and may also result in detrimental changes in their function, affecting the activity of enzymes, receptors or membrane transporters (Roche et al. 2008;Jones 2008;Stadtman 1993).
Abnormal glucose tolerance has also been implicated as a risk factor for PE (Joffe et al. 1998;Parra-Cordero et al. 2014). Glucose enters and leaves the red blood cell by passive transport via an intrinsic protein, GLUT1, so that the facilitated diffusion of this species in and out of erythrocytes is in dynamic equilibrium (May 1998;Gabel et al. 1997;O'Connell et al. 1994;Potts et al. 1990;Potts and Kuchel 1992;London and Gabel 1995). However, any damage to membrane proteins or lipids due to oxidative stress may cause changes in their conformation, which could affect the rate at which the diffusion of species across the membrane occurs. We have demonstrated previously that NMR is capable of identifying metabolomic differences in the blood of healthy pregnant women and those suffering from PE (Turner et al. 2007, 2008, 2009). We therefore used NMR spectroscopy to investigate the hypothesis that a difference in glucose exchange rate and cell membrane permeability would be observed between healthy pregnant women and those suffering from PE. NMR exchange experiments allow processes to be investigated under dynamic equilibrium (Perrin and Dwyer 1999;Perrin and Engler 1990;Perrin and Gipe 1984;Robinson et al. 1985). This is achieved by monitoring the transfer of longitudinal magnetisation during a delay in the applied NMR pulse sequence (Grassi et al. 1986;McConnell 1958;Forsén and Hoffman 1963;Muhandiram and McClung 1987). In applying dynamic NMR (D-NMR) to the study of exchange in erythrocytes from pregnant women, both one-dimensional and two-dimensional spectra have been recorded to detect exchange pathways and determine the rates of exchange and relaxation of species. By applying both experiments, the results obtained from each can be compared and confirmed, to ensure that the estimates of the values of the rate constants are reliable. As demonstrated previously, 2D methods provide a qualitative map of the exchange process and are tolerant of small differences in chemical shift between the peaks involved in exchange (Johnston et al. 1986;Macura and Ernst 1980); the 1D methods provide faster data acquisition and analysis, especially for two-site systems, as long as the exchange network is known (Perrin and Engler 1990;Robinson et al. 1985;Engler et al. 1988). The integrated intensities obtained from the 1D and 2D experiments can then be used to estimate the values of the first-order rate constants of exchange and establish whether oxidative damage has indeed compromised the erythrocyte membrane (Gabel et al. 1997;Perrin and Dwyer 1999;Perrin and Gipe 1984).
Whilst the processes of exchange and the nuclear Overhauser effect are different, both rely on the transfer of longitudinal magnetisation, which is why the same pulse sequence can be applied.
Several one-dimensional experiments exist based on the NOESY pulse sequence, which can be employed to measure the transfer of magnetisation (Robinson et al. 1985;Bellon et al. 1987;Engler et al. 1988;Bulliman et al. 1989;Perrin and Engler 1990). Whilst the "Overdetermined" 1D EXSY pulse sequence of Bulliman et al. (1989) has also been used to study erythrocytes (Potts and Kuchel 1992), the more complex matrix diagonalisation methods are different to those employed in most other exchange applications. Selective inversion was the 1D method of choice in this investigation (Robinson et al. 1985). The exchange of species monitored by 2D spectroscopy was initially documented by Jeener et al. (1979), and has become invaluable in establishing the mechanisms of exchange (Perrin and Gipe 1984;Meier and Ernst 1979;Macura and Ernst 1980;Bremer et al. 1984;Johnston et al. 1986;Montelione and Wagner 1989). The principle is similar, as expected, to that for the selective inversion experiment and so should give comparable results for the elements of the rate matrix. Exchange occurs during the mixing time and in the 2D EXSY four peaks will be produced in the two-site case of cellular exchange i.e. two cross peaks and two diagonal peaks (Gabel et al. 1997;O'Connell et al. 1994;Kirk and Kuchel 1985;Potts et al. 1989). The volumes of all these peaks can then be used in matrix diagonalisation to estimate the values of the exchange rate and relaxation rate constants. A full explanation of the matrix diagonalization method has been included in the Appendix for completeness, as the procedure is not used nor fully described very often in the literature.
By estimating the values of the rate constants of cellular exchange, measurement of the permeability of the erythrocyte membrane is possible. This gives information on the condition of the membrane, with a higher permeability in PE providing an indication of oxidative stress or attack and compromise of the lipid bilayer by ROS.
The inward permeability is calculated from the influx rate constant, previously determined from the NMR data:
P_oi = k_1 V_e / A (1)
where Ht is the haematocrit (or red blood cell count); V_e is the extracellular volume (mL), calculated as V_o(1 − Ht), where V_o is the NMR sample volume; A is the total surface area of the cells, calculated from (A_cell Ht)/MCV, where A_cell = 1.43 × 10^−6 cm^2 and MCV (mean cell volume) = 85 fL for erythrocytes in isotonic solution; and k_1 is the influx rate constant (Raftos et al. 1990;O'Connell et al. 1994;London and Gabel 1995;Chapman and Kuchel 1990).
Similarly, the outward permeability can be calculated using the efflux rate constant:
P_io = k_−1 f_w MCV / A_cell (2)
where f_w is the fraction of the red cell volume which is accessible to solutes, and k_−1 is the efflux rate constant (Raftos et al. 1990;O'Connell et al. 1994;London and Gabel 1995). It is clear from Eq. (2) that the outward permeability is independent of haematocrit.
Patient selection and sample preparation
Women chosen for this part of the study were all beyond 20 weeks of gestation and were attending The Leeds Teaching Hospitals NHS Trust, Leeds, UK. The women were of any ethnicity and were not all in their first pregnancy. The PE group exhibited fully established PE, diagnosed according to the criteria of American College of Obstetrics and Gynecologists (ACOG) i.e., a rise in blood pressure after 20 weeks gestation to >140/90 mm Hg on two or more occasions 6 h apart in a previously normotensive woman, combined with proteinuria (Davey and MacGillivray 1988). Proteinuria was defined as protein dipstick >1 + on two or more midstream urine samples, or a 24 h urine excretion of >0.3 g protein, in the absence of a urinary tract infection (Harsem et al. 2006). Healthy control women were generally from later in pregnancy, i.e. >30 weeks, to ensure that they remained healthy controls and did not develop PE weeks after sample collection. Venous blood was collected in heparinized (lithium salt) anticoagulant tubes. All fresh whole blood was centrifuged for 6 min at 3000 g and 4 °C, before removing and discarding the plasma and buffy coat. The same conditions were used in all subsequent washings of erythrocytes. For the transport of 3FDG, a saline buffer solution was prepared containing 132 mM NaCl, 15 mM Tris-HEPES (pH 7.4), 5 mM ascorbic acid and 10 mM 3FDG (O'Connell et al. 1994;Pallotta et al. 2014). Erythrocytes (still in the anticoagulant tube) were washed with the saline buffer solution in D 2 O containing the fluorinated glucose, using approximately three times the volume of the RBCs. The tube was inverted three times to mix the solution and RBCs before repeating centrifugation, and removing and discarding the wash solution. This washing procedure was repeated three times. After washing, carbon monoxide gas was bubbled through the cells for approximately 30 s with gentle stirring to remove deoxyhaemoglobin and paramagnetic O 2 from the sample (O'Connell et al. 1994). Finally, the haematocrit of the sample was measured in duplicate using heparinised capillary tubes, and spun at approximately 1300 g for 5 min using a Haematospin 1300 (Hawsley, Lancing, Sussex, UK). It was assumed that 0.717 of the intracellular volume was accessible to the 3FG molecules (Potts and Kuchel 1992). The RBCs were incubated at 37 °C for 1 h, before transferring 700 μl of the RBCs/glucose solution to an NMR tube for analysis.
1D 19 F spectra of fluorinated glucose in D 2 O
The 1D 19 F-NMR FID of 5 mM 3-fluoro-3-deoxyglucose (3FDG) in D 2 O was acquired at 470.34 MHz into 65,536 data points, using a relaxation delay of 5 s and a pulse duration of 10 µs, over 4 transients, at a temperature of 20 °C. An exponential line broadening of 1 Hz was applied to the FID, prior to zero filling to 131,072 points, followed by Fourier transformation. Resultant spectra were phased and baseline corrected using Vnmr 6.1C (Varian Inc., Palo Alto, California, USA).
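The post-processing described here (1 Hz exponential line broadening, zero filling to twice the acquired length, Fourier transformation) can be reproduced in a few lines of NumPy; the synthetic FID below merely stands in for real data, and the phasing and baseline correction performed in Vnmr are omitted:

import numpy as np

def process_fid(fid, dwell_s, lb_hz=1.0, zf_factor=2):
    """Exponential apodization, zero filling and FT of a complex FID."""
    t = np.arange(fid.size) * dwell_s
    apodized = fid * np.exp(-np.pi * lb_hz * t)  # 1 Hz Lorentzian broadening
    padded = np.pad(apodized, (0, fid.size * (zf_factor - 1)))
    return np.fft.fftshift(np.fft.fft(padded))

if __name__ == "__main__":
    sw = 10_000.0                      # spectral width (Hz), as in the methods
    n = 65_536                         # acquired complex points
    t = np.arange(n) / sw
    # Synthetic two-line FID standing in for a pair of 19F resonances
    fid = (np.exp(2j * np.pi * 250 * t) +
           np.exp(-2j * np.pi * 250 * t)) * np.exp(-t / 0.5)
    spectrum = process_fid(fid, dwell_s=1.0 / sw)
    print(spectrum.shape)              # (131072,) after zero filling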
1D 19 F spectra of RBCs and fluorinated glucose, with and without proton decoupling
Two one-dimensional spectra of the RBCs and fluorinated glucose were acquired at 470.34 MHz and 37 °C, with broadband proton decoupling applied in the second experiment. This allowed the 19 F intracellular and extracellular resonances to be resolved without the complication of the geminal 1 H-19 F coupling (Gabel et al. 1997). For both experiments, an interpulse relaxation delay of 8 s was used, a delay longer than 5T 1 (O'Connell et al. 1994). The 19 F 90° pulse duration was determined for each new sample, though it was often 17 µs. The coupling constant measured in the first 1D spectrum was used in the calibration of the 1 H 90° pulse duration for the proton decoupling in the second experiment, during which WALTZ decoupling was applied for the duration of the pulse and acquisition. 128 transients were collected into 16,384 data points for each spectrum, with a spectral width of 10,000 Hz. An exponential line broadening of 1 Hz was applied to each of the FIDs, prior to zero filling to 32,768 points, followed by Fourier transformation. Resultant spectra were phased, baseline corrected and integrated using Vnmr 6.1C (Varian Inc., Palo Alto, California, USA). Manual integration was repeated and the mean average taken to minimise errors.
Selective inversion
One dimensional 19 F magnetization transfer experiments were performed on RBCs and fluorinated glucose at 470.34 MHz and at 37°C using the selective inversion method (O'Connell et al. 1994;Gabel et al. 1997;Robinson et al. 1985). Two series of experiments were performed for each anomer, using the 1D NOESY pulse sequence [RD-90˚x-t 1 -90˚x-t m -90˚x-acq], where either the intracellular or the extracellular peak was selectively inverted by setting the transmitter offset to the frequency of the resonance to be inverted. RD represents a relaxation delay of 8 s, a delay which was longer than 5T 1 (O'Connell et al. 1994). The delay t 1 = 1/(2|ν i -ν e |), where ν i and ν e are the frequencies of the intracellular and extracellular peaks respectively. The mixing time t m was arrayed at delays of 0.001 (nominal zero), 0.05, 0.075, 0.10, 0.15, 0.30 and 0.45 s. After calibration of the 1 H 90° pulse duration, broadband proton decoupling was applied using WALTZ decoupling during the final pulse and acquisition. The 19 F 90° pulse duration used in the initial one dimensional experiments was applied (often 17 µs). 128 transients were collected into 16,384 data points for each spectrum, with a spectral width of 10,000 Hz. Again, exponential line broadening of 1 Hz was applied to each of the FIDs, prior to zero filling to 32,768 points, followed by Fourier transformation. Resultant spectra were phased, baseline corrected and integrated using Vnmr 6.1 C (Varian Inc., Palo Alto, California, USA). Manual integration was repeated and the mean average taken to minimise errors.
2D EXSY
Four two dimensional magnetization transfer experiments were performed on the RBC and fluorinated glucose samples at 470.34 MHz and at 37°C, using the broadband proton decoupled 2D NOESY pulse sequence [RD-90˚x-t 1 -90˚x-t m -90˚x-acq] (Gabel et al. 1997;Macura and Ernst 1980;Johnston et al. 1986). Each experiment had a different mixing time, t m , of either 0, 200, 400 or 600 ms. In all four experiments, 8 transients were collected into 4,096 data points in the directly detected dimension and 64 points in the second dimension, with a spectral width of 10,000 Hz. The same relaxation delay as in the 1D experiments was used (8 s), as well as the same previously calibrated 19 F and 1 H (for decoupling) pulse widths. Proton decoupling was provided in the directly detected dimension by application of WALTZ decoupling during the final pulse and acquisition. An exponential line broadening of 2 Hz was applied in both dimensions to each FID (O'Connell et al. 1994), prior to zero filling the second dimension to 2048 points, followed by Fourier transformation. Resultant spectra were phased and baseline corrected using Vnmr 6.1C (Varian Inc., Palo Alto, California, USA). Each spectrum was integrated using Lorentzian Fitting mode in the software Sparky 3.114 (T. D. Goddard and D. G. Kneller, SPARKY 3, University of California, San Francisco, USA), where peaks within a contour boundary were grouped, and where the data that were used were above the lowest contour.
Matrix diagonalisation of integrated intensities
Integrated peak data from the 1D and 2D magnetization transfer experiments were analysed by matrix diagonalization using the software Maple 11 (Maplesoft, Waterloo Maple Inc, Waterloo, Ontario, Canada). Plots of the linearised matrix data were produced in Microsoft Excel (Microsoft Corporation, Redmond, WA USA), where the gradients of the lines in the plots were equal to elements of the relaxation matrix.
Statistical analysis
After tests of normality had been performed, comparisons of mean values (of integrated peaks or rates of exchange) between the PE group and the control group were performed using the t test or Mann-Whitney test in SPSS 13.0 software (SPSS Inc., Chicago, Illinois, USA). All p values were adjusted for multiple comparisons using the false discovery rate in the software R 2.4.1 (R Foundation for Statistical Computing, Vienna, Austria), and values <0.05 were regarded as statistically significant.
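The false discovery rate adjustment performed in R can be reproduced with a short Benjamini-Hochberg routine; this sketch is a generic implementation rather than the exact call used in R 2.4.1 (which would be p.adjust(p, method = "BH")):

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg FDR-adjusted p values (step-up procedure)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p value down, enforcing monotonicity
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end
        running_min = min(running_min, p_values[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

if __name__ == "__main__":
    print(benjamini_hochberg([0.01, 0.04, 0.03, 0.20]))
    # [0.04, 0.0533..., 0.0533..., 0.2], matching R's p.adjust(..., "BH")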
Results
1D 19 F spectra of 3FDG and washed erythrocytes are shown in Figs. 1 and 2 respectively, together with examples of the 1D Selective Inversion (Fig. 3) and 2D EXSY (Fig. 4) magnetisation transfer experiments. Figure 4 shows the 2D EXSY spectrum of red blood cells washed with exchanging 3FDG. It is clear that mutarotation between anomers is too slow to occur on the timescale of the experiment, as no chemical exchange peaks are present between the β- and α-anomers. This allowed a simplification of the matrix diagonalisation methods: each anomer was treated as a separate probe, producing two 2 × 2 rate matrices rather than one 4 × 4 matrix (O'Connell et al. 1994;Gabel et al. 1997;Macura and Ernst 1980;Johnston et al. 1986). An example of a plot of the linearised data from the exchange equation is shown in Fig. 5. The mean elements of the rate matrix, estimated from the magnetisation transfer experiments, and the calculated permeabilities for each anomer are shown in Table 1.
When testing the hypothesis that differences would occur between the elements of the rate matrix of women with PE and those of healthy control pregnant women, no significant differences were observed for any element of the rate matrix, for either glucose anomer. We therefore conclude that the rate of carrier-mediated exchange of fluorinated glucose is the same for women suffering from PE as for healthy pregnant women; in turn, the membrane of an erythrocyte from a woman suffering from PE is no more or less permeable to 3FDG than that of erythrocytes from healthy pregnant women.
When testing the hypothesis that no differences would be identified between the elements of the rate matrix from the 1D Selective Inversion and those from the 2D EXSY experiments, significant differences were observed for the sum of the longitudinal relaxation rate constant of the intracellular peak and the efflux rate constant, i.e. R11 = 1/T1i + kio (p = 0.008 for PE samples and 0.016 for control samples), for both anomers of glucose (see Suppl Mat). Similarly, significant differences were found for the sum of the longitudinal relaxation rate constant of the extracellular peak and the influx rate constant, R22 = 1/T1e + koi.
Discussion
The NMR investigation into exchange across the erythrocyte membrane was successful in estimating the values of the rates of exchange of a mimic of a natural product, and has been useful in investigating the effect of preeclampsia on the intrinsic protein GLUT1 involved in this facilitated diffusion.
For comparison of rates of exchange and permeabilities of membranes, the efflux rate constant kio and the outward permeability Pio are the most reliable parameters (Kirk and Kuchel 1986;O'Connell et al. 1994;Kuchel et al. 1987). The efflux rate constant is independent of haematocrit: once inside the cells, the rate at which a molecule leaves a cell does not depend on the total number of cells in the sample outside the membrane. Analogously, the outward permeability is calculated from the efflux rate constant and is therefore not dependent on haematocrit. The mean efflux rate constants of 2.284 ± 0.695 and 2.200 ± 0.421 s−1 for the α- and β-anomers of 3FDG, respectively, are comparable to those found previously in the literature and support an anomeric preference for the α-anomer (Kuchel et al. 1987;Potts and Kuchel 1992;London and Gabel 1995). This slight anomeric preference was explained by London and Gabel (1995), who showed that the α-anomer preferentially binds to the carrier on the inside of the membrane, due to the conformation of the carrier at that time inside the cell. After transportation, the conformation of this carrier changes outside the membrane, preferentially binding β-glucose. The higher rate obtained in this investigation into PE could be attributed to pregnancy in general, as the permeability of erythrocytes may be affected by pregnancy.
However it is not possible to confirm this without performing the same experiments on an equivalent number of nonpregnant controls.
The absence of significant differences between the 1D Selective Inversion and the 2D EXSY results for both the efflux rate constant and the outward permeability suggests that the results obtained are reliable and the methods robust. The only significant differences observed between the 1D and 2D data were in the elements of the rate matrix which depend on haematocrit: R11 includes the longitudinal relaxation rate of the broad intracellular peak, 1/T1i, whilst R22 includes the influx rate constant koi (see Suppl Mat). This difference can therefore be attributed to the estimation of peak volume and peak fitting in the 2D data, due to the broadness of the intracellular peak, which is why previous studies favoured the 1D methods over 2D for simple two-site exchange (Perrin and Dwyer 1999;Engler et al. 1988;Robinson et al. 1985).
The substitution of a fluorine atom for a hydroxyl group on a glucose molecule does not seem to have an adverse effect on its exchange through the erythrocyte membrane. The exchange of 3FDG using the same protein as glucose was demonstrated by Riley and Taylor (1973), who found that dilute solutions of glucose inhibited the transport of 3FDG. It has been suggested that the affinity of 3FDG for the binding site of the carrier is marginally, though not significantly, higher than that of glucose itself, due to the F atom being directly involved in the hydrogen bonding in the binding site, mimicking that of the OH group of glucose (Riley and Taylor 1973;O'Connell et al. 1994). It is this hydrogen bonding which causes the difference in chemical shift between the intracellular and the extracellular populations. The intracellular hydrogen bonding will differ from that outside the cell as a result of the extent of the interactions present, due to compartmentalisation and the high protein concentration; the position of the fluorine atom on the hexose ring will also affect the extent of these interactions. Preliminary investigations measuring the exchange of 2FDG clearly demonstrated this (see Suppl Mat). By increasing the osmolality of the wash solutions, the cellular volume is reduced, ensuring that the cytosol is isotonic with the extracellular medium, thus leading to a change in the intracellular interactions, and therefore a change in the chemical shift and broadness of the peak (Kirk and Kuchel 1985, 1988;Xu et al. 1991).
These effects, and the sharing of the protein carrier GLUT1 by 3FDG and D-glucose, make this study particularly useful in attempting to determine the effect of PE on the protein content of the erythrocyte membrane. The absence of significant differences in the permeability or efflux rate constant of 3FDG between PE patients and healthy pregnant women shows that PE did not affect this protein component of the membrane. This result does not, however, rule out damage to the membrane by the ROS of oxidative stress in PE, and does not contradict the results from earlier investigations; this study simply confirms that PE did not affect this particular protein transporter in executing the facilitated diffusion of glucose and its mimics.
19 F dynamic NMR spectroscopy proved to be a successful technique for measuring the cellular exchange rate of analogues of endogenous metabolites: the results of both the 1D and 2D magnetisation transfer experiments suggest that preeclampsia does not have deleterious effects on the erythrocyte membrane protein involved in glucose exchange.
Appendix
The 1D Selective Inversion experiment is performed over an array of mixing times, during which the labelled probe is transported across the cell membrane, allowing the nucleus of interest to precess at a different frequency to that at its previous location (Robinson et al. 1985;Gabel et al. 1997;O'Connell et al. 1994;London and Gabel 1995). The integrated intensities of the intracellular (I) and extracellular (E) peaks are measured for each mixing time throughout the range, whilst the intracellular peak is inverted. This procedure is repeated with the extracellular peak inverted. Additionally, it is necessary to ascertain the integrated intensities of both the I and E peaks under equilibrium conditions (Bulliman et al. 1989;Perrin and Engler 1990). Once all integrated intensities have been obtained, matrices can be formed, based upon the exchange equation (3):
M_t = exp(−R t_m) M_0 (3)
where the matrices of (4),
M_t = M(t_m) − M_equi and M_0 = M(t_m = 0) − M_equi (4)
are calculated from three matrices produced directly from the integrated intensities: the intensities measured with site A inverted, with site B inverted, and at equilibrium, where, for example, A_{A inverted} is the integrated intensity of the resonance at site A when this resonance is inverted. The matrices in (4) can be produced by simple subtraction, and this process is repeated for each mixing time in the range. The exchange equation (3) can then be linearised:
ln(M_t M_0^−1) = −R t_m
However, the difficulty of calculating the logarithm of a matrix is circumvented by using an alternative solution (7) in which the exponentials are eliminated:
R = −(1/t_m) X (ln Λ) X^−1 (7)
Here X is the square matrix of eigenvectors of (M_t M_0^−1), X^−1 is its inverse, and ln Λ is the diagonal eigenvalue matrix (Jeener et al. 1979;Bremer et al. 1984;Johnston et al. 1986;Hernandez-Garcia et al. 2007;Szekely et al. 2006). This is formed as ln Λ = diag(ln λ1, ln λ2), as shown in the matrix of [8] (Johnston et al. 1986). These linearised data can then be plotted as a function of mixing time, as shown in Fig. 5. The gradients of the best-fit straight lines produced give the elements of the square rate matrix R (Johnston et al. 1986;Engler et al. 1988;O'Connell et al. 1994). In this case of two-site exchange across the erythrocyte membrane, these elements form the 2 × 2 rate matrix R (Potts and Kuchel 1992;Gabel et al. 1997;Bulliman et al. 1989;Szekely et al. 2006) of Eq. (9):
R = [ R11 R12 ; R21 R22 ] = [ 1/T1i + kio −koi ; −kio 1/T1e + koi ] (9)
The linear equations of the lines with negative gradient correspond to the first-order influx and efflux rate constants, whilst those with positive slope give the sums of the longitudinal relaxation rate constants of the I and E peaks and the exchange rate constants (Bulliman et al. 1989;Perrin and Engler 1990).
Table 1. Mean inward and outward permeabilities and Poi/Pio ratios for each anomer (values, with p values for the PE vs. control comparison): inward permeability (cm s^−1) 3.85 ± 0.56 × 10^−5, 3.73 ± 0.51 × 10^−5 (p = 1.000); 4.15 ± 0.54 × 10^−5, 4.16 ± 0.68 × 10^−5 (p = 1.000); outward permeability (cm s^−1) 9.06 ± 0.31 × 10^−5, 9.13 ± 0.59 × 10^−5 (p = 0.841); 1.01 ± 0.13 × 10^−4, 9.25 ± 0.94 × 10^−5 (p = 0.286); inward permeability (cm s^−1) 4.52 ± 0.87 × 10^−5, 4.84 ± 0.42 × 10^−5 (p = 0.841); 5.23 ± 2.03 × 10^−5, 4.69 ± 2.79 × 10^−5 (p = 1.000); outward permeability (cm s^−1) 1.05 ± 0.10 × 10^−4, 9.96 ± 1.02 × 10^−5 (p = 0.310); 9.94 ± 1.71 × 10^−5, 8.49 ± 1.95 × 10^−5 (p = 0.413); Poi/Pio 0.43 ± 0.06, 0.49 ± 0.05 (p = 0.151); 0.52 ± 0.16, 0.53 ± 0.29 (p = 0.905).
A 2D EXSY experiment is performed for each mixing time; one of these mixing times must be 0 (Johnston et al. 1986;O'Connell et al. 1994;Gabel et al. 1997;Perrin and Dwyer 1999).
The matrix methods differ in that the matrices M_t = M(t_m) − M_equi and M_0 = M(t_m = 0) − M_equi of (4) are produced directly from the volumes of the cross peaks and diagonal peaks of the spectra at each mixing time (M_t), including when t_m = 0 (M_0) (O'Connell et al. 1994;Johnston et al. 1986). If
A = M_t M_0^−1
then
R = −(1/t_m) ln A
or, from (7),
R = −(1/t_m) X (ln Λ) X^−1
where X is the square matrix of eigenvectors of A, X^−1 is its inverse, and ln Λ is the diagonal eigenvalue matrix (Jeener et al. 1979;Johnston et al. 1986;Macura and Ernst 1980). Clearly, this procedure is identical to that of the 1D Selective Inversion analysis, but with the alternative direct formation of the matrix A from the 2D NMR data (Johnston et al. 1986):
A = [ a_AA/A_0 a_AB/B_0 ; a_BA/A_0 a_BB/B_0 ]
where a_AA and a_BB are the diagonal peak amplitudes of site A and site B in an experiment with mixing; a_AB and a_BA are the cross peak amplitudes (showing exchange between site A and site B) in an experiment with mixing; and A_0 and B_0 are the diagonal peak amplitudes of site A and site B in an experiment without mixing (t_m = 0).
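Although the full analysis was run in Maple, the diagonalisation itself is a few lines of NumPy; the sketch below follows the 2D EXSY route just described, forming A from the peak amplitudes and taking ln A through its eigendecomposition. The peak amplitudes in the example are invented for illustration, and scipy.linalg.logm could replace the explicit eigendecomposition:

import numpy as np

def rate_matrix(peak_amps, diag_amps_t0, t_mix):
    """Estimate R = -(1/t_m) ln(M_t M_0^{-1}) for two-site exchange.

    `peak_amps` is [[a_AA, a_AB], [a_BA, a_BB]] at mixing time t_mix;
    `diag_amps_t0` is (A_0, B_0) from the t_m = 0 experiment. Column j of
    A is the t_mix amplitudes normalised by the j-th t_m = 0 amplitude.
    """
    A = np.asarray(peak_amps, dtype=float) / np.asarray(diag_amps_t0, float)
    evals, X = np.linalg.eig(A)
    lnA = X @ np.diag(np.log(evals)) @ np.linalg.inv(X)
    return np.real_if_close(-lnA / t_mix)

if __name__ == "__main__":
    # Invented intensities for a two-site (I/E) system at t_m = 0.2 s
    R = rate_matrix([[80.0, 12.0], [10.0, 70.0]], (100.0, 90.0), t_mix=0.2)
    print(R)  # diagonal: 1/T1 + k terms; off-diagonal: -k terms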
"year": 2017,
"sha1": "074a0b66dc9b036905d9ab3d241c7208ed22bcec",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10858-017-0092-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "074a0b66dc9b036905d9ab3d241c7208ed22bcec",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Profiles and trajectories of impaired social cognition in people with Prader-Willi syndrome
Introduction: People with Prader-Willi syndrome (PWS) have a distinctive behavioral phenotype that includes intellectual disability, compulsivity, inattention, inflexibility and insistence on sameness. Inflexibility and inattention are at odds with the cognitive flexibility and attention to social cues needed to accurately perceive the social world, and implicate problems in social cognition. This study assessed two social cognition domains in people with PWS: emotion recognition and social perception. We identified changes in social cognition over an approximate two-year time period (M = 2.23 years), relative strengths and weaknesses in social cognition, and correlates and predictors of social cognition.
Methods: Emotion recognition and social perception were examined at two time points in 94 individuals with PWS aged 5 to 62 years (M = 13.81, SD = 10.69). Tasks administered included: standardized IQ testing; parent-completed measures of inattention and inflexibility; standard emotion recognition photos (fear, sadness, anger, happy); and videotaped social perception vignettes depicting negative events with either sincere/benign or insincere/hostile interactions between peers.
Results: An atypical trajectory of negative emotion recognition emerged, marked by similarly poor performances across age, and confusion between sad and anger that is typically resolved in early childhood. Recognition of sad and fear were positively correlated with IQ. Participants made gains over time in detecting social cues, but not in forming correct conclusions about the intentions of others. Accurately judging sincere intentions remained a significant weakness over time. Relative to sincere intentions, participants performed significantly better in detecting negative social cues, and in correctly judging trickery, deceit and lying. Age, IQ, inattention, and recognition of happy and sad accounted for 29% of variance in social perception.
Conclusion: Many people with PWS have deficits in recognizing sad, anger and fear, and in accurately perceiving the sincere intentions of other people. The impact of these deficits on social behavior and relationships needs to be better understood.
Introduction

someone's belief about another's belief (second-order belief). Performances on these tasks were associated with Verbal IQ, but not with age or PWS genetic subtype.
People with PWS also have difficulties recognizing emotional cues within social contexts. Koenig et al. [15] administered a short video of moving shapes to 18 adolescents with PWS, who had to first recognize the visual stimuli as social phenomena, and then extract cues from the video to create a meaningful story. Participants with PWS performed worse than IQ-matched controls, and on par with those with pervasive developmental disorders. It was particularly difficult for those with PWS to ascribe affective states to the moving stimuli, which reduced the quality of their social stories.
Identifying affective states from facial expressions is also problematic for many with PWS. Among 52 children and adults with PWS, Whittington and Holland [16] report that while most could recognize happiness, fewer could correctly identify negative emotions (e.g., sad, angry, worried). Adults with histories of depression were more impaired in recognizing fear, and those with psychosis in recognizing anger. Emotion recognition was associated with IQ, but not with age or PWS genetic subtype.
The present study addressed several salient gaps in the PWS social cognition literature. First, researchers have yet to explore the social perceptions of those with PWS, or how they use cues to draw inferences about social situations. Social perception is often measured via scenarios in which respondents must use verbal and/or nonverbal cues to interpret ambiguous or conflictual social situations [17]. Accurately interpreting social scenarios depends on several executive function skills, including attending to pertinent social cues and being cognitively flexible in order to change one's ideas or behaviors in response to social stimuli [18]. People with PWS, however, typically exhibit: rigid thinking; inflexibility; needs for sameness; resistance to change; inattention and impulsivity [7,8,9,19]. Woodcock and colleagues [19] identified specific deficits in task switching in 28 children with PWS that predicted their preferences for routine, repetitive questioning, and temper outbursts. Further, symptoms or diagnoses of Attention Deficit Hyperactivity Disorder (ADHD) have been reported in up to 70% of children with PWS [20,21], as have problems with selective and divided attention [19]. Both inattention and inflexibility may thus impede the ability of people with PWS to attend to and process divergent social cues, which may ultimately contribute to faulty social perceptions.
A second research gap concerns changes over time in social cognition. It is well established that both emotion recognition and social perception skills develop across infancy, childhood and adolescence [22]. While basic social cognition skills develop throughout childhood (e.g., recognizing straightforward emotions, theory of mind), more complex skills emerge in adolescence and young adulthood, including recognizing complicated emotions (e.g., sexual/romantic interest, fear, contempt) or social perception skills (e.g., detecting white lies, irony, dares) [23,24,25]. Indeed, relative to children or adolescents, adults perform better on these more complex tasks, underscoring that social cognition abilities evolve from basic to more nuanced understandings of emotional expressions and social exchanges over time.
Even so, the trajectories of social cognition are understudied in people with intellectual disabilities in general [2], including those with PWS. Given their intellectual disabilities, people with PWS may show delays in social cognition skill acquisition, yet still follow a similar developmental course as the typically developing population. This possibility reflects the theoretical assumption that people with intellectual disabilities go through the same sequences of development as typically developing individuals, but at a slower rate [26]. Alternatively, cognitive impairments or other factors (e.g., hyperphagia, compulsions) may impose constraints on social cognitive skills, leading to an atypical developmental trajectory. Theoretically, this possibility presumes that those with intellectual disabilities have core deficits or features that set them apart from others and lead to altered developmental courses [27].
Finally, social cognition theorists increasingly appreciate that the ability to recognize emotions in oneself and others helps people accurately decipher social interactions. Lemerise and Arsenio [28] assert that while people bring their own emotional arousal or mood to social interactions, they also use the emotional cues of others to guide how they encode and interpret social situations. Similarly, Ladd and Crick [29] propose that emotions play an important role in social information processing, especially evaluating one's own responses to social interactions. Those who struggle to recognize basic emotions in others may thus be at a disadvantage in forming accurate perceptions of social situations.
In brief, the present study examined how 94 participants with PWS performed on two domains of social cognition (emotion recognition and social perception), assessed at two different time points. As the development of these skills is relatively understudied in people with intellectual disabilities, we were uncertain if people with PWS would show relative stability, gains, or atypical developmental patterns in emotion recognition or social perception skills. We thus first examined changes over time for the sample as a whole, and the effects of age on these changes. Second, we assessed relative strengths and weaknesses in emotion recognition and social perception. Based on previous literature [16], we expected to find strengths in recognizing happy relative to negative emotions. On the social perception task, we predicted that participants with PWS would perform better when provided with obvious or blatant cues (e.g., a sincere apology) as opposed to subtle, harder-to-read social cues (e.g., a sarcastic remark). Finally, we predicted that lower cognition and heightened inattention and/or inflexibility would detract from participants' performances. Controlling for these potential predictors, and consistent with the role of emotions in forming social percepts, we expected that basic emotion recognition skills would account for some of the variability in social perception task performance.
Participants
The sample included 94 children, adolescents, and adults aged 5 to 62 years with genetically confirmed PWS (45.7% male, 54.3% female). Longitudinal power analyses using standard parameters, alpha < .05, power = 80%, two-sided tests, and an effect size of 0.5, yielded a sample size of 63 [30]. The study was thus appropriately powered to detect medium to small effect sizes.
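The paper does not state which software produced the quoted sample size; purely as an illustration (an assumption, not the authors' procedure), the figure can be approximately reproduced with statsmodels:

```python
from statsmodels.stats.power import TTestIndPower

# d = 0.5, alpha = .05, power = 80%, two-sided, equal allocation
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.80, alternative='two-sided')
print(n_per_group)  # ~63.8, consistent with the reported sample size of 63
```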
Families were recruited from throughout the U.S. to participate in a longitudinal study on behavior and development in PWS. Given their wide age range, participants were divided into three age groups: children aged 5 to 10 years (n = 44); adolescents aged 11 to 19 years (n = 34); and adults aged 20 to 52 (n = 16); see Table 1 for the mean ages of these groups. These are developmentally appropriate age groups, and as previously discussed, they also index periods of growth (children, adolescents and young adults) or stability (adults) in social cognition skills in the general population. The distribution of PWS genetic subtypes, also noted in Table 1 [31], likely relates to cohort effects and the cognitive benefits of growth hormone treatment (GHT) [32], which is FDA approved for children and youth. As such, the effect of GHT status on task performance was examined, and IQ was controlled for in between age-group analyses.
We ensured that the 94 participants understood the expectations of the social cognition tasks and had adequate expressive language and verbal skills to complete the work at hand (see Table 1 for Verbal IQ's). An additional 4 individuals were excluded from the study as they did not appear to understand the tasks and/or their responses to the test stimuli were highly perseverative.
The test-retest interval ranged from 1.5 to 4 years, and averaged 2.28 years. Table 1 shows the mean test-retest interval across age groups, which were similar across groups. Test-retest intervals were not significantly associated with social cognitive tasks, and as such, were not used as a control variable in analyses.
Procedures
Prior to enrolling participants, this study was first approved by Vanderbilt University's IRB Social/Behavioral Sciences Committee. Consistent with University IRB regulations, parents of offspring with PWS provided written, informed consent for the study, and individuals with PWS provided written, informed assent.
Following consent or assent procedures, a test battery was individually administered in a quiet room by trained research assistants who were highly experienced in working with individuals with PWS and their families. The test battery included the following measures.
Social cognition assessments
Emotion recognition. Emotion recognition was assessed with 24 standardized photos from the Japanese and Caucasian Facial Expressions of Emotion (JACFEE) [33].
Stimuli included 11 Caucasian women and 13 men who depicted six emotions: happy, sad, angry, fear/afraid, disgust and contempt. Each emotion had four photos that were randomly administered using the prompt "I am going to show you pictures and I want you to tell me how that person feels. Are you ready?" Given their cognitive deficits, no time limits for responding were imposed. Additional prompts were used on an as-needed basis, e.g., "Take another look, how is (s)he feeling?" Responses based on physical attributes of the photo (e.g., "smiling") were prompted with "How does she feel when she's smiling?" Emotion recognition scoring. Scores ranged from 0 (none correct) to 4 (all correct) for each emotion. The following responses were scored as correct: Angry = angry, anger, mad, furious, irritated, pissed, grumpy, ticked off; Happy = happy, excited, silly, gleeful; Sad = sad, disappointed, unhappy, depressed, down in the dumps; and Afraid = scared, nervous, anxious, fearful, frightened, worried. Just 2 adults could reliably identify contempt or disgust at Time 1 or Time 2. Given this floor effect, these emotions were not included in analyses.
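As an illustration only, the scoring rule above maps directly onto a small lookup; the synonym sets are copied from the text, while the function itself is hypothetical:

```python
ACCEPTED = {
    "angry": {"angry", "anger", "mad", "furious", "irritated", "pissed",
              "grumpy", "ticked off"},
    "happy": {"happy", "excited", "silly", "gleeful"},
    "sad": {"sad", "disappointed", "unhappy", "depressed",
            "down in the dumps"},
    "afraid": {"scared", "nervous", "anxious", "fearful", "frightened",
               "worried"},
}

def emotion_score(emotion, responses):
    """Return 0-4: how many of the four photos for this emotion were
    answered with an accepted synonym."""
    accepted = ACCEPTED[emotion]
    return sum(r.strip().lower() in accepted for r in responses)
```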
Social perception task. Developed by Leffert and colleagues [34], this task is based theoretically on models of social cognition that distinguish between two related processes: encoding and interpreting social stimuli [35]. It was specifically designed to assess how individuals with intellectual disabilities attend to, encode, and interpret the intentions of others. The task consists of six brief, 1-2 minute videotaped vignettes using youth actors in a school setting that depict a problematic situation or negative event that places one of the characters at a disadvantage. Participants were instructed to pay special attention to the protagonist (e.g., "the boy with the blue shirt") because they would be answering questions about him/her and the story.
Three vignettes depicted negative events involving sincere/benign intentions in which either the negative event or the social cues differed in salience. In one, involving a subtle apology, two friends are doing their homework together, and as one leaves to get pencils the other student reaches for a clean piece of paper and unknowingly writes on the back of her friend's homework. Her friend returns and demands, "What are you doing? You scribbled all over the final copy of my book report!" to which the student embarrassingly says "uh oh!" The second, illustrating a blatant apology, depicts a student at her locker packing up her backpack. As she swings it on her back, it accidentally bumps into a student, causing her to drop her stack of unclipped papers, and exclaim "Oh no! Now they're all out of order!" to which the girl responds, "I'm so sorry! I should've been watching where I was going!" And the third portrays an excuse with strong emotions. Two boys plan to see a movie after school, but at the end of the day, one of them remembers that he has a doctor's appointment. He tells his friend that they can't see the movie, the peer becomes visibly aggravated and cries out "You're just telling me now?" to which the other boy defensively responds, "I told you I forgot that I had a doctor's appointment!"

Three scenarios conveyed insincere/hostile intentions, with insincere cues mixed with ostensibly benign cues. In one, depicting manipulation and rejection, a student asks to join two friends playing catch, and is told "Sure, but let us finish this game first. Do you mind grabbing a jump rope while you wait?" When she returns with the jump rope, the girl says "Thanks!", takes the jump rope, gives her the ball and skips away with her friend saying "Let's go play jump rope!" In another involving teasing and rejection, a group of girls is making fun of another student's clumsiness at sports. The student overhears them and says, "It's really not nice to call people names, you know" to which one of the girls sarcastically replies, "Oh, we weren't making fun of you, you are a really smart kid, and not everyone can be good at sports." In the third vignette depicting lying and rejection, a student approaches a game of jump rope, asks a girl who is swinging one end of the rope if she can play, and is told no, they already have enough players. Another girl approaches, also asks to play and is told, "Sure, you're next" to which the first girl asks, "How come you let her in and not me?" The student swinging the rope stalls for time, looks up, then away, and without looking at the girl eventually says "Well . . . um . . . ah, I already told her that she could play with us. Sorry."

Vignette scoring. At the end of each vignette, participants were asked, "What happened?" and prompted as needed (e.g., "Then what happened" or "Anything else?"). Their responses were recorded and evaluated against a checklist, provided by the test developers, of 7 to 9 pertinent events and social cues embedded in each story. For example, in the vignette with the dropped papers, the checklist includes 3 items related to the negative event (child A drops papers, papers are out of order, child A is upset) and 4 benign cues (child B didn't see child A when she swung her bag around; child B apologizes; child B adds "I should have been watching where I was going"; participant makes any mention of hitting the papers by accident).
The checklist thus indexes participants' observations of the presence (or absence) of those social cues that guide the accurate interpretation of the story. As vignettes varied in the number of hostile or benign cues, proportions were calculated for each type of response. The proportions of negative and benign cues were then summed across the 3 sincere and insincere vignettes, respectively.
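A hypothetical sketch of this proportion scoring (names and data shapes are illustrative, not the study's materials):

```python
def vignette_proportion(recalled, checklist):
    """Proportion of a vignette's checklist cues the participant mentioned."""
    return len(set(recalled) & set(checklist)) / len(checklist)

def summed_cue_scores(vignettes):
    """vignettes: iterable of (kind, recalled, checklist), with kind in
    {'sincere', 'insincere'}; sums proportions across the three vignettes
    of each kind, as described above."""
    totals = {"sincere": 0.0, "insincere": 0.0}
    for kind, recalled, checklist in vignettes:
        totals[kind] += vignette_proportion(recalled, checklist)
    return totals
```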
After recalling the story, participants were then asked to judge whether the protagonist was either "mean" or "not mean". Responses were scored as correct (i.e., not mean in the sincere scenarios, and mean in the insincere vignettes) or incorrect (mean in the sincere vignettes or not mean in the insincere scenarios).
Correlates of social cognition
Performances on the two social cognition tasks were examined in relation to: cognition (IQ), inattention, and needs for sameness/inflexibility.
Inattention. Inattention was assessed by the Child Behavior Checklist [36], a widely used, 113-item, parent-completed measure of problem behaviors. The CBCL has excellent psychometric properties, and has been used in other studies of individuals with developmental disabilities. The CBCL yields two broad domains (Internalizing, Externalizing problems) and 9 subdomains, including Attention Problems. The Attention Problems subdomain is comprised of 10 items, e.g., "Can't concentrate", "Inattentive", "Impulsive, acts without thinking" and "Fails to finish things (s)he starts". Given the age range of our sample, one item was modified to be appropriate for both children and adults (poor school/work evaluations). Consistent with prior CBCL studies that include both children and adults with developmental disabilities, Attention Problems raw scores were used in data analyses. Scores in our participants ranged from 1 to 19, with a mean of 7.50, SD = 3.87, see Table 1.
Cognition. Participants were individually administered the Kaufman Brief Intelligence Test-2 (KBIT-2) [37], which was designed for research and screening purposes. The KBIT-2 has been successfully used in previous studies of people with developmental disabilities. Compared to "short forms" of traditional IQ tests, the KBIT-2 has more robust psychometric properties [38]. The KBIT-2 provides standard scores (M = 100, SD = 15) for Verbal, Nonverbal and Composite IQ's. See Table 1 for mean scores across age groups.
Inflexibility. The Repetitive Behavior Scale-Revised RBS-R [39,40] was used to measure needs for sameness and inflexibility. The RBS-R assesses a wide range of restricted and repetitive behaviors in people with developmental disabilities. Informants complete 43 items using a four-point Likert scale: 0 = behavior does not occur; 1 = behavior occurs and is a mild problem; 2 = behavior occurs and is a moderate problem; and 3 = behavior occurs and is a severe problem. Data analyses used the 17 items that comprise the Sameness/Rituals factor of the RBS-R. This factor aptly reflects the inflexibility that is highly characteristic of PWS, including: "Resists changing activities", "Insists on the same routine household, work or school schedules every day"; "Becomes upset if interrupted in what he/she is doing"; and "Repetitive questioning or insisting on certain topics of conversation." Higher raw scores index more problems. Scores in our sample ranged from 3 to 42, with an overall mean of 17.36, SD = 7.10, see Table 1.
Statistical analytic plan
Emotion recognition data analyses. We first conducted four, 2 X 3 ANCOVA's (time by 3 age groups) for each of the 4 emotion recognition scores, controlling for IQ. ANCOVA's allowed us to identify main effects between test times for the group as a whole, between age groups, and possible time by age group interactions. Changes within each age group were assessed by matched t-tests comparing Time 1 and Time 2 emotion recognition scores. Collapsing across age and assessment times, relative strengths or weaknesses in recognizing specific emotions were assessed in a within-group, repeated measure ANOVA. Again collapsing across assessment times and age, Pearson correlations were conducted between emotion recognition scores and age, Composite IQs, the CBCL's Attention Problem subdomain and the RBS-R Sameness/Rituals Domain.
Finally, an error analysis was conducted to determine if participant's incorrect responses were markedly discrepant from the negative emotions portrayed in the facial stimuli. Incorrect responses were reviewed and categorized as: happy or another positive affective state (e.g., "proud", "courageous"); one of the other emotions under study; or a more general negative statement (e.g., "grossed out", "confused", "tired", "woozy", "stressed", "bored.") Social perception data analyses. Two 2 X 3 ANCOVAs (time by age group) were conducted with the sincere and insincere cue scores, again controlling for IQ. Matched t-tests of Time 1 and Time 2 sincere and insincere cue detection scores assessed changes within each age group. Correct responses to "mean" or "not mean" judgments were summed for the sincere versus insincere vignettes (range = 0 to 3). Similar to cue detection scores, 2 X 3 ANCO-VA's identified differences in judgments scores between time points and age groups, and match t-tests assessed change over time within each age group.
Relative strengths or weaknesses in overall vignette performances were first assessed in a matched t-test between the sincere versus insincere cue detection scores. Then, two repeated-measure ANOVA's identified relative strengths or weaknesses in participants' abilities to accurately interpret the "not mean" or "mean" intention of protagonists within the three sincere and insincere vignettes, respectively.
Collapsing across age groups and time, Pearson correlations were conducted with the two cue detection scores and the combined total mean of these scores, with age, Composite IQs, the CBCL's Attention Problem subdomain, the RBS-R Sameness Domain, and emotion recognition scores.
Relations between emotion recognition and social perception. A hierarchical multiple linear regression determined if emotion recognition predicted the detection of pertinent social cues, regardless of valence. As such, the dependent variable was the combined mean of sincere and insincere cue scores. Predictors in the first block included: age, Composite IQ, the CBCL Attention Problems and RBS-R Sameness/Rituals domain. Controlling for effects of these variables, the second block included the four emotion recognition scores.
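A minimal statsmodels sketch of this two-block hierarchy (column names are placeholders, not the study's variable names):

```python
import statsmodels.api as sm

def hierarchical_regression(df):
    """Block 1: control variables; Block 2: controls plus the four emotion
    recognition scores. Returns R^2 for block 1 and the R^2 increment."""
    block1 = ["age", "composite_iq", "attention_problems", "sameness_rituals"]
    block2 = block1 + ["happy", "sad", "afraid", "angry"]
    y = df["total_cue_score"]
    m1 = sm.OLS(y, sm.add_constant(df[block1])).fit()
    m2 = sm.OLS(y, sm.add_constant(df[block2])).fit()
    return m1.rsquared, m2.rsquared - m1.rsquared
```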
Effect sizes. Effect sizes were calculated for all analyses. ANOVA effect sizes were estimated with partial eta squared, ηp², and the regression used R² and ηp². For matched t-tests, Cohen's d was calculated using the formula for paired samples.
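The exact paired-samples formula is not reproduced in the text; a common convention (assumed here) divides the mean change by the standard deviation of the change scores:

```python
import numpy as np

def cohens_d_paired(time1, time2):
    """Paired-samples Cohen's d: mean difference / SD of differences."""
    diff = np.asarray(time2, dtype=float) - np.asarray(time1, dtype=float)
    return diff.mean() / diff.std(ddof=1)
```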
Preliminary analyses
ANOVAs determined if gender or growth hormone treatment status had a significant impact on emotion recognition and social perception scores, and would need to be controlled for in subsequent analyses. No significant differences emerged; see Tables A and B in S1 File. Analyses comparing PWS genetic subtypes likewise revealed no significant differences; see Table C in S1 File. Finally, although the test-retest interval was not correlated with social cognition scores, we re-ran analyses with test-retest interval as a covariate. No effects of this time interval were found. Thus, none of the possible covariates (genetic subtype, gender, test-retest interval) were included in final analyses.
Emotion recognition
Change over time. Table 2 presents mean emotion recognition scores across age groups and assessment times. All three age groups made significant gains over time in the recognition of fear. Adolescents also showed an improvement in anger recognition scores, t(33) = -2.86, p = .007, d = .48. No significant changes were found within any age group in the recognition of sad or happy.
Relative strengths and weaknesses. A repeated measures ANOVA was conducted with the three negative emotions for the sample as a whole, collapsed across age groups and time (see Table 2 for means). Happy was not included as preliminary analyses indicated that these scores were significantly higher than all other emotions, with participants performing at near ceiling levels at both assessments. Mauchly's Test indicated that the assumption of sphericity was met; W = .99, χ²(2) = 1.22. Significant differences emerged between the 3 negative emotions; F(1,186) = 39.61, p < .001; ηp² = .17. Bonferroni post-hoc comparisons revealed that participants had higher anger versus fear or sad scores, with no difference between fear and sad. Participants earned an average of 56% correct for sad, 50% correct for fear, and 73% for anger.
Correlates. Pearson correlations were conducted using data collapsed across both time and age group. Anger recognition was negatively associated with Attention Problems, r(187) = -.20, p = .011. KBIT-2 Composite IQs were correlated with recognition scores for fear, r(187) = .24, p < .001, and sad, r(187) = .23, p < .001. Error analysis. A review of errors indicated that participants' incorrect responses to negative emotions generally had a negative valence. Sad, for example, was misconstrued as anger in 53% of incorrect responses, and anger was misinterpreted as sad in 44.3%; see Table D in S1 File. Although the majority of errors identifying fear had a negative valence (73%), 11.4% were misconstrued as surprised, and 11.1% of errors had a positive valence (e.g., happy, courageous, proud). A mismatch of valence was also found in responses to sad faces (16.5% positive valence), but was relatively infrequent in anger (4.6%).
Change over time: Judgments. Table 4 presents mean scores for the correct "not mean" or "mean" judgments for the sincere/benign versus insincere/hostile vignettes (range = 0 to 3). As reflected in the total mean scores in Table 4, the ANCOVA assessing correct responses to the benign vignettes was not significant. In contrast, the ANCOVA of insincere/hostile vignettes was significant, F(6, 187) = 5.05, p < .001, ηp² = .18, with a main effect for age group, F(2, 187) = 8.56, p < .001, ηp² = .11, such that children scored lower than adolescents and adults. This main effect, however, was qualified by a significant age by time interaction, F(2, 187) = 3.56, p = .031, ηp² = .05. Bonferroni post-hocs revealed that children scored lower than remaining groups at Time 1 only.
Within age groups, matched t-tests revealed significant improvements in children's judgments of mean intentions portrayed in the insincere/hostile vignettes, t(43) = -2.87, p = .007, d = .60, but in no other age group. Exploring this finding, follow-up t-tests revealed that children made significant gains in all three of the insincere/hostile vignettes. Relative strengths and weaknesses. Robust strengths were found in the detection of insincere/hostile cues relative to sincere/benign cues, t(93) = -14.83, p < .001, d = 1.75. Participants were also more likely to make the correct "mean" judgments in the insincere/hostile vignettes compared to the correct "not mean" conclusion in the sincere/benign vignettes, t(93) = 3.05, p = .003, d = .41. A within-group, repeated measure ANOVA determined if participants were better at judging the intent of story protagonists in any one of the three sincere/benign vignettes. Mauchly's test indicated that the assumption of sphericity was met; W = .96, χ²(2) = 1.19. A significant effect was found, F(2,374) = 20.71, p < .001, ηp² = .17. Bonferroni post-hoc analyses indicated that participants had relative strengths in judging the emotional excuse scenario, and weaknesses in the subtle apology vignette. The within-subjects repeated measure ANOVA assessing the three insincere/hostile vignettes was not significant, F(2, 374) = 1.72, p = .18, indicating relatively even performances across these scenarios.
Correlates. Collapsing across test times and ages, Table 5 summarizes Pearson correlations of sincere/benign and insincere/hostile cue scores, and the total cue score, with age, KBIT-2 Composite IQ, CBCL Attention Problems, RBS-R Sameness/Rituals, and the four emotion recognition scores. Inattention was negatively associated with cue detection, while IQ and recognition of sadness, fear and happy were positively correlated with these scores. Correlations were not significant between vignette judgment scores and IQ, attention problems, and emotion recognition.

Table 4. Mean (SD) correct judgments of either "mean" or "not mean" to the sincere/benign and insincere/hostile vignettes across age groups.
Predictors of social perception
The hierarchical multiple linear regression identified the extent to which participants' abilities to recognize emotions facilitated their detection of social cues in general, regardless of valence. Two factors led to the decision to use total cue scores. First, the pattern of correlations in Table 5 was remarkably similar for the two cue valence scores and the total cue score. Second, even though relative strengths were found in detecting negative cues, there is no theoretical justification for the idea that predictors of cue detection will vary by cue valence. Without a framework for interpreting the results, the total score offered a parsimonious solution that addressed our hypothesis that emotion recognition, or other variables, would predict the detection of pertinent cues. The dependent variable was the mean of the benign and hostile proportion scores, collapsed over time and age groups. Predictors in the first model included four control variables that could detract from, or facilitate, cue detection: age, KBIT-2 IQ, and the CBCL Attention Problems and RBS-R Sameness/Rituals subdomains. Controlling for these variables, the second model added sad, fear, happy and angry recognition scores. We ensured that assumptions were met regarding linearity, collinearity, outliers, and normality of data.
The first model was significant, F(4,184) = 7.23, p < .001, R² = .16, with effects emerging for age, IQ and Attention Problems. The second model was also significant, ΔF(8,184) = 6.55, p < .001, ΔR² = .13, with additional effects noted for recognition of happy and sad. Overall, the model accounted for 29% of total variance in social cue scores; results for each model are summarized in Table 6. As indexed by standardized β's, recognition of happy was the strongest predictor, followed by age, sadness recognition, IQ, and attention problems.
Discussion
Successfully navigating the social world depends on recognizing emotions and using social cues to determine the intentions of other people. Despite some gains over time, people with PWS have deficits in both of these critical areas of social cognition. Findings have novel implications for interventions to help people with PWS strengthen or acquire specific social cognition skills. Participants readily identified happy, and they were significantly better at identifying anger than sadness or fear, a pattern also seen in a previous study [16]. Over time, all three age groups made significant gains in their recognition of fear. With the exception of adolescents, these gains reflected medium to large effects. Adolescents also improved in their recognition of anger. In general, however, performances were still relatively poor, with participants earning an average of 50% and 56% correct for fear and sad, respectively. No significant gains were seen in the recognition of sad, and sad was often mistaken for anger, and anger for sad.
Typically developing children reliably recognize basic human emotions from facial expressions (specifically, happy, sad, angry) by 4 to 6 years of age [24,25]. In a large, normative sample of children, rates of identifying happy, sad and anger in 6 year olds did not differ from 16 year olds. Fear recognition, however, increased gradually in this cohort such that 9 to 10 year olds were approximately 50% accurate, and 16 year olds, 76% accurate [24]. Thus, fear and other complex or more nuanced emotions (e.g., disgust, contempt, surprised) continue to evolve in adolescence and young adulthood [24,41].
Compared to these normative trajectories, the developmental course of those with PWS was strikingly atypical. Older participants did not perform better than younger ones, and within age groups, improvements over time in fear and anger essentially brought the three age groups to the same level of performance. Participants achieved a certain level of competency, but advancing age did not necessarily lead to higher scores. Unlike the general population, then, people with PWS appear to have an atypical developmental trajectory in emotion recognition.
The errors made by participants further implicate an altered developmental trajectory of emotion recognition in PWS. When they were incorrect, participants often confused sadness and anger, and fear with surprise or other negative emotions. Widen [42] proposes that young children initially use valence to differentiate within two broad emotion recognition categories: "feels good" versus "feels bad." Toddlers and young children start by identifying one emotion, happy. They then recognize either sad or angry, and by age 4, they can differentiate sad from anger. Children subsequently recognize either fear or surprised, and over time, differentiate between these two. Participants' incorrect responses to angry, sad or fear generally had a negative valence or overtone (e.g., "stressed out", "bored", see Table D in S1 File), reflecting the "feels bad" category. As such, they could make good use of valence, but their confusion between sad and angry reflects difficulties at the earliest stages of emotion recognition development.
Consistent with previous work [16], KBIT-2 Composite IQ's were significantly correlated with emotion recognition scores for fear and sad. Thus, the cognitive resources of individuals with relatively high IQ's may have enabled more accurate scores. General cognitive ability also plays an important role in emotion recognition in typically developing children and adolescents [24], and in other genetic, neurodevelopmental disabilities, including Williams syndrome and Down syndrome [43]. Martínez-Castilla and colleagues [43] concluded that impaired cognitive functioning in individuals with both of these syndromes constrained their emotion recognition performance, which advanced to a certain level, then became static and did not improve with advancing age.
Future studies are needed to establish how emotion recognition deficits in people with PWS impact their social exchanges with others. Successful social interactions require abilities to detect and respond to shifts in the mood states of other people, or to share affect with them. Being emotionally aware of, or in tune with, the affect of others is instrumental in establishing close, reciprocal relationships [44]. A reasonable prediction for further study is that emotion recognition impairments in people with PWS are associated with their generally poor peer relationships [9].
Several themes emerged from participants' responses to the social perception tasks. First, the sample as a whole improved over time in their ability to detect pertinent social cues, regardless of valence. Between-age-group findings point to a developmental progression in the detection of sincere/benign cues, with significant increments in scores between children, adolescents and adults. Increases were more substantial for the sincere cues, in part because of the significant, meaningful strength in participants' detection of negative or hostile social cues. In addition to their relative strength in observing hostile or negative cues, participants were also more likely to correctly interpret the "mean" intent of protagonists in the insincere/hostile (versus sincere/benign) vignettes. Participants thus performed relatively well detecting peer rejection in the context of trickery, lying, and ridicule.
In contrast, participants performed poorly in judging the sincere intentions of others in the vignettes depicting accidental mishaps. Although participants noticed more sincere cues over time, they did not necessarily use these cues to form increasingly accurate interpretations of these vignettes. No significant improvements over time were found in participants' interpretations of the sincere vignettes, as assessed both within and between age groups.
Even so, meaningful differences emerged across the sincere/benign vignettes that partially support the hypothesis that more salient cues would facilitate performance. Consistent with our prediction, participants performed poorly on the paper-scribbling vignette, with its subtle "uh-oh" embarrassed apology. Surprisingly, however, most did not take advantage of a clear, blatant cue ("I'm so sorry. I should have been watching where I was going") in order to draw the correct "not mean" conclusion in the dropped-papers vignette. Unexpectedly, they performed significantly better in the vignette that lacked a clear apology but instead included a strong emotional exchange. Perhaps, then, affect salience facilitated participants' correct perceptions of this social situation.
Why, then, did most participants make the wrong call when judging the sincere intentions of others, and the correct call in perceiving hostile intentions? Several explanations seem plausible. First, social interactions present complex, divergent stimuli that compete for attention, and social cues that arouse and dominate are more likely to be remembered [45]. The gravity of the negative events won out (e.g., the anger of the girl with dropped papers), and outweighed participants' perceptions or recall of opposing or lower-priority cues (e.g., the apology from the person who bumped into her).
The salience of negative events is also reflected in the predictors of social cue detection. As expected, emotion recognition skills, specifically for happy and sad, were significant predictors of participants' detection of cues, accounting for 11.4% of variance. Surprisingly, however, emotion recognition abilities were not associated with accuracy in judging the "mean" or "not mean" intentions of protagonists. Thus, even when participants had emotion recognition cues to employ to their benefit, they still could not move beyond the negative event itself to correctly interpret the sincere intentions of others.
Second, cognitive and attentional processes are recruited in perceiving social cues; one must attend to cues in order to encode and interpret them. Both IQ and attention problems emerged as significant predictors of cue detection scores. Although cognition explained more variance in cue detection than inattention, attending to pertinent social cues is an essential step in accurately deciphering social interactions.
Third, participants' relative strengths in detecting and judging hostile scenarios may relate to a phenotypic tendency to perceive the world through a negative lens. A "negative personality" has previously been noted in people with PWS, primarily as a shorthand way to capture such problems as irritability, argumentativeness, inflexibility, and insistence upon sameness [31]. Moving beyond such behaviors, Key et al. [46] used event-related potentials to assess neural responses of 24 adolescents and young adults with PWS to social and nonsocial stimuli that had both positive and negative emotional valences. Larger anterior late positive potential amplitudes were found for negative (versus positive) nonsocial stimuli and facial expressions at right fronto-temporal locations. Results suggest a bias toward negative emotional expressions (e.g., angry face) and nonsocial stimuli (e.g., mean-looking dog).
It is important to emphasize, however, that people in general have a negativity bias that includes, among other factors, greater attention and cognitive processing of negative stimuli, and ascribing more complexity to negative stimuli [47]. Unlike the general population, however, a negativity bias in people with PWS may be especially problematic as the syndrome's phenotype includes proneness for such features as irritability, argumentativeness, and inflexibility.
Although this is the first study to longitudinally examine both social perception and emotion recognition in PWS, several study limitations deserve mention. First, although the emotion recognition task in this study is standardized and widely used, it is static in nature. Skwerer and colleagues [48] administered both static emotion recognition (restricted to the eye region) and dynamic facial expressions (short video clips) to individuals with Williams syndrome and others with intellectual disabilities. Relative to the static eyes, both groups performed better with dynamic facial stimuli. Moving faces are ecologically valid, yet people with PWS, like those with other developmental disorders, may be slow in recognizing affect from dynamic stimuli [49]. If so, they could be disadvantaged in processing facial expressions as they occur in vivo, especially in complex or fast-paced social interactions.
In a related weakness, the study did not assess the many cues, aside from emotion recognition, that people use to interpret social situations. Extending beyond the spoken word, such cues include nonverbal body language, and such acoustic properties of speech as tone of voice, intonation, pitch or prosody [3]. Participants with PWS were more apt to accurately interpret the "not mean" intention of the protagonist in the scenario involving a loud and emphatic emotional exchange. Perhaps, then, future studies could explore how people with PWS use the vocal properties of speech, as well as nonverbal cues, to guide their social perceptions.
Third, we administered a social perception task that has not been widely used, and with unknown test-retest reliability. Even so, given the two-year average lag time between assessments, it is unlikely that results are attributable to practice effects. Although other social perception tasks and questionnaires exist, they were primarily designed for patients with schizophrenia or other psychiatric disorders [50]. Given our study sample, we instead opted to use a measure with a sound theoretical basis that was specifically developed for individuals with intellectual disabilities.
An additional weakness is that the study used a measure that captures the behavioral manifestations of inflexibility in PWS, but is not a sensitive index of cognitive inflexibility. The RBS-R Sameness/Rituals domain was not associated with social cognition performance, yet cognitive flexibility is crucial for accurate social perceptions and interactions [18]. Future social cognition studies might thus administer tasks that tap elements of cognitive switching that are known to be impaired in PWS, including engaging and disengaging attention, response inhibition, and task set reconfiguration [19]. Until such studies are performed, cognitive inflexibility should not be ruled out as a possible contributor to social cognition deficits in PWS.
Finally, the study did not identify differences in social cognition in PWS relative to people with intellectual disabilities in general, or with other genetic syndromes. In this vein, Leffert et al. [34] also found that children with intellectual disabilities had trouble getting beyond the salience of negative events to make correct judgments of vignettes. It thus remains unknown if emotional recognition or social perception findings are distinctive to PWS. Although future work might include comparison groups, findings nevertheless have important implications for interventions in PWS.
Social cognition and social skills interventions abound in other disorders, especially autism spectrum disorder and schizophrenia. Recent social cognition intervention advances in these disorders include: integrating modeling from typical peers in theater productions [51]; creating diverse virtual reality training platforms [52]; using robotics to teach specific skills [53]; simultaneously treating multiple social cognitive domains [54]; using brain imaging as a biomarker for response to social cognitive treatment [55]; and implementing multi-modal trials that combine medication with social training [56].
In contrast, no studies have been published on the effectiveness of any type of formal social training or intervention program in PWS. The current study, however, points to specific targets for future interventions. The detection of pertinent social cues may improve with age, but the accurate interpretation of them does not. Interventions might thus focus on how to understand social cues, including basic emotions, and translate them into accurate judgments of social exchanges. Areas of relative strength (i.e., recognition of happy, angry, interpreting insincere/hostile cues) could be used as a template for addressing weaknesses in judging sincere intentions. Throughout, the focus should be on using social cues as evidence that can alter faulty thinking that leads to misperceptions of social interactions [57].
People with PWS have historically been described as "egocentric" [58], yet all people are egocentric. We must deliberately jettison our own perceptions in order to see the world through the eyes of others. Although these processes unfold in the typical population with learning and maturation, innovative interventions are increasingly and successfully used in patient populations with deficits in social cognition, including those with neurodevelopmental disabilities.
Supporting information

S1 File. Table A. Mean scores and (standard deviations) for emotion recognition and social perception tasks across gender. Table B. Mean scores and (standard deviations) for emotion recognition and social perception tasks across participants on growth hormone treatment (GHT) (n = 62) versus treatment-naïve participants (n = 32). Table C. Mean scores and (standard deviations) for emotion recognition and social perception tasks across genetic subtypes of PWS.
"year": 2019,
"sha1": "3bc56d56b089b72be79f32116f84683130fe002b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0223162&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b968d6e90815226e7247099d1793a0a2c1384332",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
A Challenge: Pulmonary Sclerosing Haemangioma
A 48-year-old woman presented with a one-month history of chest pain, without obvious inducement and without cough, sputum, hemoptysis, or chest tightness. The chest pain was irregular. The patient was admitted to the First Hospital of Jilin University for further treatment. She denied any history of hypertension, coronary heart disease, diabetes mellitus, or other diseases, as well as any family history of hereditary disease. However, she had a 20-year smoking history of about 10 cigarettes per day and had not quit. Physical examination and blood tests showed no apparent abnormality. All tumor markers were negative. Computed tomography (CT) showed a high-density mass in the right upper lobe of the lung, measuring about 17 mm × 15 mm [Figure 1a]. The boundary was smooth, and the density was uniform. A narrow arc of fluid was seen in the right thoracic cavity, and there was no significant lymph node (LN) metastasis in the mediastinum [Figure 1b]. The patient underwent thoracoscopic exploration; intraoperative rapid pathology suggested adenocarcinoma, and the patient therefore underwent resection of the upper lobe of the right lung. However, the final pathology revealed pulmonary sclerosing hemangioma (PSH). Microscopic examination of the postoperative specimen showed a mixture of papillary and sclerotic patterns with two cell types: cuboidal surface cells and round stromal cells. The cuboidal surface cells resembled pneumocytes, and the round stromal cells had well-defined borders with centrally located, round-to-oval vesicular nuclei and rare nucleoli. The round stromal cells mostly had slightly eosinophilic cytoplasm, with some showing a more vacuolated or foamy appearance. In other areas, there were large blood-filled spaces lined by flattened cells. Solid sheets of round cells with scattered cuboidal surface cells forming small tubules were also noted [Figure 1c]. The round stromal cells were positive for thyroid transcription factor-1 (TTF-1) and vimentin but negative for epithelial membrane antigen (EMA) and pan-Cytokeratin, whereas the cuboidal surface cells were positive for TTF-1, vimentin, EMA, Ki-67, and pan-Cytokeratin [Figure 1d-1h].
Comment
PSH is a rare tumor of the lung, first reported by Liebow and Hubbell. [1] It occurs more frequently in Asian women than in Western women. [2] Most patients are reported to be asymptomatic and are discovered incidentally during routine checkups, while some have chronic symptoms such as cough, chest pain, and blood-streaked sputum. Many aspects of PSH remain unclear because of its rarity. Chest CT often shows an isolated, well-circumscribed mass. Immunohistochemical studies suggest that the tumor is mainly derived from primitive respiratory epithelium, with features of incompletely differentiated type II pneumocytes or Clara cells. Histologically, the tumor mainly shows four patterns: papillary, sclerotic, solid, and hemorrhagic.
PSH is very difficult to identify by chest CT alone and is often misdiagnosed as lung cancer, inflammatory pseudotumor, hamartoma, or other lung diseases, because published clinical and imaging data are scarce. [3] The gold standard for diagnosis of this disease is pathological evidence. However, pathologic results can also lead to misdiagnosis. In this case, the intraoperative rapid pathology misdiagnosed the mass as pulmonary adenocarcinoma. Saha et al. [3] reported a case of PSH that was misdiagnosed as lung adenocarcinoma on fine-needle aspiration cytology.
The only treatment for PSH is surgical resection. [4] For patients with PSH, systemic LN dissection remains controversial, because the possibility of regional LN metastasis is very low and the prognosis is good even for patients with LN metastasis. [5] Hu et al. [5] reported that patients with bilateral multiple tumors and pleural metastases had no recurrence or metastasis during follow-up of 4.2-13.5 years. Kim et al. [2] reported a case of PSH with bone metastasis, indicating that PSH can metastasize not only to LNs with benign histologic features but also to bone with a malignant histology. The operation performed in this case was therefore appropriate.
PSH presents both an opportunity and a challenge for clinicians, radiologists, and pathologists. Clinicians need to combine clinical symptoms with imaging and pathologic results to improve diagnostic accuracy. As the understanding of PSH deepens, there will be less controversy about the treatment and prognosis of this disease.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient has given her consent for her images and other clinical information to be reported in the journal. The patient understands that her name and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
This study was supported by a grant from the Natural Science Foundation of Jilin Province (No. 20180101099JC).
"year": 2018,
"sha1": "bf92ed5af77a9379f9169f9e4f407979e8b3fec8",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0366-6999.239689",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "496cc895ff1cc874646fac63f147049fc472df35",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Design of a randomised controlled trial on immune effects of acidic and neutral oligosaccharides in the nutrition of preterm infants: carrot study
Background: Prevention of serious infections in preterm infants is a challenge, since prematurity and low birth weight often require many interventions and a high use of devices. Furthermore, the possibility to administer enteral nutrition is limited owing to the immaturity of the gastrointestinal tract in the presence of a developing immune system. In combination with delayed intestinal bacterial colonisation compared with term infants, this may increase the risk of serious infections. Acidic and neutral oligosaccharides play an important role in the development of the immune system, intestinal bacterial colonisation and the functional integrity of the gut. This trial aims to determine the effect of enteral supplementation of acidic and neutral oligosaccharides on infectious morbidity (primary outcome), immune response to immunizations, feeding tolerance and short-term and long-term outcome in preterm infants. In addition, an attempt is made to elucidate the role of acidic and neutral oligosaccharides in postnatal modulation of the immune response and postnatal adaptation of the gut.
Methods/Design: In a double-blind placebo-controlled randomised trial, 120 preterm infants (gestational age <32 weeks and/or birth weight <1500 grams) are randomly allocated to receive enteral acidic and neutral oligosaccharide supplementation (20%/80%) or placebo supplementation (maltodextrin) between day 3 and day 30 of life. The primary outcome is infectious morbidity (defined as the incidence of serious infections). The role of acidic and neutral oligosaccharides in modulation of the immune response is investigated by determining the immune response to DTaP-IPV-Hib(-HBV)+PCV7 immunizations, plasma cytokine concentrations, and faecal calprotectin and IL-8. The effect of enteral acidic and neutral oligosaccharide supplementation on postnatal adaptation of the gut is investigated by measuring feeding tolerance, intestinal permeability and intestinal viscosity, and by determining the intestinal microflora. Furthermore, short-term and long-term outcomes are evaluated.
Discussion: Preterm infants, who are at increased risk of serious infections, may particularly benefit from supplementation of prebiotics. Most studies with prebiotics focus only on the colonisation of the intestinal microflora. However, the pathways by which prebiotics may influence the immune system are not yet fully understood. Studying the immunomodulatory effects is complex because of the multicausal risk of infections in preterm infants. The combination of neutral oligosaccharides with acidic oligosaccharides may have an increased beneficial effect on the immune system. Increased insight into the effects of prebiotics on the developing immune system may help to decrease the (infectious) morbidity and mortality in preterm infants.
Trial registration: Current Controlled Trials ISRCTN16211826.
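The abstract specifies 1:1 double-blind allocation of 120 infants but not the randomisation mechanics; the following permuted-block sketch is purely illustrative (block size and seed are assumptions, not the trial's actual procedure):

```python
import random

def allocation_list(n=120, block_size=4, seed=1):
    """Hypothetical permuted-block 1:1 allocation list for the two arms."""
    rng = random.Random(seed)
    arms = []
    for _ in range(n // block_size):
        block = (["GOS/FOS + AOS"] * (block_size // 2)
                 + ["placebo (maltodextrin)"] * (block_size // 2))
        rng.shuffle(block)  # permute within each block of 4
        arms.extend(block)
    return arms
```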
Background
Preterm infants are at increased risk for the development of serious nosocomial infections, especially very low birth weight infants at a NICU [1]. In a recent review of the literature, we found that the intestinal bacterial colonisation in preterm infants is much more diverse than in term infants and that antibiotics cause a significant delay in the intestinal bacterial colonisation [2]. Furthermore, the possibility to administer enteral nutrition is limited due to immaturity of the gastrointestinal tract in the presence of a developing immune system.
Human milk has anti-inflammatory effects and bifidogenic effects on the intestinal microflora [3,4]. Term breastfed infants have fewer infections and develop less atopy compared with formula-fed infants [5,6]. Many factors have been implicated in this effect, including human milk oligosaccharides [7,8]. Many attempts have been made to mimic this effect of human milk. Addition of prebiotics, consisting of neutral oligosaccharides, to infant formula has been found to show potential advantageous effects in term and preterm infants [9,10]. Besides neutral oligosaccharides, breast milk also contains acidic oligosaccharides [8]. In the past, research has mainly focussed on neutral oligosaccharides such as galacto-oligosaccharides and fructo-oligosaccharides (GOS/FOS). Supplementation of GOS/FOS in term and preterm infants results in: 1. Stimulation of a bifidogenic intestinal flora [11,12]; 2. Reduction of pathogens in the intestine [12]; 3. Production of beneficial fermentation metabolites such as short chain fatty acids (SCFA) [10]; 4. Decrease of stool pH [13]; 5. Improved intestinal physiology (stool characteristics, motility) [14]; 6. Fewer infections and less atopy [15,16].
In breast milk, 80% of the oligosaccharides are neutral (as in GOS/FOS) and 20% are acidic. Acidic oligosaccharides (AOS) can be derived from carrots, in which pectin is the active component. Pectin is a common structural component of all higher plants. Cooking of pectin-containing vegetables induces the cleavage of the long-chain pectin polymers into acidic oligosaccharides. Carrots have been known to have health-promoting effects for nearly 100 years: in 1908, carrot soup was used as a treatment for diarrhoea [17], and in 1997, Guggenbichler identified the anti-adhesive effect of acidic oligosaccharides [18].
As a result of these effects, we hypothesise that preterm infants receiving a combination of GOS/FOS with AOS may have: 1. Fewer infections; 2. Better response to immunizations; 3. Less atopy later in life; 4. Less feeding intolerance.
As infections are still a major cause of morbidity and mortality in preterm infants, reducing the incidence of serious infections is very important. Controversy exists about the definitions of serious infections in neonates. Therefore, in a previous study, we adjusted the criteria of the Centers for Disease Control and Prevention for serious infections in children <1 year for use in neonates [1] and, in a prospective study, found these criteria applicable to preterm infants [22].
In conclusion, this double-blind randomised controlled trial aims to determine the effect of enteral supplementation of acidic and neutral oligosaccharides on infectious morbidity (primary outcome), immune response to immunizations, feeding tolerance and short-term and long-term outcome in preterm infants. In addition, an attempt is made to elucidate the role of acidic and neutral oligosaccharides in postnatal modulation of the immune response and postnatal adaptation of the gut.
Methods/Design
The study is designed as a double-blind placebo controlled randomised clinical trial. Approval of the study protocol by the medical ethical review board of VU University Medical Center Amsterdam is obtained before the start of the study.
Study population
Infants with a gestational age <32 weeks and/or birth weight <1500 gram admitted to the level III neonatal intensive care unit (NICU) of the VU University Medical Center, Amsterdam, are eligible for participation in the study. Written informed consent is obtained from all parents.
Exclusion criteria are: major congenital or chromosomal anomalies, death <48 hours after birth, transfer to another hospital <48 hours after birth and admission from an extra regional hospital.
Treatment allocation and blinding
To balance the birth weight distribution across treatment groups, each infant is stratified into one of three birth weight groups (≤799 g, 800-1199 g, ≥1200 g) and randomly allocated to treatment within 48 hours after birth. An independent researcher uses a computer-generated randomisation table (provided by Danone Research, Friedrichsdorf, Germany) to assign infants to treatment N or O. Investigators, parents, and the medical and nursing staff are unaware of treatment allocation. The randomisation code is broken after data analysis is performed.
Treatment
Acidic and neutral oligosaccharides powder and the placebo powder (maltodextrin) are prepared by Danone Research, Friedrichsdorf, Germany and are packed sterile. During the study period, acidic and neutral oligosaccharides and placebo powder are monitored for stability and microbiological contamination.
Between days 3 and 30 of life, acidic and neutral oligosaccharides supplementation (20%/80% mixture) is added at a maximum dose of 1.5 g/kg/day to breast milk or preterm formula in the intervention group. Two members of the nursing staff add the supplementation daily to breast milk or to preterm formula (Nenatal Start®, Nutricia Nederland B.V., Zoetermeer, The Netherlands), according to the parents' choice. Per 100 mL, Nenatal Start® provides 80 kcal, 2.4 g protein (casein-whey protein ratio 40:60), 4.4 g fat, and 7.8 g carbohydrate. When infants are transferred to another hospital before the end of the study, the protocol is continued under supervision of the principal investigator (EW).
Nutritional support
Protocol guidelines for the introduction of parenteral and enteral nutrition follow current practice at our NICU. Nutritional support is administered as previously described [23].
For each infant in the study, a feeding schedule is proposed based on birth weight and the guidelines as mentioned above. However, the medical staff of our NICU has final responsibility for the administration of parenteral nutrition and advancement of enteral nutrition.
After discharge, all infants receive breast milk or preterm formula Nenatal Start® (without GOS/FOS) until term, and Nenatal 1® (without GOS/FOS) until the corrected age of 6 months.
Study outcome measures
Clinical outcome measures
The primary outcome of the study is the effect of acidic and neutral oligosaccharides (20%/80% mixture) supplemented to the enteral nutrition on infectious morbidity as previously defined [1,22]. The occurrence of serious infections is determined by two investigators, unaware of treatment allocation, as previously described [1,22].
The following perinatal characteristics are registered to assess prognostic similarity: maternal age and race, obstetric diagnosis, administration of antenatal steroids and antibiotics, mode of delivery, sex, gestational age, birth weight, birth weight <10th percentile [24], Apgar scores, pH of the umbilical artery, clinical risk index for babies [25], and administration of surfactant.
During the study period, actual intake of enteral and parenteral nutrition, powder supplementation and type of feeding (breast milk or preterm formula) are recorded daily. Feeding tolerance and short-term outcome are evaluated (Table 1).

Immune response
The effect of acidic and neutral oligosaccharides supplemented enteral nutrition on the immune response is investigated, in collaboration with the National Institute for Public Health and the Environment, by determining the development of the immune response to DTaP-IPV-Hib(-HBV)+PCV7 immunizations (after the first 3 doses), and the development of the memory function of the immune response to these immunizations by measuring the response after the 4th booster dose. In addition, the plasma cytokine concentrations (IL-2, IL-4, IL-5, IL-8, IL-10, TGF, IFN), faecal Calprotectin measured by ELISA (Buhlmann, Switzerland), and IL-8 measured by random-access chemiluminescence immunoassay (Siemens, The Netherlands) are determined.
Postnatal adaptation of the gut
The effect of acidic and neutral oligosaccharides supplemented enteral nutrition on postnatal adaptation of the gut is studied by measuring feeding tolerance, intestinal permeability, intestinal microflora and intestinal viscosity.
Intestinal permeability is measured by the sugar absorption test [26]. After instillation of the test solution (2 mL/kg by nasogastric tube), urine is collected for 6 hours. After collection, 0.1 mL chlorhexidine digluconate 20% (preservative) is added to the urine and samples are stored at -20°C until analysis. Lactulose and mannitol concentrations (mmol/mol creatinine) are measured by gas chromatography as previously described [27]. The lactulose/mannitol ratio is calculated and used as a measure of intestinal permeability.
Long-term outcome
To determine the incidence of allergic and infectious disease in the first year of life, standardized questionnaires will be sent to the parents prior to the follow-up visit at the corrected age of 1 year [29]. Faecal samples (FISH, Calprotectin and IL-8) and IgE/IgG4 levels in blood will be measured at the age of 5 and 12 months.
To investigate neurodevelopmental outcome, neurological status, vision, hearing and Mental Development Index (MDI) and Psychomotor Development Index (PDI) of the Bayley Scales of Infant Development II (BSID-II) at the corrected age of 1 and 2 years (as part of the regular follow-up of NICU infants) are assessed [30,31].
To determine the frequency of side-effects after the first 4 immunizations, standardized questionnaires will be given to the parents at the time of immunizations. (Table 2)
Statistical analysis
To determine whether randomisation is successful, prognostic similarity (perinatal and nutritional characteristics) between treatment groups is assessed. Generalised estimating equations [32] are used to analyse differences and changes over time in plasma cytokine concentrations, faecal Calprotectin and IL-8, intestinal permeability, intestinal microflora and intestinal viscosity. Differences in optimal versus non-optimal neuromotor development and in normal versus abnormal mental/motor development between the oligosaccharides and control groups are examined by logistic regression, with adjustment for possible confounding factors such as gestational age and birth weight.
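For illustration, the planned longitudinal analysis could look like the minimal sketch below. The protocol specifies SPSS 15.0, so this Python/statsmodels version is only an illustrative equivalent; the column names (calprotectin, week, group, infant_id) and the exchangeable working correlation are assumptions, not part of the protocol.

```python
# Illustrative only: mirrors the planned GEE analysis in statsmodels.
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df: hypothetical long-format data, one row per infant per measurement occasion
gee = smf.gee(
    "calprotectin ~ week * group",            # change over time by treatment group
    groups="infant_id",                        # repeated measures within infant
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),   # assumed working correlation
)
print(gee.fit().summary())
```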
All statistical analyses are performed on an intention-to-treat basis. In addition, alternative per-protocol analyses are performed, excluding all patients who are not treated according to protocol, defined as more than 3 consecutive days or a total of 5 days on minimal enteral feeding or without supplementation. For all statistical analyses, a p value <0.05 is considered significant (two-tailed). SPSS 15.0 (SPSS Inc., Chicago, IL, USA) is used for data analysis.
Discussion
There is increasing evidence that prebiotics play an important role in the development of the intestinal microflora and the immune system, and may help to decrease the risk of infectious diseases. Preterm infants in particular, who are at increased risk for serious infections, may benefit from supplementation of prebiotics. Most studies with prebiotics focus only on the colonisation of the intestinal microflora; the influence on the immune system is not yet fully understood [33]. Studying the immune modulatory effects is complex because of the multicausal risk of infections in preterm infants [34]. The combination of neutral oligosaccharides with acidic oligosaccharides may have an increased beneficial effect on the immune system of preterm infants due to the specific conditions in the luminal part of the developing gut wall. Not only the immune effects, such as morbidity due to infections and response to immunizations, will be investigated, but also feeding tolerance, short-term and long-term outcome, and postnatal adaptation of the gut (intestinal microflora, intestinal permeability, intestinal viscosity). Increased insight into the effects of prebiotics on the developing immune system may help to find ways to decrease the (infectious) morbidity and mortality in preterm infants.
Toward a Fully-Observable Markov Decision Process With Generative Models for Integrated 6G-Non-Terrestrial Networks
The upcoming sixth generation (6G) mobile networks require integration between terrestrial mobile networks and non-terrestrial networks (NTN) such as satellites and high altitude platforms (HAPs) to ensure wide and ubiquitous coverage, high connection density, reliable communications and high data rates. The main challenge in this integration is the requirement for line-of-sight (LOS) communication between the user equipment (UE) and the satellite. In this paper, we propose a framework based on actor-critic reinforcement learning and generative models for LOS estimation and traffic scheduling on multiple links connecting a user equipment to multiple satellites in 6G-NTN integrated networks. The agent learns to estimate the LOS probabilities of the available channels and schedules traffic on appropriate links to minimise end-to-end losses with minimal bandwidth. The learning process is modelled as a partially observable Markov decision process (POMDP), since the agent can only observe the state of the channels it has just accessed. As a result, the learning agent requires a longer convergence time compared to the satellite visibility period at a given satellite elevation angle. To counteract this slow convergence, we use generative models to transform a POMDP into a fully observable Markov decision process (FOMDP). We use generative adversarial networks (GANs) and variational autoencoders (VAEs) to generate synthetic channel states of the channels that are not selected by the agent during the learning process, allowing the agent to have complete knowledge of all channels, including those that are not accessed, thus speeding up the learning process. The simulation results show that our framework enables the agent to converge in a short time and transmit with an optimal policy for most of the satellite visibility period, which significantly reduces end-to-end losses and saves bandwidth. We also show that it is possible to train generative models in real time without requiring prior knowledge of the channel models and without slowing down the learning process or affecting the accuracy of the models.
expected to provide seamless connectivity not only to users but also to massive machine-type devices. Three main scenarios for 6G have been identified. The first scenario is Immersive Communication, an evolution of 5G enhanced Mobile BroadBand (eMBB) but with new use cases such as extended reality (XR) and holographic communication which require more bandwidth than 5G eMBB. The second scenario is Massive Communication, which assumes 5G Massive Machine Type Communication (mMTC) but aims to increase connection density, i.e., connecting many devices in a small area, using technologies such as Internet of Things (IoT), Internet of Everything (IoE) and Industrial IoT (IIoT). The third scenario is hyper-reliable and low-latency communications, which will evolve 5G Ultra-Reliable and Low Latency Communications (URLLC) to support use cases such as remote telesurgery, fully autonomous driving, industrial control and operations. In general, 6G is expected to address the shortcomings of current mobile networks and respond to growing communications needs by offering ultra-high peak data rates of around 200 Gbit/s compared to 20 Gbit/s in 5G, ultra-low latency, wide coverage and high connection density, Quality of Service (QoS) and energy efficiency, high sensing resolution and accuracy, and high security and privacy [2]. Two other important advances in 6G are the incorporation of ubiquitous and distributed Artificial Intelligence (AI) at all levels of communication [3] and the paradigm shift from network-centric to user-centric communication, where users can collaborate with the network to decide on the service they expect from the network and the allocation of channel resources.
Despite the rapid evolution of terrestrial mobile networks, supporting the 6G communications requirements described above requires new and advanced communications technologies, infrastructures, and standards. The WP5D has called for urgent research and innovation in the design of future network infrastructures and the development of various enabling technologies to support new 6G scenarios and use cases. Several enabling technologies for 6G have been identified, including the application of data and AI in distributed and collaborative ways, Integrated Sensing and Communications (ISAC), Reconfigurable Intelligent Surface (RIS), Full Duplex Operation, Radio Access Network (RAN) Slicing and Infrastructure Sharing, among others [1]. In addressing the 6G requirement for wide coverage and full connectivity, the ITU report on Future Technology Trends for Terrestrial International Mobile Telecommunications systems towards 2030 and Beyond [1] applied to 6G what the Third Generation Paternship Project (3GPP) proposed for 5G [4] and recommends integrating 6G mobile networks with Non-Terrestrial Network (NTN) technologies. NTN platforms are network segments that use transmission equipment or base stations mounted on an airborne or spaceborne vehicle. NTN platforms include satellites such as geosynchronous (GEO), Medium Earth Orbit (MEO) and Low Earth Orbit (LEO), High Altitude Platforms (HAPs), and Unmanned Aerial Systems (UASs). The white paper on 6G wireless networks [5] also recommends that future wireless networks must be able to connect seamlessly with terrestrial and satellite networks. Since satellites have wide coverage, they can complement terrestrial mobile networks in partially connected and unconnected areas such as maritime areas, mountainous regions, and deserts. Although satellites have not been widely used in the past due to high construction costs, as technology advances and communication requirements increase, various satellite constellations such as Starlink, OneWeb, and Telesat [6] have been launched. High Altitude Platform (HAP) systems include airborne base stations deployed above 20 km and below 50 km to provide wireless access to devices in large areas. HAP systems can be used as HAP Stations (HAPS) to offer Internet access between fixed points in suburban and rural areas and in emergency situations [7]. HAPS offer wide coverage, flexible deployment, and low construction costs. They also have low latency due to their relatively lower altitude compared to satellites. Another application of HAP systems is to use HAPS as International Mobile Telecommunication (IMT) Base Station (HIBS) to complement IMT requirements for mobile phones or other terminals in areas not covered by HAPS. So, with HIBS, some of the access functionalities in the terrestrial networks can be moved to the non-terrestrial infrastructure. UASs, commonly known as Unmanned Aerial Vehicles (UAVs) or drones, can also be used as IMT base stations. UAVs have attracted a lot of attention because they are lightweight, easy to deploy, and offer flexible services. Exploiting the advantages of terrestrial networks and nonterrestrial platforms will support a range of new applications and use cases such as remote monitoring, rescue operations, reconnaissance, goods delivery, connected autonomous vehicles (CAVs), and high-speed transportation (e.g., trains or aircraft). In this paper, we focus on the integration between LEO satellites and the upcoming 6G mobile networks.
The main challenge in integrating terrestrial IMT and NTN is the channel modeling of the service link, i.e., the link between the NTN terminal or User Equipment (UE) and the satellite or an NTN platform, as this link requires Line-of-Sight (LOS), which is impaired when both the satellite and the UE are in relative motion. In dense urban scenarios, tall buildings, and other tall infrastructure can severely degrade LOS communications as signals are blocked or reflected. In addition, the LOS probability varies with the elevation angle of the satellite, with low elevation angles having a low LOS probability due to blocking. The LOS variations can lead to unreliable communication due to poor connectivity, network unavailability, or service interruption, making it difficult to meet 6G communication requirements. Existing ITU service link models take into account the elevation angle, frequency, and propagation environment (e.g., urban or rural), [8] but not the relative movement of the UE and satellite, which can make the propagation environment non-stationary because the LOS probability may vary with time.
II. REFERENCE SCENARIO AND MOTIVATIONS
In this paper, we propose an AI-based intelligent system for LOS estimation and traffic scheduling on the access link of 6G-NTN integrated networks. We use the Actor-Critic (AC)-Reinforcement Learning (RL) framework, in which an RL agent continuously monitors and learns the LOS probability of multiple links and selects an appropriate subset of the available links on which to schedule traffic to increase link availability and reliability by increasing the probability of good traffic reception. Since our proposed framework is not deterministic but learning-based, it can track the dynamic variations of LOS due to terrain and mobility. As shown in Figure 1, our reference scenario, the UE with multiple interfaces can connect to two satellites in multi-connectivity mode. The two satellites are equipped with BS through which the UE connects to the terrestrial IMT Core Network (CN) and theData Network (DN). The UE can be any user terminal, a UAV, or an IoT device. Our RL agent learns the channel characteristics of each access link and schedules traffic according to link characteristics such as LOS and Packet Loss Rate (PLR) to increase link availability, reliability and throughput. Given the limited computational resources of the UE, the RL agent can be deployed on the edge device with high computational resources or anywhere in the network and offered as AI-as-a-Service (AIaaS) as envisaged in 6G [1]. To further improve link reliability and throughput, we use a multipath transmission technique that splits a single traffic flow into sub-flows and transmits each subflow over a separate path, achieved by one or more communication channels, to increase the probability of good reception by leveraging the different link characteristics. We then couple multipath with traffic duplication, which adds redundancy to further increase the probability of good reception because the redundancy traffic is transmitted on different links than the information traffic, so that traffic lost on one link can be recovered on other links. We perform redundancy optimisation to avoid excessive bandwidth consumption.
To support duo-connectivity and multipath transmission, we use the standard mechanism known as Access Traffic Steering, Switching, and Splitting (ATSSS), originally introduced by 3GPP for IMT-2020 [9], but needs to be further developed and improved for IMT-2030 to support Multi-Access Packet Data Unit (MA-PDU) session services through self-learning decision policies supported by AI. Access traffic steering means the selection of an access network over which a particular new data flow is to be transmitted. On the other hand, traffic switching refers to the process of moving all the traffic of an ongoing flow from one access network to another while maintaining the continuity of the flow. On the other hand, traffic splitting refers to the process of dividing a data flow into parts that are transmitted over different access networks. 3GPP standard defines two ATSSS functionalities: ATSSS high-layer functionality and ATSSS low-layer functionality (ATSSS-LL). In the former, traffic steering is performed above the Internet Protocol (IP) layer, where each substream is identified with a unique IP address, as shown in Figure 2. Link monitoring and performance measurements such as PLR or Round-Trip Time (RTT) are performed End-to-End (E2E) between the UE and the DN through a multipath server proxy in the core and can be used as criteria for traffic steering decisions. The standard identifies two protocols for ATSSS higher-layer functionality: Multi-Path-TCP (MPTCP) for multipath Transmission Control Protocol (TCP) traffic and Multi-Path-QUIC (MPQUIC) for Quick UDP Internet Connections (QUIC) User Datagram Protocol (UDP) traffic. ATSSS-LL, on the other hand, is implemented at the link layer, where Media Access Control (MAC) addresses identify sub-flows and can handle any traffic, including TCP, UDP and Ethernet traffic. ATSSS is a very important feature for the 6G paradigm shift from a networkcentric to a user-centric approach, as it supports collaborative network performance measurements between the network and the user. The user can measure access link performance in terms of LOS, delay, PLR, bandwidth, link availability, or unavailability and either share the measurements with the core network or use the measurements autonomously for uplink (UL) traffic steering over the access networks. In the future, this feature can be used to support UE decisions on channel resource allocation, which is one of the provisions in the user-centric 6G networks.
The framework proposed in this paper performs traffic splitting and steering at the link layer in accordance with the ATSSS-LL functionality. The ATSSS standard has introduced a function called Performance Measurement Function (PMF) that enables the exchange of messages between the UE and the core for performance measurements. We have developed a stub that provides our learning agent with link performance measurements such as LOS, link-PLR, and E2E-PLR for traffic steering over the two satellite networks. LOS and link-PLR are used to decide which link to steer the traffic to, while E2E loss is used to decide whether to use a single transmission or multiple transmissions with traffic duplication to compensate for E2E losses. In this case, the E2E loss occurs when no traffic is received on either link. Since the LOS changes with the elevation angle of the satellite, the RL agent constantly retrains to track the ever-changing LOS of multiple moving satellites and allows the UE to distribute traffic to the appropriate link(s). However, we found that each time the elevation angle changes, the agent takes a long time to re-train and converge compared to the duration of the satellite visibility [10]. Normally, a moving satellite is visible from a UE near Earth or on Earth for a certain period of time called the satellite visibility period, which can be very short for large constellations. For example, the satellite visibility period in Paris, France, was found to be 3.5 minutes for Starlink constellations. During this visibility period, the satellite changes its elevation angle and consequently, the LOS probability also changes. If the learning agent converges slowly, it cannot make the best use of the satellite visibility period because the elevation angle and LOS probability change before it converges. As a result, the agent transmits with non-optimal policies.
In this paper, we use generative models to solve the problem of slow convergence of the learning agent. Although there can be several reasons for slow convergence, we focus our investigation on learning-based LOS estimation, which in such scenarios is modeled as Partially Observable Markov Decision Process (POMDP) [11], since the learning agent can only observe the states of the links it selects for transmission at a given time, for scalability reasons. 1 With multiple channels, the agent needs a lot of time to fully know the states of all available channels and to select the appropriate channels. The obvious and simple solution would be to duplicate the traffic and transmit it over all available links to quickly learn the LOS probability of each link. Although this seems to be a simple solution, it is inefficient as it wastes bandwidth. In this work, we provide a more efficient and intelligent solution that transforms the POMDP into a Fully Observable Markov Decision Process (FOMDP) so that the agent can have the CSI of all available channels, including the channels it does not select on each transmission event, without having to transmit on all links. To this end, we use deep generative models (Generative Models (GMs)) which we train to generate synthetic channel states that closely 1. Channel State Information (CSI) analytics have computational and storage costs, and if multiple interfaces can be used, a policy to limit data collection must be considered. Therefore, limiting the analysis to only one interface being used at any given time can be a reasonable choice. resemble the real channel state of the links not selected by the agent. Specifically, we use two deep GMs [12]: Conditional Tabular Generative Adversarial Networks (CTGANs), a version of the most popular and powerful deep generative model called Generative Adversarial Network (GAN), and Tabular Variational Autoencoders (TVAEs), a variant of Variational Autoencoder (VAE), another powerful and commonly used deep generative model. When the agent selects a subset of the available channels at each transmission event and learns their LOS probability, the trained GMs generates synthetic LOS estimates for the remaining subset. In this way, the agent has a complete view of the channel states for each transmission event. As a result, the agent learns quickly, converges faster, and transmits with an optimal policy for most of the satellite visibility period. As explained in Section VI, the GMs can be trained offline or during deployment.
Our main contributions can be summarized as follows: 1) We propose the use of reinforcement learning (RL) and generative models (GMs), to provide intelligence into integrating terrestrial and non-terrestrial networks for supporting 6G communication requirements such as improved network accessibility and connectivity, link availability and reliability, and high data rates. 2) We use generative models, specifically GANs and VAEs, to transform a POMDP into a FOMDP. The GMs generate synthetic states of a partially observable Markov process that are not visited by the agent during the learning process and, thus, transform a partially observable process into a fully observable Markov decision process by providing the agent with a complete view of all states. This method can be applied not only to LOS estimation, as in this work, but also to any partially observable Markov decision process.
To the best of our knowledge, this is the first work that uses generative models to transform a POMDP into a FOMDP. 3) We develop an actor-critic-RL framework to estimate the LOS probability of multiple service links between UE and LEO satellites in IMT-NTN integrated networks with heterogeneous characteristics. The RL agent learns to determine the LOS probability of each link and select an appropriate subset of the available links for transmission, i.e., the link(s) with a relatively higher LOS probability, to increase the probability of good traffic reception, improve link availability and reliability, and increase data rates. 4) We couple multipath with traffic duplication to proactively compensate for E2E losses and consequently increase throughput. Since traffic duplication can increase bandwidth consumption, we optimize the use of redundancy to avoid excessive bandwidth consumption. We show through intensive simulations that our RL agent can track low E2E losses when deployed in different propagation environments with different E2E loss thresholds according to the end-user QoS agreement.
5)
Since the satellite visibility period is shorter than the convergence time of the RL agent, we use our proposed model for transforming a POMDP into a FOMDP, to convert a learning-based LOS estimation which is a POMDP, into a FOMDP to accelerate the convergence of the RL agent within the satellite visibility period. We use GANs and VAEs to generate synthetic LOS link states of the links not visited by the agent and thus, convert a POMDP into a FOMDP since the RL agent now has complete knowledge of the LOS state of all links. This allows the agent to learn and converge within a short time, and transmit with an optimal policy for most of the satellite visibility period. 6) Finally, we show through simulations that GMs training can be performed in real-time without slowing down the RL agent learning process or affecting GMs accuracy. The rest of the paper is organized as follows: In Section III, we review the state-of-the-art techniques with respect to our work. We present our system model in Section IV and describe the training and evaluation of the GMs in Section V. Section VI presents the architecture and training of the Actor-Critic Reinforcement Learning Agent while its performance evaluation is presented in Section VII. Section VIII concludes the paper and identifies future research directions.
A. LOS ESTIMATION AND TRAFFIC SCHEDULING
Several methods for estimating LOS and scheduling traffic through multiple channels have been suggested. In [13], a theoretical model for LOS prediction in cloud-free sky is proposed which takes into account the angle between the satellite and the ground station. In [14], a maximum likelihood-based method for detecting the presence of Non-Line-of-Sight (NLOS) is proposed. In [15], the authors propose an empirical model for probability estimation of LOS for satellite and HAPs communications. All of these approaches are empirical and deterministic and therefore not suitable for dynamic and nonstationary NTN propagation environments. Traditional and static traffic scheduling techniques such as Round-Robin (RR), Weighted Round Robin (WRR) have been shown to be inefficient in heterogeneous and time-varying wireless channels [16]. With the pursuit of self-reconfigurable networks, the improved schedulers such as deficit round robin (DRR) and weighted fair queuing (WRQ) schedulers [16], RTT, PLR, [17], the lowest-RTTfirst schedulers [16], [18], [19] are becoming increasingly unpopular and research is leaning towards learning-based schedulers. For example, in [20] a Deep-Q (DQ) RL-based scheduler is presented for dynamically allocating bandwidth to different WiFi applications. Wu et al. [21] have proposed a RL-based multipath scheduler for multipath QUIC on WiFi and cellular applications. In [22], a AC agent is used for multi-channel access in wireless networks to avoid collisions. Yang and Xie [23] propose an AC-based scheduler for cognitive Internet-of-Things (CIoT) systems. Another ACbased scheduler is proposed in [24] to address end-to-end delay in Fog-based IoT systems. However, all these works are partially observable processes that may suffer from the slow convergence of the learning agent. We aim to address this problem in this work by using GMs to transform a POMDP into a FOMDP. Since our proposed framework is designed for multipath systems, it provides not only a scheduling mechanism but also traffic protection. We schedule traffic by steering and splitting it over multiple paths to increase the probability of good reception leveraging the different path properties as in [25]. Our framework also avoids delays caused by traffic protection systems such as Automatic Repeat reQuest (ARQ) that uses retransmissions to compensate for the loss, which may be unsuitable for satellite communications with large propagation delays. In addition, our system limits the waste of bandwidth like some layered Forward Error Correction (FEC)-based systems do [26], which are difficult to use with fixed coding rates in dynamic contexts, and avoids introducing delays due to the encoding-decoding chain [27] as well as further complexity.
B. DEEP GENERATIVE MODELS
Deep generative models have attracted much attention and found several applications, especially in computer vision, including the generation of realistic images, videos, music relics, texts, and language processing. In [28], [29], and [30], GANs are used for image generation, while the authors in [31] use VAE and GANs to generate videos from texts. GMs are used in [32] to improve the quality of the training dataset for Electrocardiogram (ECG) signal classification. Although the application of GMs for communication is still being explored, some work has already been proposed. For example, in [33], the authors use VAE to generate channel parameters such as path loss, delay, and arrival and departure angles. They first estimate the LOS and NLOS state of a link using a ray tracer and use these estimates to train VAE and generate other channel parameters. The use of VAEs and GANs to improve the LOS estimation was also discussed and compared in [34], with a similar scenario, while the use of a federated approach with VAEs was introduced in [35], and investigated for the first time. The Conditional GAN (cGAN) is used in [36] to model channel effects in an E2E wireless network and optimize receiver gain and decoding. In particular, the cGAN is used to support the learning of the Deep Neural Networks (DNNs)-based communication system when the CSI is unknown. This work is similar to our study in which we use the CTGANs and TVAEs to generate missing LOS estimates for the AC-based transmission system to improve the QoS by reducing E2E losses.
IV. SYSTEM MODEL
The WP5D group has recommended that the existing 3GPP architecture for integrating terrestrial IMT and NTN also be used for the integration of 6G mobile networks with NTN, where the Base Station (BS) is split into Distributed Unit (DU) and Centralized Unit (CU) [1]. Although the WP5D group has not specified the placement of the DU and CU, the existing 3GPP [4] provides that the DU can be mounted on the satellite, while the CU forms part of the terrestrial infrastructure. As shown in Figure 1, the two satellites have a DU on board to provide BS functionalities. The UE accesses the network via these satellites in a multiconnectivity mode and connects to the CN and the DN via a common CU on the ground. We use the StarLink LEO satellite constellations [37].
A. CHANNEL MODEL
In this work, we adopt the channel model provided by ITU [38] for designing Earth-space communication systems. We simplify this model using the Lutz approach [39], [40] and assume two channel states: the good state (G) and the bad state (B). The good state is characterized by the presence of the LOS, and good traffic reception and is modeled by a Rician fading model for unshadowed areas. The bad state, on the other hand, is marked by NLOS, losses or bad reception and is modeled using the Rayleigh fading model. We adopt these models to compute the channel state transition probabilities which we use to create the dataset to train our learning agent and the generative models. For the sake of simplicity, in this work, we did not consider interference.
Computation of the Link State Transition Probabilities: We define the transition matrix as follows [27]:

$$P = \begin{pmatrix} 1 - P_b & P_b \\ P_g & 1 - P_g \end{pmatrix} \tag{1}$$

where $P_b$ is the probability of transitioning from the good state to the bad state and $P_g$ from the bad state to the good state. It follows that

$$P_b = \frac{1}{T_g}, \qquad P_g = \frac{1}{T_b} \tag{2}$$

where $T_g$ and $T_b$ indicate the time duration (in packet transmissions) of the good and bad states respectively and are given as follows:

$$T_g = \frac{d_g\, r}{v\, k}, \qquad T_b = \frac{d_b\, r}{v\, k} \tag{3}$$

where $d_g$ and $d_b$ are the mean durations of the good and bad states [39] and $v$ is the speed (in m/s) of the UE, which transmits packets of size $k$ bits at a rate $r$.
Since the LOS probability depends on the elevation angle of the satellite, the ITU recommendation [38] provides statistical parameters to determine the mean duration d g and d b of the good and bad states respectively at different elevation angles, frequency, and different propagation environments, such as urban and rural. In this work, we use the parameters for the urban environment at 2.2GHz as reported in Table 1. These parameters are the statistics of the duration of the good and bad states which include the mean μ G,B , the standard deviation σ G,B , and the minimum duration d min of each state. Substituting these parameters in equation (3), we calculate the mean duration d g and d b .
Finally, we combine equations (1) and (2) to obtain the transition probabilities P b and P g as follows.
We report the computed transition probabilities in Table 2 and use these probabilities to create a Markov-state dataset with LOS/NLOS traces to train our models.
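The Markov-state traces can then be generated from $P_b$ and $P_g$; the sketch below uses placeholder probabilities, since the Table 2 values are not reproduced here.

```python
import numpy as np

def markov_trace(p_b, p_g, n, seed=0):
    """Generate a LOS(+1)/NLOS(-1) trace of length n from the two-state chain."""
    rng = np.random.default_rng(seed)
    state, trace = 1, np.empty(n, dtype=int)
    for i in range(n):
        trace[i] = state
        if state == 1:                                  # good state (LOS)
            state = -1 if rng.random() < p_b else 1
        else:                                           # bad state (NLOS)
            state = 1 if rng.random() < p_g else -1
    return trace

# One column per elevation angle, as in the training table
# (placeholder probabilities, not the Table 2 values)
traces = {"70deg": markov_trace(0.01, 0.06, 50_000),
          "60deg": markov_trace(0.02, 0.05, 50_000),
          "45deg": markov_trace(0.04, 0.03, 50_000)}
```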
B. DEEP GENERATIVE NEURAL NETWORKS
Deep generative AI refers to unsupervised and semi-supervised Machine Learning (ML) algorithms that use Neural Networks (NNs) to learn and model the distribution of the true data and generate new synthetic data with a distribution similar to that of the true data. GMs are used to produce high-quality images, videos, sounds, and text that closely resemble the original data. They are also used to augment data and to generate large amounts of data for training other ML algorithms using only a small amount of real data. There are many types of deep GMs, but two are most commonly used, the GAN and the VAE, each with many variants. In this work, we use the CTGAN, a variant of the GAN, and the TVAE, a variant of the VAE. They are built on the TensorFlow library and belong to the Synthetic Data Vault (SDV) package. The choice of the CTGAN and TVAE was motivated by the fact that these two models can handle tabular data and therefore allow us to train only one model that can generate synthetic data for any number of available service links, since they can learn the data distribution in each column of the training dataset. For training the GMs, we considered three elevation angles, 70°, 60°, and 45°, and organized the training dataset into a table of three columns, each column containing the LOS/NLOS traces of one of the elevation angles or channels. Thus, knowing the data distribution in each column, a single CTGAN or TVAE model can generate synthetic data for all the columns at once, which would otherwise require training one model per channel. Below is a brief description of the structures and functionalities of the CTGAN and TVAE.
1) CTGAN-CONDITIONAL TABULAR GAN
The Generative Adversarial Network (GAN) [28] is a type of generative neural network that has become popular due to its ability to produce high-quality synthetic data. The basic architecture of the GAN consists of two neural networks, the generator and the discriminator. The generator generates synthetic data that resembles real data, while the discriminator is a classifier that attempts to distinguish fake data from real data. The generator and discriminator are trained in an adversarial way, based on a two-player game that aims to find a Nash equilibrium [41], in which the generator tries to fool the discriminator by generating data that looks like real data, while the discriminator tries to catch the generator by distinguishing real data from fake data. After training, the generator is able to generate data that is too realistic for the discriminator to distinguish from real data. The discriminator is trained to maximize

$$\log D(x) + \log(1 - D(G(z))) \tag{5}$$

while the generator minimizes

$$\log(1 - D(G(z))) \tag{6}$$

where $D$ and $G$ are the functions computed by the discriminator and generator networks, respectively, and $x$ and $z$ are real data samples and noise, respectively. The Conditional Tabular GAN (CTGAN) is a type of GAN developed by [12] for dealing with tabular data; the original GANs were developed primarily for images and could not handle tabular data. The CTGAN is conditional in that, unlike general GANs, it can produce data with a particular property or distribution. For example, a basic or vanilla GAN trained to generate human faces can only generate random faces as found in the training data; it cannot generate a specific face. To condition the model to generate data with specific features, patterns, or distributions, the generator and discriminator are given additional information about the data as input, such as labels of the training data or a particular distribution. This allows the generator to produce data with a desired distribution or property.
2) TVAE -TABULAR VARIATIONAL AUTO-ENCODERS
Variational autoencoders are among the widely used unsupervised deep GMs. Like autoencoders, VAEs have a two-network structure, the encoder and the decoder. However, unlike autoencoders, VAEs are used to generate new data. The encoder maps the input real data into a compressed latent vector and the decoder generates new data from the latent vector. VAEs differ from autoencoders in that the latent vector is regularized for generating new data: instead of encoding an input into a single point, it is encoded as a distribution, which is then regularized by parameterization using a normal distribution such as a Gaussian, so that the decoder can use any sample from it to generate new data. Equation (7) gives the loss function used to train the VAE [42]:

$$\mathcal{L}(\theta_e, \theta_d) = -\,\mathbb{E}_{q_{\theta_e}(z|x_i)}\big[\log p_{\theta_d}(x_i|z)\big] + D_{KL}\big(q_{\theta_e}(z|x_i)\,\|\,p(z)\big) \tag{7}$$

The VAE is trained to minimize the reconstruction error (the first term of the expression) between the input data and the generated data and to fit the parameters of the Gaussian distribution (the second term of the expression) that defines the latent space. The second term acts as a regularizer that measures the loss when $q_{\theta_e}(z|x_i)$ is used to represent the distribution $p(z)$ of the latent space $z$. $q_{\theta_e}(z|x)$ is the distribution of the input variables $x$ and $p_{\theta_d}(x|z)$ represents the distribution of the decoded variables, while $\theta_e$ and $\theta_d$ are the parameters of the encoder and decoder, respectively. This paper adopts the TVAE, a version of the VAE available in the same package as the CTGAN, for handling tabular data as described above.
V. TRAINING THE CTGAN AND TVAE MODELS
The generative models were trained in two ways. We first trained the models offline using training data generated according to the transition probabilities in Table 2. Then, we simulated real-time training, i.e., training the GMs while the RL agent is in operation; in this case, the training data are acquired by the RL agent as it learns the channel states. In the following, we describe the two training methods in detail and evaluate the accuracy of the GMs in each case. The training parameters are shown in Table 3. Two metrics were used to evaluate the performance of the trained GMs: the Kolmogorov-Smirnov test (KS-test) and the Kullback-Leibler divergence (KL-divergence). The KS-test measures the distance between two empirical Cumulative Distribution Functions (CDFs) and is usually presented as a complementary measure, i.e., 1 minus the KS statistic; thus, the higher the KS-test value, the more similar the two CDFs. In our case, we compare the CDFs of the real and synthetic data. The KL-divergence, on the other hand, measures the difference between two probability distributions; the lower the KL-divergence, the greater the similarity between the two distributions.
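For reference, the two metrics can be computed as in the following sketch; this is our own illustration, as the exact implementation used for the evaluation is not specified.

```python
import numpy as np
from scipy.stats import entropy, ks_2samp

def ks_complement(real, synth):
    """1 - KS statistic: values near 1 mean near-identical empirical CDFs."""
    return 1.0 - ks_2samp(real, synth).statistic

def kl_divergence(real, synth):
    """KL divergence between the LOS/NLOS state distributions."""
    bins = np.array([-1.5, 0.0, 1.5])            # two bins: NLOS (-1), LOS (+1)
    p, _ = np.histogram(real, bins=bins, density=True)
    q, _ = np.histogram(synth, bins=bins, density=True)
    eps = 1e-12                                  # guard against log(0)
    return entropy(p + eps, q + eps)
```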
A. TRAINING DATASET
The datasets to train the GMs and the AC-RL agent were created as follows: we used the transition probabilities computed in Section IV and reported in Table 2 to create the Markov states for LOS and NLOS at different elevation angles. LOS was coded as 1 and NLOS as −1. Thus, the dataset consisted of a set of traces [−1, 1, . . .] for each elevation angle, generated according to the state transition probabilities. The datasets created in this way were used to train the AC agent in a partially observable Markov process, and the generative models in offline mode, while the dataset for real-time training of the GMs consisted of the channel states collected during the learning process of the agent. The dataset to train the AC agent in a FOMDP is a combination of the traces obtained using the state transition probabilities (for the channels selected by the agent) and the synthetic states generated by the trained GMs (for the channels not selected by the learning agent; see Algorithms 1 and 2). The training datasets have different sizes depending on the model to be trained, as described in the appropriate sections below.
B. OFFLINE TRAINING OF GENERATIVE MODELS
Offline training was performed in two ways: with a separate dataset and with a combined dataset. For the separate dataset, we used the transition probabilities given in Table 2 to create LOS/NLOS traces for each of the three channels or elevation angles (70°, 60°, 45°). The traces were organized in tabular form in three columns, one column per channel. The CTGAN and TVAE models were trained to generate new traces for each column, i.e., for each channel. For the combined dataset, the traces for the three elevation angles were combined into a one-column dataset and reshuffled to balance the data.
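A sketch of this offline training with the SDV package is given below. It assumes the SDV 1.x API (earlier releases exposed sdv.tabular.CTGAN and sdv.tabular.TVAE instead), and the epoch count is a placeholder rather than the Table 3 value.

```python
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import CTGANSynthesizer, TVAESynthesizer

df = pd.DataFrame(traces)            # one LOS/NLOS column per elevation angle

metadata = SingleTableMetadata()
metadata.detect_from_dataframe(df)

ctgan = CTGANSynthesizer(metadata, epochs=300)   # epoch count: placeholder
ctgan.fit(df)
synthetic = ctgan.sample(num_rows=10_000)        # new traces for all columns

tvae = TVAESynthesizer(metadata, epochs=300)
tvae.fit(df)
```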
1) GENERATIVE MODELS PERFORMANCE EVALUATION (OFFLINE TRAINING)
The accuracy of the models trained on the separate dataset was evaluated by comparing the generated traces for each channel or column with the real traces of the corresponding channel. In the case of the combined dataset, the comparison was made between the combined generated traces and the real traces of each channel or column. The two training modes were then compared in terms of model accuracy and training time. The aim is to find out which training mode achieves high accuracy in a short time and which model, CTGAN or TVAE, performs better in each training mode. Table 4 and Table 5 show the accuracy and training time for the two models trained with the separate and combined datasets respectively. Accuracy is measured by the distance between the real and generated data. Figure 3 shows the comparison between the distributions (PDFs) of the real and generated data for the two models trained with the separate and the combined dataset for the three channels. The results show that our models achieved very high accuracy in all scenarios, with a KS-test up to 98% and a KL-divergence down to 0.0006. Both models show similar performance, with minor differences in all scenarios. However, the models perform better when trained on the separate dataset than on the combined dataset. This may be due to the fact that the three channels are not correlated, so combining the channels does not give good results. This suggests that training with a separate dataset is suitable for uncorrelated channels and training with a combined dataset for correlated channels. In terms of training time, the results show that both models train faster with the separate dataset than with the combined dataset, with the TVAE training relatively faster than the CTGAN in both cases. Based on these results, the models trained with the separate dataset were used for the remainder of this work to generate data traces for training our RL agent.
C. REAL-TIME TRAINING OF GENERATIVE MODELS
Real-time training refers to the scenario where the GMs are trained when the RL agent is already deployed for transmission. This is a more realistic scenario that occurs when the channel model is not known in advance, which is usually the case, or when there are no LOS datasets for training the GMs. In this case, the RL agent must transmit for a certain time on all the available channels to acquire the CSI of all the channels. The acquired traces are then used to train the GMs. Finally, the trained models are used to generate synthetic states of the channels that the agent does not select for transmission at each transmission event, so that the RL agent can have a complete observation of the states of all channels. This is a very challenging scenario due to time constraints. First, the time to acquire CSI should be very short to avoid wasting bandwidth, since the agent has to transmit by duplicating traffic over all available channels. Second, the training time of the GMs should be very short because of the limited satellite visibility period. To simulate this scenario and overcome these challenges, we first created training datasets of different sizes, 2k, 5k, 10k, 20k, 30k, 40k, and 50k samples, to train the GMs. The goal is to determine the minimum size of the dataset that trains the models in the shortest possible time while achieving the highest possible model accuracy. In this way, we can evaluate whether our proposed approach is feasible for online training. We trained both the CTGAN and TVAE models with only a single epoch and recorded the training time for each training dataset. Table 6 shows the accuracy and the training time of the CTGAN and TVAE models in terms of the KL-divergence and the KS-test between the real data and the synthetic data generated by the two models. The models were trained with datasets of different sizes containing the states of the satellite links at different elevation angles. The aim was to determine the minimum size of the dataset that can be used to train the models while achieving good accuracy. From these results, it can be seen that training with a 10k dataset is the best compromise, since with this dataset size the models train within a short time, 3.89 seconds for the CTGAN and 2.39 seconds for the TVAE, while achieving relatively good accuracy at the three elevation angles (70°, 60°, and 45°). Figure 4 is the graphical representation of the variation of the KS-test between the real data and the synthetic data generated by the CTGAN and TVAE models trained with datasets of different sizes. It can also be seen there that the 10k dataset achieves good accuracy for both models. The results also show that increasing the size of the dataset does not have much effect on the accuracy of the TVAE model; the TVAE can thus be trained with a very small dataset and still achieve good accuracy. Figure 5 shows the variation of the training time for the CTGAN and TVAE models with different training dataset sizes. These results show that the CTGAN requires a longer training time than the TVAE for all the dataset sizes considered. In the rest of this work, we used the models trained with the 10k dataset to generate synthetic datasets to evaluate the performance of the RL agent with real-time trained GMs.
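Continuing the previous sketch, the single-epoch timing experiment can be reproduced roughly as follows (dataset sizes taken from the text; df, metadata, and TVAESynthesizer are reused from the sketch above).

```python
import time

for n in (2_000, 5_000, 10_000, 20_000, 30_000, 40_000, 50_000):
    model = TVAESynthesizer(metadata, epochs=1)  # single epoch, as in the text
    t0 = time.perf_counter()
    model.fit(df.head(n))                        # train on the first n rows
    print(f"{n} rows: {time.perf_counter() - t0:.2f} s")
```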
VI. ACTOR-CRITIC REINFORCEMENT LEARNING
After discussing the structure, training, and evaluation of the CTGAN and TVAE models in the previous sections, in this section, we present the architecture and the learning process of our proposed Actor-Critic Reinforcement Learning framework.
A. PROBLEM FORMULATION
We formulate LOS estimation on multiple links as a POMDP [43], since the learning agent only observes the link(s) it selects for transmission. A POMDP is expressed as $\{S, A, P(s_{t+\Delta t} \mid s_t, a_t), r_t\}$, where $S$ and $A$ are the state space and action space respectively, $P(s_{t+\Delta t} \mid s_t, a_t)$ is the transition probability from state $s_t \in S$ to state $s_{t+\Delta t} \in S$, and $r_t$ is the immediate reward for the action $a_t$.
4) Reward:
The immediate reward $r_t$ is expressed as a penalty whenever the E2E loss exceeds the defined threshold, where $\xi$ represents the E2E loss evaluated over an episode, $\epsilon$ is the loss threshold, and $\rho$ is the number of channels selected by the agent. When the loss is greater than the threshold, the first term of equation (8) motivates the agent to use multiple links to overcome the loss, while the second term encourages the use of a single link to conserve bandwidth in good channel conditions.
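Since only the behaviour of equation (8) is described here, the following sketch implements one plausible penalty of that kind; its exact form and coefficients are assumptions, not the paper's expression.

```python
def reward(xi: float, eps: float, rho: int) -> float:
    """A plausible penalty matching the description of eq. (8):
    above the loss threshold the penalty shrinks as more links are used;
    below it, the penalty grows with the number of links used."""
    if xi > eps:
        return -xi / rho     # first term: favour multipath under heavy loss
    return -(rho - 1)        # second term: favour a single link otherwise
```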
where ∇_{φ_a} J(φ_a) is the policy gradient and J(φ_a) is the policy objective function.
where φ_a and φ_c are the actor and critic network parameters. The choice of the Actor-Critic (AC) was motivated by the fact that the AC algorithm does not require prior knowledge of the model underlying the transmission channel. The AC algorithm searches for the optimal policy over a parametrized family of functions using a gradient-based approach. We designed the AC networks as fully connected multilayer perceptron NNs with the TensorFlow-2 [44] and Keras [45] libraries. More design and simulation parameters are given in Table 3.
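To make the two preceding subsections concrete, the sketch below first gives one plausible shape for the reward described above (the extracted text references equation (8) without reproducing it, so the penalty weights are hypothetical, not the paper's), followed by a minimal TensorFlow-2/Keras actor-critic performing a TD update in the spirit of equations (9)-(11). Layer widths, learning rates, and the three-action space {sat_1, sat_2, sat_{1,2}} are illustrative stand-ins rather than the values of Table 3.

```python
import tensorflow as tf

def immediate_reward(e2e_loss, threshold, num_links,
                     loss_penalty=1.0, bandwidth_cost=0.1):
    # Illustrative only: a penalty that fires when the E2E loss exceeds the
    # threshold (pushes toward multi-link redundancy), plus a per-extra-link
    # charge (pushes back toward single-link transmission).
    violation = -loss_penalty if e2e_loss > threshold else 0.0
    redundancy = -bandwidth_cost * (num_links - 1)
    return violation + redundancy

def build_actor(state_dim=2, n_actions=3):
    # Fully connected MLP emitting a categorical policy over link choices.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_actions, activation="softmax"),
    ])

def build_critic(state_dim=2):
    # MLP estimating the state value V(s).
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

actor, critic, target_critic = build_actor(), build_critic(), build_critic()
target_critic.set_weights(critic.get_weights())
actor_opt = tf.keras.optimizers.Adam(1e-4)
critic_opt = tf.keras.optimizers.Adam(1e-3)

def td_update(state, action, reward, next_state, gamma=0.99):
    # One actor-critic step: the target-critic bootstraps the TD target,
    # the critic regresses toward it, and the actor follows the policy gradient.
    s = tf.convert_to_tensor([state], tf.float32)
    s_next = tf.convert_to_tensor([next_state], tf.float32)
    td_target = reward + gamma * target_critic(s_next)[0, 0]
    with tf.GradientTape(persistent=True) as tape:
        value = critic(s)[0, 0]
        td_error = td_target - value               # TD error (cf. equation (11))
        critic_loss = tf.square(td_error)
        log_prob = tf.math.log(actor(s)[0, action] + 1e-8)
        actor_loss = -log_prob * tf.stop_gradient(td_error)
    critic_opt.apply_gradients(
        zip(tape.gradient(critic_loss, critic.trainable_variables),
            critic.trainable_variables))
    actor_opt.apply_gradients(
        zip(tape.gradient(actor_loss, actor.trainable_variables),
            actor.trainable_variables))
    del tape
```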
B. TRANSFORMING A POMDP INTO A FOMDP WITH GMs
To accelerate the convergence of the RL agent, we propose the use of GMs to generate synthetic channel states for the states that are not accessed by the agent at a given time. This transforms the POMDP into a FOMDP and gives the agent complete knowledge of all channels. As a result, the agent converges faster, maximizing the use of the satellite visibility period.

Algorithm 1: The Learning Process of the Actor-Critic Agent
1: Set L as the total number of iterations, M as the episode length, and N as the target-critic updating interval. Then, initialize the actor, critic, and target-critic networks with parameters φ_a, φ_c, and φ_tc respectively.
2: τ ← 0
3: l ← 0
4: while l ≤ L do
     - The actor selects the action a_t ∼ π_{φ_a}(s_t), i.e., the number of transmission links.
5:   i ← 0
6:   while i ≤ M do
       - Transmit the video on the selected links.
7:     if i = M − 1 then
         - Record the receiver report (channel states and loss rate).
         - Calculate the reward r_t using (8).
         - The critic computes the state value.
         - Compute the TD error δ_t using (11).
         - Update the actor and critic network parameters using (9) and (10) respectively:
8:         φ_a ← φ_a(t)
9:         φ_c ← φ_c(t)
         - Update the agent's observation of the states according to Algorithm 2.
10:    end if
11:    i ← i + 1
12:    τ ← τ + 1
13:  end while
14:  if τ = N then
       - Update the target-critic network using (12).
15:    τ ← 0
16:  end if
17: end while

As shown in Figure 7 (a), with POMDP, the agent only observes the states of the channels it selects for transmission, marked as 1 if the channel is in LOS and −1 if it is in NLOS, where 0 indicates that the channel was not accessed in that time slot and thus the agent has no state information for that channel. Whenever a channel is not accessed, its state is generated by the CTGAN and TVAE models. Therefore, the agent's observation of the channel states is modified as shown in Figure 7 (b), where the values in red mark the synthetic states generated by the GMs. In Figure 7 (b), the agent has a complete (FOMDP) observation of all states at any time slot. These state observations are fed as input to the actor-network, which learns the LOS probability of each channel and estimates the scheduling policy.
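Algorithm 2 (the observation update) is referenced above but not reproduced in the extracted text; the sketch below captures its gist as described in this subsection: every channel not accessed in the current slot (marked 0) receives a synthetic ±1 state drawn from the trained generative model. The generate_synthetic callable is a placeholder for CTGAN/TVAE sampling.

```python
import numpy as np

def complete_observation(partial_obs, generate_synthetic):
    # partial_obs: +1 (LOS) or -1 (NLOS) for channels accessed in this slot,
    # 0 for channels the agent did not select. Unaccessed entries are filled
    # with synthetic states so the actor sees a full (FOMDP) observation.
    obs = np.asarray(partial_obs, dtype=float).copy()
    for ch in np.flatnonzero(obs == 0):
        obs[ch] = generate_synthetic(ch)   # expected to return +1 or -1
    return obs

# Example with a dummy generator that always predicts LOS for channel 1.
print(complete_observation([1, 0], lambda ch: 1))   # -> [1. 1.]
```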
C. TRAINING PROCEDURE
As detailed in Algorithm 1, at the start of an episode, the actor selects transmission links according to its observation of the channel states at time t and transmits the traffic on the selected links for the entire episode. At the end of the episode, the agent records the receiver report, which contains the E2E loss rate for that episode and the state (LOS/NLOS) of each selected link, determined by the last bit's reception status. The E2E loss is used to calculate the reward, which is then used by the critic and the target-critic to compute the current and future state values respectively. The Temporal Difference (TD) error is found using (11) and is used to update both the critic and the actor networks. The agent's state observation is updated according to Algorithm 2. After a given number of iterations, the target-critic is updated with a soft-update method, i.e., copying the weights of the critic network according to a defined update factor, which in our case is the learning rate of the critic. Figure 6 shows the schematic representation of the whole training procedure.
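The soft update mentioned above fits in a few lines; per the text, the update factor is set to the critic's learning rate. The sketch assumes Keras-style models exposing get_weights/set_weights.

```python
def soft_update(critic, target_critic, tau=1e-3):
    # Polyak averaging: target <- tau * online + (1 - tau) * target, applied
    # every N iterations; here tau equals the critic's learning rate.
    mixed = [tau * w + (1.0 - tau) * tw
             for w, tw in zip(critic.get_weights(), target_critic.get_weights())]
    target_critic.set_weights(mixed)
```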
VII. PERFORMANCE EVALUATION
The goal of this study is to investigate whether converting a POMDP of the RL agent to a FOMDP by using GMs to generate synthetic channel states of the channels that the agent does not select can accelerate the convergence rate of the RL agent and allow it to transmit with the optimal policy for most of the satellite visibility time.
To this end, we ran several simulations to train our agent in four different cases with different E2E loss thresholds: 0.0001, 0.0005, 0.001, and 0.01. In each of these four cases, the agent was trained using different channel states. First, we trained the agent in a partially observable Markov decision process (POMDP), where the agent learns from only the states of the channel it selects for transmission. Then we trained it in a fully observable Markov decision process (FOMDP) using real data obtained by using channel models. Finally, we trained the agent in FOMDP using synthetic data generated by CTGAN and TVAE models. In total, we ran 16 simulations to test the learning performance of our agent and the effect of using generative models. Each simulation lasted 1000 episodes with 1000 iterations for each episode. In the following, we evaluate the performance of the agent in terms of its learning performance, convergence rate, its ability to overcome the E2E loss, and the bandwidth used. For comparison, we also report the performance of the optimal scheduling policy. This is the policy that assumes that the LOS states of the channels are known in advance so that the steady-state probabilities for all available paths are exactly known.
A. LINK SELECTION PERFORMANCE
In this part, we evaluate the ability of our AC-agent to select suitable transmission links in various situations. Figure 8 compares the categorical distributions achieved at convergence by our AC-agent (red) and the optimal policy (blue). The categorical distributions are the probabilities of transmitting with satellite 1 (sat_1) at 70°, satellite 2 (sat_2) at 60°, and both satellites (sat_{1,2}).
1) EFFECT OF E2E LOSS THRESHOLD
Four different E2E loss thresholds were considered: 0.0001, 0.0005, 0.001, and 0.01. This means that in each case the learning agent has to determine the suitable links to use and whether to use single or double transmissions in order to keep the E2E loss rate below the predefined threshold. It is expected that when the threshold is very low the agent should favor double transmissions compared to when the threshold is high. The results show that our agent is able to recognize this pattern and uses double transmissions with redundancy when the E2E loss threshold is low, at 0.0001 and 0.0005. When the threshold is moderate, at 0.001, the agent still favors double transmission, but it also uses single transmission more than in the previous two cases (0.0001 and 0.0005). However, when the E2E loss threshold is high, at 0.01, the agent uses more single transmission because it is easy to meet the threshold without using redundancy, thereby preserving bandwidth. Figure 8 shows that in the fourth case, when the E2E loss threshold is 0.01, the agent uses single transmission and transmits more via satellite 1 than satellite 2 because satellite 1 is at a higher elevation angle of 70° compared to satellite 2, which is at 60°. Thus, satellite 1 is assumed to have a higher LOS probability than satellite 2 because, in urban areas, there are fewer obstacles like buildings at higher elevation angles than at lower elevation angles. These results show that our agent can learn the LOS probabilities of different links and select the suitable links that have higher LOS probability and higher chances of good traffic reception.
3) EFFECT OF USING GENERATIVE MODELS
In each of the E2E loss thresholds considered, four different simulations were performed: using POMDP, FOMDP with real data, FOMDP with the CTGAN model, and FOMDP with the TVAE model, as shown in Figure 8. The POMDP and FOMDP with real data are used as benchmarks to evaluate the effect of using synthetic data generated by the CTGAN and TVAE models. It can be seen that for all four E2E loss thresholds, when the AC-agent uses FOMDP with synthetic data generated by CTGAN and TVAE, it achieves good performance similar to FOMDP with real data and outperforms the POMDP, especially in the fourth case when the E2E loss threshold is 0.01. This shows that using generative models to transform a POMDP into a FOMDP increases the learning performance of the AC agent in selecting suitable transmission links.
4) COMPARISON BETWEEN THE AC-AGENT AND THE OPTIMAL POLICY
In Figure 8, the categorical distributions achieved by our agent are shown in red and those achieved by the optimal policy are shown in blue. The optimal policy is the scheduling policy that is assumed to have prior knowledge of the satellite LOS probabilities. The results show that in all the simulation scenarios considered, our learning agent achieves good performance comparable to the optimal policy which is assumed to know the channel states in advance.
B. CONVERGENCE RATE
In Table 7, we report the episodes in which the agent achieved convergence, i.e., the episode in which the KL-divergence between its categorical distributions and those of the optimal policy is minimal. These results show that for all E2E loss thresholds considered, using GMs to generate synthetic channel states increases the convergence rate of the learning agent compared to the case where the agent learns with partially observable channel states. For example, with an E2E loss threshold of 0.0001, the agent converges after 597 episodes in POMDP while it converges after 320 episodes in FOMDP with CTGAN and TVAE. This corresponds to a 47% improvement in convergence speed, a performance similar to the benchmark FOMDP, in which the agent uses real datasets and converges after 319 episodes. Similar improvements in convergence rate can be observed in all other scenarios. These results show that using GMs to generate synthetic channel states of the channels not selected by the agent, thereby converting a POMDP to a FOMDP, significantly improves the convergence rate of the learning agent. Figure 9 and Table 7 also show that TVAE converges faster and is more stable than CTGAN, perhaps because TVAE directly learns the distribution of the input data, unlike CTGAN. We also find that as the E2E loss threshold increases, the agent converges relatively faster and arrives at a relatively better steady-state policy. This shows that our agent can operate in a wide range of propagation environments with different QoS requirements. The results in Table 7 also show that both offline and real-time-trained GMs achieve comparable performance. For example, at the 0.0001 E2E threshold, using real-time-trained CTGAN and TVAE models the learning agent converges after 325 and 367 episodes, only 6 and 48 episodes respectively more than with the offline-trained models. It can be concluded that our proposed approach of using GMs to accelerate the convergence of the learning agent can also be used in real-time operations without slowing down the agent's rate of convergence.
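Convergence is declared at the episode where the KL-divergence between the agent's categorical distribution over {sat_1, sat_2, sat_{1,2}} and that of the optimal policy is minimal; a minimal rendering of that distance, with made-up example distributions, is:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two categorical distributions; eps guards log(0).
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Agent's action distribution vs. the optimal policy's (illustrative values).
print(kl_divergence([0.10, 0.20, 0.70], [0.05, 0.15, 0.80]))
```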
C. CONVERGENCE TIME AND THE SATELLITE VISIBILITY PERIOD
In this part, we compare the agent's convergence time to the satellite visibility period. The results in Table 7 show that for partially observable states, the maximum convergence time is reached with the 0.0001 threshold, where the agent converges after 597 episodes, which corresponds to 597k iterations. One iteration corresponds to the transmission of one bit. Assuming that the data rate of both the transmitter and receiver is 1 Mbps, which is feasible for satellite communications, convergence takes about 1.194 seconds, considering the time to transmit and receive feedback. This is about 0.006 times the satellite visibility period of 210 seconds in the considered scenario. In the case of the real-time-trained GMs, a training dataset of 10k was used, which can be obtained online in only 0.02 seconds, taking into account the transmission and feedback time. Table 6 shows that the CTGAN and TVAE models are trained in 3.89 and 2.39 seconds, respectively. Table 7 shows that when using real-time training, the agent converges after 325 episodes with CTGAN and 367 episodes with TVAE, which correspond to 0.65 and 0.734 seconds, respectively. This means that the total time required to acquire the training data and train the GMs, as well as the time required for the learning agent to converge, is approximately 4.56 seconds for CTGAN and 3.144 seconds for TVAE, corresponding to 0.02 and 0.01 times the satellite visibility period respectively. It can be concluded that the learning agent can converge fast enough to make the best use of the satellite visibility time even when the GMs are trained in real time. Figure 9 is the graphical representation of the convergence rates of the learning agent in the different scenarios. At the end of each learning episode, we recorded the KL-distance between the categorical distributions achieved by the AC-agent and those achieved by the optimal policy. The results in Figure 9 show that in all the scenarios described above, the KL-distance decreases as the simulation progresses. This shows that our agent is able to learn the LOS of the channels and converge to a steady-state scheduling policy as the KL-distance approaches 0, i.e., the learning agent achieves the same categorical distributions as the optimal policy. It can also be seen that using generative models, in this case CTGAN and TVAE, accelerates the convergence. This shows that our proposed approach of using GMs to transform a POMDP into a FOMDP can enable the learning agent to converge within the satellite visibility period.
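The timing figures above follow from simple arithmetic (one bit per iteration, 1 Mbps, and a factor of two for transmission plus feedback); the snippet below reproduces them.

```python
RATE_BPS = 1e6          # assumed link data rate

def convergence_seconds(episodes, iters_per_episode=1000):
    # Each iteration transmits one bit; double the time for the feedback path.
    return 2 * episodes * iters_per_episode / RATE_BPS

print(convergence_seconds(597))                   # ~1.194 s (POMDP, 0.0001)
print(0.02 + 3.89 + convergence_seconds(325))     # ~4.56 s (real-time CTGAN)
print(0.02 + 2.39 + convergence_seconds(367))     # ~3.144 s (real-time TVAE)
```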
D. E2E LOSS RATE
In multipath transmission, traffic is considered lost if the transmitted traffic cannot be recovered on any of the available paths. In our case, the loss granularity is the bit. Therefore, the E2E loss rate (in BER) is defined as the ratio of the total number of bits lost to the total number of bits transmitted over an episode, ξ = Σ_{i=1}^{M} υ_i / Σ_{i=1}^{M} ω_i, where M is the number of iterations in an episode, and ω_i and υ_i are the number of bits transmitted and lost in iteration i respectively. Figure 10 shows the E2E loss rates achieved by the learning agent compared to the optimal policy at each of the loss thresholds (0.0001, 0.0005, 0.001, and 0.01) in the different scenarios: POMDP and FOMDP with real data, CTGAN, and TVAE. These results show that in all scenarios, as the agent continues to learn, the E2E losses decrease toward the end of the simulation. It can be seen that, as intended, using GMs lowers the E2E loss more than using partially observable states, as the agent converges faster and transmits with the best policy most of the time. We also observe that TVAE shows better performance than CTGAN. Table 8 shows the numerical values of the average E2E loss rates. The high loss rates observed may be due to the fact that the reference city has satellite links with high losses due to low LOS probabilities, as shown in Table 1. For this reason, even with the optimal policy, the loss rate is higher than the thresholds, except for the highest threshold of 0.01. Figure 11 shows the bandwidth used in terms of the average number of bits transmitted by the learning agent and the optimal policy in each learning episode. It can be seen that at low loss thresholds (0.0001, 0.0005, and 0.001), both the learning agent and the optimal policy trade bandwidth to overcome the E2E loss. While the optimal policy uses double transmission most of the time, the learning agent starts with single transmission and slowly learns and converges towards double transmission. However, at the higher loss threshold of 0.01, both converge to single transmission. The results in Figure 8 also show this behavior of using high bandwidth at low loss thresholds and low bandwidth at high loss thresholds. Table 9 shows the average throughput in megabits per second (Mbps). In our simulations, we assumed 1.5 Mbps as the source rate. These results show that our agent can learn the link characteristics and proactively transmit with redundancy to overcome high losses, and use single transmission in low-loss conditions to save bandwidth.
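The benefit of duplication follows directly from this definition: a bit is lost end-to-end only if it is lost on every path it was sent on. The toy Monte-Carlo below assumes independent per-path losses (a simplification, since real LOS/NLOS states are time-correlated) and shows double transmission driving the E2E rate well below either single-path rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def e2e_loss(p_loss_per_path, n_bits=100_000):
    # A bit survives if it is recovered on at least one of the paths.
    lost = np.ones(n_bits, dtype=bool)
    for p in p_loss_per_path:
        lost &= rng.random(n_bits) < p
    return lost.mean()

print(e2e_loss([0.05]))         # single path: ~0.05
print(e2e_loss([0.05, 0.08]))   # duplicated over two paths: ~0.004
```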
VIII. CONCLUSION
In this work, we presented an AI-based framework for the upcoming 6G-NTN integrated networks. The framework consists of an AC-RL agent and GMs. The RL agent estimates the LOS probabilities and schedules traffic over multiple access links connecting the UE to LEO satellites in a multi-access mode, while the generative models (GANs and VAEs) are used to transform a POMDP into a FOMDP to accelerate the learning process of the agent so that it can converge within the satellite visibility period. Simulation results have shown that our approach significantly improves the learning process and shortens the convergence time. As a result, the agent is able to transmit with an optimal policy for most of the satellite visibility period, thus satisfying the QoS requirements by reducing E2E losses without incurring additional bandwidth costs. In the 6G context, our framework can offer learning-based LOS estimation and traffic scheduling in 6G-NTN integrated networks to improve link reliability and availability, increase data rates and throughput, and improve the QoS and the user Quality of Experience (QoE), which are among the main pillars of the upcoming 6G mobile networks. In addition, we have shown that the GMs can be trained in real time using network data collected by the RL agent, eliminating the need for prior knowledge of the channel model or training data. | 2023-08-26T16:10:49.996Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "0216787bf1d01091bbf911f20cf938748446dae9",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/8782661/8901158/10225588.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "e597b838d0b13e73884f360ae695c8cc40667eba",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
118372198 | pes2o/s2orc | v3-fos-license | The cooperative Lamb shift in an atomic nanolayer
We present an experimental measurement of the cooperative Lamb shift and the Lorentz shift using an atomic nanolayer with tunable thickness and atomic density. The cooperative Lamb shift arises due to the exchange of virtual photons between identical atoms. The interference between the forward and backward propagating virtual fields is confirmed by the thickness dependence of the shift which has a spatial frequency equal to $2k$, i.e. twice that of the optical field. The demonstration of cooperative interactions in an easily scalable system opens the door to a new domain for non-linear optics.
One of the more surprising aspects of quantum electrodynamics (QED) is that virtual processes give rise to real phenomena. For example, the Lamb shift arises from a modification of the transition frequency of an atom due to the emission and reabsorption of virtual photons. Similarly, in cavity QED [1][2][3] the reflection of the virtual field by a mirror modifies the absorptive and emissive properties of the atom. In a cooperative process such as superradiance, the light-matter interaction is modified by the proximity of identical emitters. The dispersive counterpart of superradiance is known as the cooperative Lamb shift [4] (also sometimes referred to as the collective or N-atom Lamb shift [5]). The cooperative Lamb shift and the cooperative decay rate (i.e. super- or subradiance) arise from the real and imaginary parts of the dipole-dipole interaction, respectively. Although superradiance has been investigated extensively [6], experimental studies of the cooperative Lamb shift are scarce. Evidence for the shift is restricted to two particular cases, involving three-photon excitation in the limit of the thickness ℓ being much larger than the transition wavelength λ in an atomic gas [7], and X-ray scattering from Fe layers in a planar cavity [8], demonstrating the fundamental link between the cooperative shift and superradiance. However, the full thickness dependence of the shift in a planar geometry with ℓ < λ, predicted four decades ago [4], has not been observed.
Here we present experimental measurements of the cooperative Lamb shift in a gaseous nanolayer of Rb atoms as a function of both density and length. The atoms are confined in a cell between two superpolished sapphire plates. Similar nanolayers have been studied extensively over the last two decades, see e.g. [9][10][11][12][13][14]. We extend this work to the high density regime where dipole-dipole interactions dominate. In addition, by building the effects of dipole-dipole interactions into a sophisticated model of the absorption spectra, we are able to extract the length dependence of the resonant shift and thereby verify that the spatial frequency of the cooperative Lamb shift is equal to twice that of the light field [4]. We thus confirm the fundamental mechanism of the cooperative Lamb shift as the exchange of virtual photons.
The underlying mechanism of light scattering is the interference between the incident field and the local field produced by induced oscillatory dipoles. In a medium with N two-level dipoles per unit volume, the susceptibility for a weak field is given by the steady-state solution to the optical Bloch equations (see e.g. [15]), χ(Δ) = −(N d²/ℏε₀) · 1/(Δ + iγ_ge), where d is the transition dipole moment, γ_ge is the decay rate of the coherence between the ground and excited states, and Δ is the detuning from resonance. The response of an individual dipole is described in terms of the polarizability α_p, defined through χ = 4πN α_p for independent dipoles. In a dense medium, the field produced by the dipoles modifies the optical response of each individual dipole. This modified response is found by adding the incident field to the dipolar field, E_loc = E + P/3ε₀, where E_loc is known as the Lorentz local field [16]. The susceptibility determines the bulk response P = ε₀χE, whereas the polarizability determines the local response P = 4πε₀N α_p E_loc. Substituting for E and P we find a relation between the macroscopic variable χ and the single-dipole parameter α_p, which is referred to as the Lorentz-Lorenz law [16]: χ = 4πN α_p / (1 − 4πN α_p/3). Substituting for α_p we find χ(Δ) = −(N d²/ℏε₀) · 1/(Δ + N d²/3ℏε₀ + iγ_ge), and hence the first-order correction due to dipole-dipole interactions is a shift of the resonance frequency known as the Lorentz shift, Δ_LL = −N d²/3ℏε₀. However, as discussed by Stephen [17] and Friedberg, Hartmann and Manassah [4], the pairwise dipole-dipole interaction also contains a radiation term. The complete pair potential for two dipoles, V_dd, given as Eq. (6), scales with ε = −3Γ/(4(kr)³), where r and θ are their separation and relative angle, respectively, and Γ is the natural linewidth of the dipole transition with wavevector k = 2π/λ. The real and imaginary parts of V_dd give rise to a level splitting and a modification of the spontaneous lifetime (superradiance or subradiance), respectively [4,[17][18][19].
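To see the first-order effect of the local-field correction numerically, the dimensionless sketch below evaluates the two-level response with and without the Lorentz-Lorenz denominator (detuning in units of γ_ge; the coupling strength χ₀, standing in for N d²/ℏε₀, is an arbitrary illustrative value). The absorption peak of the corrected susceptibility sits near Δ = −χ₀/3, i.e., the Lorentz red-shift derived above.

```python
import numpy as np

gamma = 1.0                                  # coherence decay rate (units)
delta = np.linspace(-20.0, 20.0, 4001)       # detuning grid
chi0 = 5.0                                   # assumed coupling strength

chi_bare = -chi0 / (delta + 1j * gamma)      # independent-dipole susceptibility
chi_ll = chi_bare / (1.0 - chi_bare / 3.0)   # Lorentz-Lorenz corrected response

# Locate the absorption maxima (imaginary parts): the corrected line is
# red-shifted by ~chi0/3 relative to the bare two-level resonance at 0.
print(delta[np.argmax(chi_bare.imag)], delta[np.argmax(chi_ll.imag)])
```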
While these effects have been demonstrated in experiments on two ions [20] and two molecules [21], a key advantage in our experiment is that we can easily vary the mean spacing r between atoms. By changing the temperature of the vapor between 20 °C and 350 °C we can smoothly vary the number density over 7 orders of magnitude. In doing so we move between two regimes: one where dipole-dipole interactions are negligible (Nk⁻³ ≪ 1) and one where they dominate. For more than two dipoles, the cooperative N-atom shift and decay rate are given by a sum of the pairwise dipole-dipole interaction, Eq. (6), over all pairs. For the relatively simple case of an ensemble of dipoles confined within a thin plane of thickness ℓ, the sum produces the shift of Eq. (7) [4], in which the first term is the Lorentz shift and the second term is the cooperative Lamb shift. There are two remarkable features of Eq. (7). First, the cooperative Lamb shift is a shift to higher energy. One can understand the opposite signs of the Lorentz shift and the cooperative Lamb shift from the pairwise potential, Eq. (6). For a thin slab where all the dipoles lie in the plane, all the dipoles oscillate in phase such that the dipole-dipole interaction reduces to the static case, which after averaging over all angles gives an attractive interaction, resulting in the Lorentz shift to lower energy. As one moves out of the plane in the propagation direction, the relative phase of the dipoles changes, and at a separation of λ/4 the second dipole re-radiates a field that is π out of phase with the source dipole. This switches the sign of the interaction, giving rise to the cooperative Lamb shift to higher energies. The second interesting property of the shift is that it depends on twice the propagation phase kℓ, which arises due to the re-radiation by the second dipole [4]. Finally, we note that while superradiance requires excitation of the medium, the cooperative Lamb shift can be observed in the limit of weak excitation, where there is negligible population of the excited state.
It is important to note that the shift Δ_dd applies to a static medium. For a gaseous ensemble, atomic motion leads to collisions that also contribute a density-dependent shift Δ_col and broadening Γ_self of the resonance lines (see [22] and references therein), and thus the total shift for a thermal ensemble becomes Δ_tot = Δ_dd + Δ_col. While evidence for density-dependent shifts has been observed in experiments on selective reflection [23], it is important to measure Δ_tot as a function of the length of the medium to separate the length-independent collisional shift Δ_col [4] from the length-dependent cooperative Lamb shift. Below we present experimental data that allow that distinction to be made for the first time.
To measure the cooperative Lamb shift, we use a gaseous atomic nanolayer of Rb confined in a vapor cell with thickness ℓ < λ. The cell is shown in Fig. 1(a), and consists of a Rb reservoir and a window region, where the Newton rings indicate the variation in the cell thickness from 30 nm at the centre to 2 µm near the bottom of the photograph. The wedge-shaped thickness profile arises due to the slight curvature of one of the windows (radius of curvature R > 100 m). The local thickness at the position of the probe laser is measured at operational temperature using an interferometric method outlined in Ref. [24]. The local surface roughness measured over an area of 1 mm² is less than 3 nm for any part of the window, and the focus of the beam is ≪ 1 mm². The reservoir can be heated almost independently of the windows and its temperature determines the Rb number density, while the windows are kept > 50 °C hotter to prevent condensation of Rb vapor. By changing the temperature of the vapor between 20 °C and 350 °C we can vary the atomic density between the regimes Nk⁻³ ≪ 1, where dipole-dipole interactions are negligible, and Nk⁻³ ≈ 100, where dipole-dipole interactions dominate.
To determine the optical response of the medium we record transmission spectra as a narrowband laser is scanned across the D2 resonance in Rb at 780 nm. The light is reduced to a power P ≈ 100 nW and focussed to a 30 µm spot size inside the cell, leading to a local vapor length variation due to the wedge-shaped profile of less than 3 nm. The accuracy in determining the cell thickness is therefore limited by the surface flatness of the windows. Though the intensity of the light is greater than the conventional saturation intensity (I_sat ≈ 1.7 mW/cm² for the Rb D2 line), the extremely short length of the cell means that optical pumping is strongly suppressed. The transmission is recorded on a photodiode, and a reference cell and Fabry-Perot interferometer are used to calibrate the laser frequency. Example experimental spectra for a thickness of ℓ = 90 nm are shown in Fig. 1(b), where the shift is clearly visible. The shift is extracted by fitting the observed spectra to a comprehensive model of the absolute transmission, based on a Marquardt-Levenberg method (see e.g. Ref. [25]). The model includes the effect of collisional broadening and has been shown to predict the absolute absorption of Rb vapor to better than 0.5% [22,26]. To this we add the effects of Dicke narrowing [9], where the Doppler effect is partially suppressed as a result of the short length scale; cavity effects [12], since the cell is a low-finesse etalon (with finesse F ∼ 1); and a single parameter which accounts for a frequency shift of the whole spectrum. In the spectrum obtained in the dipole-dipole dominated regime (Nk⁻³ ≈ 50) for a thickness ℓ = 90 nm, the individual hyperfine transitions are no longer resolved and there is a clear shift of the resonance to lower frequency. To illustrate this, we also plot the theoretical prediction with the line shift removed. From fitting the data, the collisional broadening is found to be Γ_self = √2 · 2πNΓk⁻³ = 2π(1.0 × 10⁻⁷)N Hz cm³ for thicknesses greater than λ/4, in agreement with previous work (see [22] and references therein). For thicknesses shorter than λ/4 we observe additional broadening that requires further investigation. We also observe a van der Waals shift due to atom-surface interactions, which we extract by fixing the density and varying the cell thickness (see also [14]), but even for the smallest thickness (90 nm) this is more than an order of magnitude smaller than the cooperative Lamb shift.
By comparing the experimental data with the theoretical prediction we extract the line shift as a function of number density and the thickness of the medium. In Fig. 3 we show the measured shift as a function of number density for two thicknesses, ℓ = 90 nm and 250 nm. Hyperfine splitting gives rise to a different effective dipole for each transition in the spectrum, which at low densities shift independently. However, in the high density regime (N > 10¹⁶ cm⁻³) dipole-dipole interactions dominate the lineshape and hyperfine splitting becomes negligible. We can then treat the line as a single s_{1/2} → p_{3/2} transition which shifts linearly with density, as shown in Fig. 3. We fit the gradient of the linear region to obtain the coefficient of the shift, and repeat these measurements for 13 thicknesses up to 600 nm. For thicknesses greater than 600 nm, the high optical depth of the sample impairs resolution of the line shift. We also observe anomalous behaviour around ℓ = 420 nm, which may be due to the 5s-6p atomic resonance around 420 nm in Rb populated by the well-known energy pooling process [27].
In Fig. 4 we plot the gradient of the line shift as a function of cell thickness. For the Rb D2 resonance, Δ_LL/N = −2πΓk⁻³, where we have used the relationship between the dipole moment for the s_{1/2} → p_{3/2} transition and the spontaneous decay rate, d = √(2/3) ⟨L_e = 1|er|L_g = 0⟩ (see Ref. [26]). We extract the collisional shift by comparing the data to Eq. (8) with Δ_col as the only free parameter. The amplitude and period of the oscillatory part are fully constrained by Eq. (7). We find the collisional shift to be Δ_col/2π = (0.25 ± 0.01) × 10⁻⁷ Hz cm³, similar to previous measurements on potassium vapor [23]. In this high density limit, the collisional shift is also independent of hyperfine splitting. The solid line is the prediction of Eq. (7), and the agreement between the measured shifts and the theoretical prediction is remarkable (the reduced χ² for the data is 1.7). As well as measuring the thickness dependence of the cooperative Lamb shift, our data also provide a determination of the Lorentz shift, which can only be measured in the limit of zero thickness.
The demonstration of the cooperative Lamb shift and coherent dipole-dipole interactions in media with thickness ∼ λ/4 opens the door to a new domain for quantum optics, analogous to the strong dipole-dipole non-linearity in blockaded Rydberg systems [28,29], that combines high bandwidth and high repetition rate with a simple optical set-up that is easily scalable. As the cooperative Lamb shift depends on the degree of excitation [4], exotic non-linear effects such as mirrorless bistability [30,31] are now accessible experimentally. In addition, given the fundamental link between the cooperative Lamb shift and superradiance, sub-quarterwave nanolayers offer an attractive system to study superradiance in the small volume limit. These topics will form the focus of future research.
We would like to thank M. P. A. Jones for stimulating discussions. We acknowledge financial support from EPSRC and Durham University. | 2012-01-25T12:38:49.000Z | 2012-01-25T00:00:00.000 | {
"year": 2012,
"sha1": "d20d404ab6d528cb521249898b4d177934f63a2e",
"oa_license": null,
"oa_url": "http://dro.dur.ac.uk/9661/1/9661.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d20d404ab6d528cb521249898b4d177934f63a2e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
219966170 | pes2o/s2orc | v3-fos-license | Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation
Network pruning is one of the most dominant methods for reducing the heavy inference cost of deep neural networks. Existing methods often iteratively prune networks to attain a high compression ratio without incurring a significant loss in performance. However, we argue that conventional methods for retraining pruned networks (i.e., using a small, fixed learning rate) are inadequate as they completely ignore the benefits from snapshots of iterative pruning. In this work, we show that strong ensembles can be constructed from snapshots of iterative pruning, which achieve competitive performance and vary in network structure. Furthermore, we present a simple, general and effective pipeline that generates strong ensembles of networks during pruning with large learning rate restarting, and utilizes knowledge distillation with those ensembles to improve the predictive power of compact models. In standard image classification benchmarks such as CIFAR and Tiny-Imagenet, we advance the state-of-the-art pruning ratio of structured pruning by integrating simple l1-norm filter pruning into our pipeline. Specifically, we reduce 75-80% of the total parameters and 65-70% of the MACs of numerous variants of ResNet architectures while having comparable or better performance than that of the original networks. Code associated with this paper is made publicly available at https://github.com/lehduong/ginp.
INTRODUCTION
Motivation Researchers have extensively exploited deep and wide networks for the sake of achieving superior performance on various tasks. Most state-of-the-art networks are extremely computationally expensive and require excessive memory. However, real-world applications usually require running deep neural networks on edge devices for various reasons: user privacy, security, real-time analysis, offline capability, reducing the cost of server deployment, and so on. Adapting large, cumbersome networks to such resource-constrained environments is challenging due to the restrictions on memory, computational power, and energy consumption.
Background: Network pruning
LeCun et al. (1990); Reed (1993); Han et al. (2015); Li et al. (2016) reduce a cumbersome, over-parameterized network to a compact one by removing unnecessary weights and connections. It is widely believed that small networks pruned from large, over-parameterized networks achieve superior performance to those trained from scratch Frankle & Carbin (2018); Renda et al. (2020); Li et al. (2016); Luo et al. (2017). A plausible explanation for this phenomenon is the lottery ticket hypothesis Frankle & Carbin (2018), i.e., large, over-parameterized networks contain many optimal sub-networks, i.e., winning tickets. In particular, network pruning can be done in two manners: one-shot pruning, which prunes the network to the desired compression ratio and retrains it only once, or iterative pruning, which prunes only a small ratio of the original network, retrains, and repeats that process until the target size is reached. It has been shown that iterative pruning can lead to a greater compression ratio compared to one-shot pruning Han et al. (2015); Luo et al. (2017); Li et al. (2016); Renda et al. (2020).
On the other hand, ensembles of neural networks are known to be much more robust and accurate than individual networks Huang et al. (2017); Ashukha et al. (2020); Snoek et al. (2019). In spite of their superior performance, the tremendous cost of training and inference of ensembles makes them less attractive in practice. For the purpose of accelerating the training of ensembles, prior works propose methods encouraging models to converge to different local minima during training Huang et al. (2017); Garipov et al. (2018); Yang et al. (2019b). To reduce the inference time of ensembles, one can use a single network to mimic the behavior of an ensemble, as pioneered by born-again trees Breiman & Shang and knowledge distillation Hinton et al. (2015); Balan et al. (2015); Bucilu et al. (2006); Malinin et al. (2019). In the above approaches, although small networks cannot achieve performance comparable with ensembles of networks, the dark knowledge transferred from teachers to the student network can bridge the gap between their predictive powers.
Our proposal: While existing methods of iterative pruning are more effective than one-shot pruning, the snapshots at each pruning iteration are mostly overlooked. We consider leveraging the snapshots of iterative pruning to take the performance of compact models to the next level.
In this work, we propose a simple pipeline for model compression obtained by slightly modifying the standard approach. Specifically, we make use of large learning rate restarting at each pruning iteration to retrain pruned networks. Hence, each retraining step can be considered a cycle of Snapshot Ensembles Huang et al. (2017). Utilizing both large learning rate restarting and pruning fosters diversity between snapshots, thus constructing strong ensembles. Once we achieve the desired compression ratio, we distill the knowledge from the ensemble of snapshots of iterative pruning to the final model. Our method acquires the advantages of network pruning, ensemble learning, and knowledge distillation. To the best of our knowledge, this is the first work attempting to exploit snapshots of iterative pruning to further improve the performance of pruned networks.
Our main contributions: The contributions of our work are summarized as below: 1. We empirically show that fine-tuning with large learning rate restarting can achieve competitive or better results than the common strategy, i.e., a small, fixed learning rate, on a range of standard datasets and architectures. Surprisingly, such a simple modification can create very strong baselines for both structured and unstructured pruning.
2. We demonstrate that snapshots of iterative pruning could construct strong ensembles.
3. We propose a simple pipeline to combine knowledge distillation from ensembles and iterative pruning. We empirically show that our approach can achieve a state-of-the-art pruning ratio by reducing 75-80% of parameters and 65-70% of MACs on numerous variants of ResNet while having comparable or better results than the original networks.
RELATED WORKS
Knowledge Distillation The approach of training a small, efficient student network to mimic the behavior of a large, over-parameterized network was proposed a long time ago Bucilu et al. (2006) and was recently repopularized in Hinton et al. (2015); Ba & Caruana (2014). Later, knowledge distillation was extended in various directions: transferring knowledge from intermediate layers Romero et al. (2014); Zagoruyko & Komodakis (2016), allowing teachers and students to guide each other Zhang et al. (2018), and using a teacher and student with the same architecture. Network Pruning The idea behind network pruning is to reduce the redundant weights and connections of the original network to obtain compact networks without losing much performance Han et al. (2015); Li et al. (2016). In general, pruning can be divided into two categories: structured pruning and unstructured pruning. Unstructured pruning Hanson & Pratt (1989) removes individual redundant weights, whereas structured pruning, e.g. Molchanov et al. (2016), removes redundant weights at the level of filters/channels/layers, thus speeding up the inference of networks directly. There are numerous approaches to determining redundant filters/weights: Luo et al. (2017) use statistical information of the next layer to select unimportant filters, Li et al. (2016) prune the filters that have the smallest ℓ1-norm in each layer, and Molchanov et al. (2016) select the filters that minimize the change in loss, estimated with a Taylor expansion. As these criteria are rough estimates of weight importance, pruning a large number of filters/weights at once might break down the network and lead to inferior performance compared to iterative pruning Han et al. (2015); Li et al. (2016). Recently, Liu et al. (2018) empirically showed that training the pruned model from scratch can also achieve comparable or even better performance than fine-tuning. While the efficacy of pruning remains an open question, in this work we propose exploiting the benefits of generating multiple networks of different capacities for model compression.
BACKGROUND: KNOWLEDGE DISTILLATION
Consider the classification problem in which we need to determine the correct category for an input image x among M classes. The probability of class m for sample x_n given by a neural network f parameterized by θ is computed with a temperature-softened softmax over the network outputs z (Eq. (1)): p_m(x_n) = exp(z_m(x_n)/τ) / Σ_j exp(z_j(x_n)/τ), where τ is the temperature of the softmax function; higher values of τ lead to a softer output distribution. Conventional approaches optimize the parameters θ by sampling mini-batches B from the dataset and updating the parameters to minimize the cross-entropy objective (Eq. (2)): L_CE = −(1/|B|) Σ_{n∈B} Σ_{m=1}^{M} y_m(x_n) log p_m(x_n), where y(x_n) is the target distribution. The target distribution of a sample is usually represented by a one-hot vector, i.e., only the true class is 1 and all other classes are 0. Since input images might differ in terms of noise, complexity, and multi-modality, forcing networks to excessively fit the delta distribution of the ground truth for all samples might deteriorate their generalization. Besides that, the similarity between classes provides rich information for learning and can potentially prevent overfitting Yang et al. (2019a). Knowledge distillation Bucilu et al. (2006); Hinton et al. (2015) uses a trained (teacher) network, which usually has high capacity, to guide the training of another (student) network. Let q_m(x_n) be the probability of class m for image x_n given by the teacher network, which is parameterized by ψ. The objective function of knowledge distillation is defined as the Kullback-Leibler divergence between the teacher and student distributions (Eq. (3)): L_KD = (1/|B|) Σ_{n∈B} Σ_{m=1}^{M} q_m(x_n) log (q_m(x_n)/p_m(x_n)). In case the teacher is an ensemble of K networks, the target distribution of knowledge distillation is the average of the outputs of all networks: q̄_m(x_n; ψ_{1:K}) = (1/K) Σ_{k=1}^{K} q_m(x_n; ψ_k). An alternative approach is optimizing the mean of the Kullback-Leibler divergences between the student and each teacher network (Eq. (4)): L_KD = (1/K) Σ_{k=1}^{K} KL(q(·; ψ_k) ‖ p). We experimented with the two above objectives but did not observe a significant difference in the performance of the student networks; thus, we only report results of the second approach.
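A minimal numpy rendering of these objectives, assuming the networks expose pre-softmax logits, is given below; the temperature value and the eps guard are implementation details rather than values taken from this section (the experiments later use τ = 5).

```python
import numpy as np

def softmax_T(logits, T):
    # Temperature-softened softmax: higher T yields a softer distribution.
    z = np.asarray(logits, float) / T
    z -= z.max(axis=-1, keepdims=True)       # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_kd_loss(student_logits, teacher_logits_list, T=5.0, eps=1e-12):
    # Mean Kullback-Leibler divergence KL(teacher_k || student) over K teachers.
    p = softmax_T(student_logits, T)
    kls = [np.sum(q * np.log((q + eps) / (p + eps)), axis=-1)
           for q in (softmax_T(t, T) for t in teacher_logits_list)]
    return float(np.mean(kls))

# Example: a student against two (made-up) teacher outputs for one sample.
print(ensemble_kd_loss([2.0, 0.5, -1.0], [[2.2, 0.4, -0.9], [1.8, 0.7, -1.2]]))
```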
SNAPSHOTS OF ITERATIVE PRUNING
In contrast to previous works, which mainly focus on the aforementioned usage of iterative pruning (i.e., alleviating the noise of weight-importance estimation), we exploit the benefits of generating multiple models varying in structure and capacity to construct strong ensembles.
Figure 1: Overview of our approach to combine the advantages of knowledge distillation, ensembles of networks, and network pruning. At the start, we prune the filters/weights according to some criteria (ℓ1-norm, Taylor approximation, ...). With KESI, we retrain the pruned networks with a large learning rate and minimize the conventional supervised loss function. Once we achieve the desired pruning ratio, we use knowledge distillation to transfer the knowledge from the ensemble of snapshots of iterative pruning to the final model.
We are inspired by the prior works of Smith (2015); Loshchilov & Hutter (2016), in which the authors show that promising local optima can be found in a small number of epochs after restarting the learning rate. Furthermore, Huang et al. (2017) demonstrate that utilizing large learning rate restarting during training can construct strong ensembles without much additional cost.
Broadly speaking, the accuracy of an ensemble depends on two factors: the accuracy of the individual networks and the diversity among them. On the other hand, network pruning generates snapshots that vary in structure while achieving competitive performance. Hence, if the pruned networks achieve minimal loss in predictive power relative to the original networks, ensembles of them can outperform ensembles of networks having identical architecture (and trained with large learning rate restarting).
Prior works such as Han et al. (2015); Liu et al. (2018); Molchanov et al. (2016) retrain the pruned networks for T more epochs with a fixed learning rate, which is usually the final learning rate of training. However, this approach might result in multiple snapshots stuck in similar local optima, thus leading to very weak ensembles, as shown in our experiments. Similar to Huang et al. (2017), we adopt large learning rate restarting at every pruning iteration to encourage each snapshot to converge to a different optimum. For learning rate restarting, we utilize the one-cycle policy Smith & Topin (2019), which is proven to increase the convergence speed of several models. Due to the similarity of our proposed method and Snapshot Ensembling Huang et al. (2017), we refer to each pruning-and-retraining step as a cycle. The one-cycle policy adjusts the learning rate at each mini-batch update and has two phases, as sketched in the code below.
INCREASING LEARNING RATE The learning rate and momentum of the optimizer are initialized to η_initial and β_initial respectively. During the first T iterations of fine-tuning, the learning rate and momentum gradually increase from their initial values to η_max and β_max. With the cosine annealing strategy, the learning rate at the i-th step is η_i = η_initial + (η_max − η_initial)(1 − cos(πi/T))/2, and the momentum follows the analogous expression.
DECREASING LEARNING RATE After T iterations, the learning rate and momentum are gradually decreased from η_max and β_max to η_min and β_min over L − T iterations, where L is the total number of iterations for fine-tuning.
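A minimal sketch of the schedule just described (the cosine form follows the annealing strategy named above, and the learning rate values and 10% warm-up fraction match the retraining configuration reported in Section 6.1):

```python
import math

def one_cycle_lr(step, total_steps, eta_init=0.01, eta_max=0.1,
                 eta_min=1e-4, warmup_frac=0.1):
    # Phase 1: cosine warm-up from eta_init to eta_max over the first 10% of
    # steps. Phase 2: cosine decay from eta_max down to eta_min.
    warmup = max(1, int(warmup_frac * total_steps))
    if step < warmup:
        t = step / warmup
        return eta_init + (eta_max - eta_init) * (1 - math.cos(math.pi * t)) / 2
    t = (step - warmup) / max(1, total_steps - warmup)
    return eta_min + (eta_max - eta_min) * (1 + math.cos(math.pi * t)) / 2

schedule = [one_cycle_lr(s, 1000) for s in range(1000)]   # per-mini-batch LRs
```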
It is worth noticing that, differing from previous works Huang et al. (2017); Yang et al. (2019b), which use a cosine annealing schedule, the one-cycle policy also "warms up" the learning rate at the start of each cycle. In our experiments, warming up the learning rate is extremely important for achieving high accuracy with deep and large networks.
Surprisingly, retraining with the one-cycle policy not only generates significantly stronger ensembles, but also consistently outperforms the standard policy for both structured and unstructured pruning in terms of the predictive accuracy of each individual snapshot. We hypothesize that the (local) optima of pruned networks are actually far from those of the original networks; thus, a large learning rate is needed to guarantee the convergence of the pruned networks. We leave a rigorous evaluation of this phenomenon for future work.
EFFECTIVE PIPELINE FOR MODEL COMPRESSION
Since we already obtain strong ensembles during pruning, it is straightforward to distill the knowledge from them to the final pruned network. Our proposed pipeline can be summarized as follows:
1. TRAIN the baseline model to completion.
2. PRUNE redundant weights of the network based on some criteria.
3. RETRAIN the pruned network with a large learning rate.
4. REPEAT steps 2 and 3 until the desired compression ratio is reached.
5. DISTILL knowledge from the ensembles of snapshots of pruning.
From now on, we refer to our pipeline for model compression as Knowledge Distillation from Ensembles of Snapshots of Iterative Pruning (KESI). An overview of our approach is depicted in Figure 1. Our approach is extremely simple, easy to implement, and can be adopted with any pruning mechanism. We now discuss the reasons why ensembles of snapshots of pruning are naturally suited for knowledge distillation.
Quality of Teacher In knowledge distillation, the student can either learn to jointly optimize the supervised loss (Equation 2) and the knowledge distillation loss (Equation 4), or only optimize the distillation objective. In the former case, if the teacher is poorly trained, the two objectives will, mathematically speaking, conflict with each other. In the latter case, a poor teacher provides weak supervision (noisy labels), making it harder to learn from the student's perspective. Furthermore, ensembles provide more robust predictions on noisy-labeled datasets Lee & Chung (2019). It has also been shown that a powerful teacher might impair its student's performance when there is a large gap between their predictive powers. However, ensembles of snapshots of pruning consist of models varying in capacity. Hence, the teacher's predictions for hard-to-learn samples (because of their complexity or multi-modality) will have softer distributions, as the small networks cannot "remember" those samples and will be more uncertain about them.
In this work, we only investigate knowledge distillation from ensembles of fixed-weight teachers; however, one could also jointly train all models and allow them to guide each other, which is referred to as deep mutual learning Zhang et al. (2018).
EXPERIMENTS
We conduct experiments on CIFAR-10, CIFAR-100 Krizhevsky et al. (2009) and Tiny-Imagenet. We run each experiment 3 times and report the mean and standard deviation. In our experiments, we prune all networks in 5 cycles unless otherwise stated. The configurations used for training the baseline models are described in the supplementary document.
6.1 EXPERIMENT SETUP
PRUNING For structured pruning, we use ℓ1-norm based filter pruning Li et al. (2016) for simplicity. In each layer, a fixed number of filters having the smallest ℓ1-norm is pruned. Since the bulk of a network's parameters tends to lie in the last layers, we increase the percentage of filters pruned as the layers go deeper to achieve a higher compression ratio. For unstructured pruning, we exploit (global) magnitude-based weight pruning Han et al. (2015), i.e., pooling parameters across all layers and pruning the weights with the lowest magnitude. Specifically, we only prune weights in convolutional layers, similar to Liu et al. (2018); both criteria are sketched in the code below.
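The sketch assumes a conventional weight layout (output channels first) and a quantile-based global threshold; these are illustrative conventions rather than details taken from the released code.

```python
import numpy as np

def l1_filter_scores(conv_weight):
    # conv_weight: (out_channels, in_channels, kH, kW). Each filter is scored
    # by its l1-norm; the lowest-scoring filters in a layer are pruned.
    return np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)

def global_magnitude_masks(conv_weights, prune_ratio):
    # Pool all convolutional weights, find one global magnitude threshold,
    # and mask out (False) the smallest-magnitude weights network-wide.
    pooled = np.concatenate([np.abs(w).ravel() for w in conv_weights])
    thresh = np.quantile(pooled, prune_ratio)
    return [np.abs(w) > thresh for w in conv_weights]
```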
RETRAINING
The budget for fine-tuning in each cycle is T = 40 epochs on CIFAR and T = 25 epochs on Tiny-Imagenet, regardless of model architecture. In the standard policy, the learning rate is set to 0.001 and fixed during retraining. For the one-cycle policy, we set the initial learning rate η_initial = 0.01, gradually increase it to the maximum learning rate η_max = 0.1 within the first 10% of the total (retraining) epochs, then decrease it to the minimum learning rate η_min = 0.0001 for the remaining epochs. Other configurations are identical to those of training.
KNOWLEDGE DISTILLATION
We use the Adam optimizer for ensemble distillation since we find it gives better results than vanilla SGD in general. For knowledge distillation, we also adopt the one-cycle policy, where we set η_initial, η_max, and η_min to 1e−4, 1e−3, and 1e−6 respectively. We do not explicitly use regularization for knowledge distillation. Other configurations, e.g., batch size and number of retraining epochs, are similar to normal retraining. In our experiments, we use temperature τ = 5. The teachers, i.e., the ensembles of snapshots, consist of 6 models, including the original (unpruned) network and 5 snapshots of pruning.
RETRAINING WITH LARGE LEARNING RATE
We conduct experiments to empirically evaluate the performance of pruned networks retrained with a large learning rate compared to networks fine-tuned with a small learning rate. Figures 2 and 3 show the results of pruned networks with different compression ratios for both structured and unstructured pruning. Exhaustive results are reported in the supplementary document.
PERFORMANCE OF ENSEMBLES OF SNAPSHOTS
We compare the performance of ensembles of snapshots obtained with different approaches: snapshots of pruned networks trained with a small learning rate, snapshots of pruned networks trained with large learning rate restarting, and snapshots of unpruned networks retrained with a large learning rate (i.e., all snapshots have the same architecture as the original network). Figure 4 presents the results of this experiment. We can see that although the capacity is reduced at each cycle, the ensembles of snapshots of iterative pruning achieve results competitive with or even better than snapshots of networks with the same architecture. Detailed results on the performance of ensembles are reported in the supplementary document.
PERFORMANCE OF COMPACT NETWORKS TRAINED WITH OUR PIPELINE
In this section, we demonstrate that the smaller models trained with our pipeline achieve comparable or even better results than the original models. Each final model is iteratively pruned and retrained in 5 cycles with different strategies. Tables 2 and 3 present the performance of the compact models on CIFAR-10, CIFAR-100 and Tiny-Imagenet. Specifically, we compare the iteratively-pruned models retrained with a small learning rate, with a large learning rate, and with our pipeline (i.e., large learning rate + knowledge distillation). Our pipeline consistently outperforms the standard strategy by a large margin on both structured and unstructured pruning.
Although our approach is general and can be applied to any (iterative) pruning mechanism, we also give a comparison of models trained with our pipeline and conventional approaches in Table 5.
Table 5: Results of iterative filter pruning on the CIFAR-10 and CIFAR-100 datasets. The SLR column presents the results of the pruned networks trained with a small, fixed learning rate, while the LLR column shows the results of the same networks trained with a large learning rate. | 2020-06-23T01:00:38.821Z | 2020-06-20T00:00:00.000 | {
"year": 2020,
"sha1": "a6b499136aa70dfad66fca5efaaaead5c616d59b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e6466feb728069311b172273d512485a0edd3ce7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
255861692 | pes2o/s2orc | v3-fos-license | Insecticide-treated durable wall lining (ITWL): future prospects for control of malaria and other vector-borne diseases
While long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS) are the cornerstones of malaria vector control throughout sub-Saharan Africa, there is an urgent need for the development of novel insecticide delivery mechanisms to sustain and consolidate gains in disease reduction and to transition towards malaria elimination and eradication. Insecticide-treated durable wall lining (ITWL) may represent a new paradigm for malaria control as a potential complementary or alternate longer-lasting intervention to IRS. ITWL can be attached to inner house walls, remain efficacious over multiple years and overcome some of the operational constraints of first-line control strategies, specifically the nightly behavioural compliance required of LLINs and the recurrent costs and user fatigue associated with IRS campaigns. Initial experimental hut trials of insecticide-treated plastic sheeting reported promising results, achieving high levels of vector mortality, deterrence and blood-feeding inhibition, particularly when combined with LLINs. Two generations of commercial ITWL have been manufactured to date containing either pyrethroid or non-pyrethroid formulations. While some Phase III trials of these products have demonstrated reductions in malaria incidence, further large-scale evidence is still required before operational implementation of ITWL can be considered either in a programmatic or more targeted community context. Qualitative studies of ITWL have identified aesthetic value and observable entomological efficacy as key determinants of household acceptability. However, concerns have been raised regarding installation feasibility and anticipated cost-effectiveness. This paper critically reviews ITWL as both a putative mechanism of house improvement and a more conventional intervention, and discusses its future prospects as a method for controlling malaria and other vector-borne diseases.
Background
In recent years considerable reductions in global malaria burden have been achieved by scaling-up key diagnostic, treatment and preventative measures [1]. Long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS) remain the cornerstones of malaria vector control, both targeting indoor feeding and resting mosquito vector populations [2][3][4][5]. Long-term effectiveness of these strategies is currently under threat from widespread emergence of insecticide resistance to pyrethroid LLINs [6,7], as well as to other chemical classes used for IRS [8,9]. Furthermore, maintaining high coverage at the community-level of either intervention can be operationally challenging. Universal coverage (UC) campaigns of LLINs have been adopted as the standard of care by most National Malaria Control Programmes (NMCPs) [1]; however, net usage is known to decline during hot seasons [10][11][12], and LLIN efficacy and durability under field conditions [13,14] and rates of household attrition are also of increasing concern [15,16]. In some epidemiological settings, IRS can be highly effective [1,17] but the short residual activities of most insecticide formulations [18] render it logistically demanding and economically unsustainable for many endemic countries [19]. To maintain and consolidate gains and to transition towards malaria elimination and eradication [20], there is a growing impetus to develop alternate or complementary interventions [4,5,21], novel insecticide classes [22,23], combinations [24,25], formulations [26,27] and cost-effective, scalable mechanisms of delivery [28][29][30], as well as to evaluate a potential role for concurrent housing improvement in disease control [31][32][33].
Initial experimental development and evaluation of insecticide-treated housing materials
Insecticide treatment of house or shelter materials was first pioneered as a method to control malaria during humanitarian emergencies in countries affected by war [34][35][36][37]. Impregnation of utilitarian tents or tarpaulins with deltamethrin was intended to circumvent the logistical difficulties of achieving high coverage with IRS or insecticide-treated nets (ITNs), producing high rates of mosquito mortality in experimental platform studies and pilot malaria control projects in Pakistan [35][36][37]. Early experimental hut evaluations of pyrethroid (deltamethrin or permethrin) and non-pyrethroid (pirimiphos-methyl, an organophosphate, or bendiocarb, a carbamate) treated plastic sheeting (ITPS) as an interior wall liner indicated that this intervention functions in a similar manner to IRS against host-seeking vectors entering indoors and alighting on walls either before or after blood-feeding, or if blocked from feeding by a mosquito net (Table 1). Only limited personal protection from biting was observed when ITPS was evaluated alone, suggesting disease control would instead be achieved through a 'mass effect' on vector density and longevity at the community-level [38,41,42,[46][47][48]. Depending upon the excito-repellent properties of the different insecticides used to treat ITPS, some studies also reported increased deterrence rates and exophily among susceptible mosquito populations, demonstrating the potential to directly interrupt human-vector contact and further contribute to a reduction in malaria transmission [38,41,42,46]. For the majority of entomological parameters, ITPS efficacy was correlated with intervention surface area, with increasing coverage affording higher rates of mortality, deterrence and blood-feeding inhibition [38,39,46].
Initial community-level trials of insecticide-treated housing materials
Following preliminary trials of experimentally-treated plastic materials (Table 1), commercial ITPS (ZeroFly®) was originally produced by Vestergaard Frandsen (Switzerland) as high-density laminated polyethylene sheets containing deltamethrin (55 mg/m²). Based on LLIN technology, the insecticide is incorporated into the polymer during manufacture and diffuses to the surface slowly, in a controlled fashion, acting as a long-lasting insecticide reservoir. Initial community-level evaluations of ZeroFly® ITPS in temporary labour shelters and villages in India [40,43] and among displaced populations in Sierra Leone [44] and Angola [45] supported the entomological outcomes reported by experimental hut trials, achieving significant reductions in malaria incidence (Table 2). Similar observations of the impact of coverage on intervention effectiveness were made in Sierra Leone, where protective efficacy against malaria improved from 15 to 60% when ITPS coverage increased from ceiling only to include all four tent walls [44]. However, when carbamate-treated ITPS was evaluated in combination with UC or targeted LLIN distribution among rural houses in Benin, no additional malaria protection was reported, potentially attributable to limited wall coverage (only the upper thirds of walls were covered due to insecticide safety concerns) and the short residual activity of a single treatment of bendiocarb [21].
Commercial development of insecticide-treated housing materials
The promising results demonstrated by ITPS stimulated an interest in developing a long-lasting, sustainable, community-level version for permanent use in malaria-endemic settings. Such a material would offer the prospect of a novel system of insecticide delivery, which could be more residual than IRS, provide a more uniform covering of the wall with insecticide and potentially improve the interior appearance of traditional dwellings, particularly in rural areas. To identify an acceptable wall lining material, three deltamethrin-treated prototypes (polyethylene woven shade cloth, laminated polyethylene plastic sheeting (ZeroFly®) and polyester netting (PermaNet® 2.0)) were assessed among urban and rural houses in Angola and Nigeria for their levels of household acceptability, installation feasibility and willingness to pay (Fig. 1) [52]. Rural participants highly favoured the concept of a wall lining for malaria control because of its observable impact on mosquitoes and other nuisance insects and its perceived decorative value, given an existing predilection for house decorations. Of the prototype materials, polyethylene shade cloth was the most popular because of its ease of installation and resemblance to local materials. Based on these pilot field trials, the original iteration of insecticide-treated durable wall lining (henceforth ITWL; referred to in previous publications as 'durable lining' or 'DL') was produced in the form of high-density polyethylene woven sheets containing deltamethrin (ZeroVector®; 175 mg/m²) (Fig. 1). Initial small-scale studies across multiple African and Asian countries demonstrated consistently high levels of user acceptability and entomological efficacy, with no significant loss of insecticidal activity over 1 year of household use [53,54]. However, no Phase III evaluation of this product was ever conducted due to the emergence of widespread pyrethroid resistance among vector populations across sub-Saharan Africa [6,7]. In response, the latest generation of commercial ITWL (PermaNet® Lining; Vestergaard Frandsen) was designed as a non-woven, high-density polypropylene fabric containing a proprietary mixture of two non-pyrethroid insecticides (abamectin 0.25% and fenpyroximate 1%) to potentially mitigate insecticide resistance (Fig. 1). This product is currently the subject of an ongoing cluster-randomized controlled trial in an area of pyrethroid resistance in rural North-East Tanzania, in comparison with UC of LLINs, assessing whether this version of ITWL can provide additional protection from malaria [55].
A potential role for insecticide-treated housing materials in resistance management
Now that pyrethroid resistance is pervasive across Africa, there has been a policy shift away from pyrethroid IRS towards the restriction of this insecticide class to LLINs, for which there are currently no approved alternatives [49]. Because the 'mode of action' of ITWL is analogous to a long-lasting IRS and Africa has become a LLIN-using continent, the combined use of ITWL and LLINs may have resistance management potential. In areas with pyrethroid-resistant vector populations, the role of ITPS/ITWL plus LLINs or IRS in mitigating the selection of resistant genotypes was investigated in experimental settings. Theoretically, combining interventions with different active ingredients can improve vector control because mosquitoes which are resistant to the insecticide in one intervention may be susceptible to the chemical class contained in the other. Several studies demonstrated that the combination of ITPS and LLINs can increase mortality, blood-feeding inhibition and personal protection, the latter largely provided by LLINs [41,48], but that ITPS, when used alone, may select for resistant vectors, as evidenced by higher proportions of mosquitoes carrying resistance genes surviving in ITPS-treated huts [41,42,47,48]. The difference in selection pressures likely reflects the different stages of the gonotrophic cycle which ITPS and LLINs disrupt. Host-seeking mosquitoes encountering a LLIN may persist in their attempts to feed, either by making more flights between the treated walls and the netted sleeper, increasing their chances of exposure to a lethal dose of the non-pyrethroid insecticide in the ITPS, or by probing for longer on the surface of the pyrethroid LLIN, particularly if they have a degree of pyrethroid resistance and are less irritated. In this scenario, a proportion of females resistant to either insecticide would be killed. However, in the absence of a LLIN, once successfully fed, females become relatively quiescent and alight on the walls, where differential selection between susceptible and resistant genotypes by the ITPS insecticide occurs. This explanation is plausible in Burkina Faso, where resistance to the ITPS insecticide was rare and was selected by the ITPS when applied alone but not when ITPS was combined with LLINs [48]. However, in Côte d'Ivoire, where the baseline frequency of resistance to the organophosphate-containing ITPS was higher and where multiple resistance mechanisms to this chemical class were present [56], the same combination of interventions as applied in Burkina Faso did not significantly increase mosquito mortality over ITPS or LLIN alone, and did not limit the selection of resistant genotypes [47]. Hence, the resistance management potential of combining ITWL and LLINs is not a foregone conclusion but appears to depend on the mechanisms and frequency of resistance already present in a locality or country as a result of previous selective pressures. These studies caution against the application of ITWL in areas with resistant vectors in the absence of high community-level net coverage to safeguard the continuing personal protection afforded by LLINs.
Key determinants of community-level ITWL acceptability
The principal rationales of ITWL, which render it an attractive alternative to IRS, are its longevity, its provision of protection to LLIN non-compliers and its potential to overcome the user and donor fatigue associated with repeated rounds of spraying. Consequently, the majority of recent ITWL studies have focused on identifying key determinants of acceptability and the operational feasibility of implementing this intervention in endemic areas (Table 3). In general, themes of decorative value, ownership prestige, few noticeable adverse events and immediate and sustained entomological efficacy have all been reported to positively affect participant receptivity and compliance [52,53,57]. The relative influence of these factors on levels of community acceptability varies between study sites. In Angola, despite householders initially commending ITWL for improving their house aesthetics, once the material was considered ineffectual, the majority of participants removed theirs [52]. By contrast, in a multi-centre trial, respondents unanimously reported wanting to keep their ITWL even if it had no impact at all on mosquito populations or other nuisance insects [53].
Other attractive features of ITWL described in these studies include the concept of a single intervention that would alleviate the daily inconvenience of multiple control measures, its role as an additional building material to block holes in walls and reduce draughts, noise and dust, and the ease with which it can be removed and re-installed when certain communities participate in annual house renovations, particularly re-smearing walls with mud during festive periods [57,59]. Common aspects of ITWL which were causes for concern amongst householders were its impact on house ventilation, possible flammability, fragility (especially in the context of damage caused by children) and how long-term exposure to smoke from internal, unventilated fires may affect its aesthetics, durability and insecticidal efficacy. Finally, a more unexpected, negative outcome reported in several sites was the collateral cessation of LLIN use and other methods of disease control, as ITWL was perceived to be either a sufficient or a superior malaria prevention strategy [57][58][59]. These observations clearly demonstrate that application of this intervention must be accompanied by re-iterative community sensitization to sustain the use of all available control measures.
Future prospects of ITWL for malaria control: control intervention or method of house improvement?
In the absence of unequivocal evidence to support ITWL as an alternate control measure to IRS, the questions remain: how will this intervention function to reduce malaria, in what epidemiological situation will it warrant implementation and how will it be executed at scale? There is increasing evidence to support a crucial role for housing improvement in malaria control [31-33, 60, 61]. It can be envisaged that ITWL could act as an effective, insecticidal method of house, and in particular eave, screening, if affixed to the base of the roof or ceiling and proven to have long-term durability. However, with concomitant housing, social and economic development, will potential communities still accept ITWL as readily based on its perceived aesthetics? Reports from more affluent urban residents in Nigeria suggest this might not be the case [50]. Alternatively, even if ITWL were to be proven effective and applied in a similar manner to IRS, there are considerable implications for installation logistics. Previously, ITWL has been primarily installed using locally-sourced nails, often covered with plastic caps to improve wall grip [62]. Installation time, which accounts for the time taken to attach the material to house walls as well as preparation (removal of all household and wall items) and clean-up, is largely correlated with overall house size, construction and the number of rooms to be covered. From an economic perspective, lengthy or highly variable installation times among communities containing heterogeneous house constructions will have repercussions on intervention cost-effectiveness, potentially requiring financing mechanisms that many African countries lack [63]. By comparison to IRS, for which insecticide is estimated to cost as little as $5 for pyrethroid (ICON™ lambda-cyhalothrin capsule suspensions) to $23.50 for organophosphate sachets (Actellic CS 3000) [64], ITWL installation also requires the purchase, temporary storage and transportation of large ITWL rolls (measuring 2.4 × 210 m and weighing 40 kg each), supporting fixings and resources (e.g. nails, hammers, tape measures, step ladders etc.), often to remote and inaccessible locations. In this scenario, unlike IRS, the cost of contracting and deploying specialist installation teams by NMCPs would likely be financially prohibitive.
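To make the logistics argument concrete, the sketch below works through the transport arithmetic implied by the figures above. Only the roll dimensions and weight come from the text; the wall area per house is a purely hypothetical planning assumption, not a figure from the cited studies.

```python
# Back-of-envelope ITWL transport arithmetic (illustrative only).
# Roll dimensions (2.4 m x 210 m, 40 kg each) come from the text;
# WALL_AREA_PER_HOUSE_M2 is a hypothetical planning assumption.
import math

ROLL_WIDTH_M = 2.4
ROLL_LENGTH_M = 210.0
ROLL_WEIGHT_KG = 40.0
WALL_AREA_PER_HOUSE_M2 = 60.0  # assumed small rural dwelling, full wall coverage

roll_area_m2 = ROLL_WIDTH_M * ROLL_LENGTH_M  # 504 m^2 of lining per roll

def rolls_needed(n_houses: int) -> int:
    """Whole rolls required to line n_houses under the assumptions above."""
    return math.ceil(n_houses * WALL_AREA_PER_HOUSE_M2 / roll_area_m2)

for n in (100, 1000):
    r = rolls_needed(n)
    print(f"{n} houses -> {r} rolls, ~{r * ROLL_WEIGHT_KG:.0f} kg to transport")
```

Even under this optimistic assumption, a campaign of 1,000 houses would require moving roughly 120 rolls (several tonnes of material plus fixings), which illustrates why transport to remote locations weighs so heavily on cost-effectiveness.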
Other, as yet unanswered, issues include just how much of a wall or house must be covered with ITWL to impact disease transmission, whether ITWL coverage could be restricted to sleeping rooms with only limited loss of effectiveness, and how high-quality intervention installation and community maintenance can be ensured and monitored, as ITWL is expected to function for multiple years without external upkeep or interference. Moreover, should ITWL durability be assessed in terms of overall household-level coverage, given it will likely impact malaria transmission like IRS, through a reduction in overall vector population density, or, because of its long-lasting LLIN-like properties, will the formation of holes from daily household wear and tear also impact efficacy? Given its higher cost, ITWL is unlikely to be considered for widespread programmatic implementation but instead may be more appropriate as a method to control malaria in areas where pyrethroid-resistant vectors predominate, or to reduce epidemic hot spots of transmission [20,65]. Unlike vertical IRS programmes and mass LLIN distributions, potential delivery systems for ITWL could utilize a combination of social mobilization and microfinancing or subsidization, designating direct responsibility for installation and maintenance to community members.
Future prospects of ITWL for control of other vector-borne diseases
To date, ITWL has primarily been evaluated for its effectiveness as a malaria control strategy. However, there are fundamental features underlying the biology of other vector-borne diseases where ITWL could also play a critical role in interrupting disease transmission. Leishmaniasis remains an important neglected tropical disease, with an estimated 350 million individuals at risk worldwide [66]. Vector management is one of the principal disease control strategies, targeting putative resting sites of phlebotomine sand flies, usually with IRS [67]. In addition to all of the aforementioned limitations of IRS, because some vector species display crepuscular feeding activity, LLINs can also be ineffective in these endemic countries [68]. Recently, the efficacy of ZeroVector® ITWL was investigated in a multi-centre study in Bangladesh, India and Nepal, demonstrating high levels of sand fly mortality and household acceptability and decreases in vector density over 12 months of household use [69,70]. However, no epidemiological endpoints to assess the impact of ITWL on the incidence of visceral leishmaniasis were measured, indicating that further evaluations of this intervention are still needed. ITWL also warrants consideration as a supplementary intervention to control Chagas disease, which is transmitted by highly domiciliated triatomine bug vectors inhabiting cracks in the walls of rural adobe houses across Latin America [71]. Despite substantial reductions in disease incidence achieved through historic large-scale trans-national IRS campaigns, active transmission persists, particularly in the Gran Chaco, where rapid domestic re-infestation abounds and insecticide resistance is increasing; both of which are exacerbated by decentralized regional control efforts in areas of recurrent political, social and economic instability [72]. While ITWL has yet to be directly evaluated against Chagas disease, insecticidal vinyl paints containing an organophosphate and a juvenile hormone analogue (Inesfly 5A IGR®), based on similar principles to ITWL, have thus far reported encouraging experimental results [73,74] and long-term reductions in levels of household triatomine infestation [75,76].
Conclusions
Insecticide-treated durable wall lining (ITWL) is a novel method of vector control which, when attached to inner house walls, remains efficacious over multiple years and can circumvent some of the logistical constraints associated with first-line control strategies. To date, there are substantial Phase II data indicating that ITWL can impact malaria vector populations, with complete wall coverage affording the highest rates of mosquito mortality, deterrence and blood-feeding inhibition in experimental hut trials. However, there is currently limited Phase III evidence to support operational implementation of ITWL, either as a control intervention in a programmatic context or as an insecticidal method of house improvement or eave screening. While aesthetic value and observable entomological efficacy are key determinants of acceptability, additional studies are still required to determine feasible and cost-effective financing mechanisms of installation to sustain ITWL durability during long-term field use. Further large-scale community-level trials are warranted to support the development and evaluation of ITWL as a potential alternate control strategy for malaria and other vector-borne diseases.
Authors' contributions
LAM and MR co-drafted the manuscript. Both authors read and approved the final manuscript.
"year": 2017,
"sha1": "c2806ad1af8dccfd50313c43bdcb7b17c82c00d7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12936-017-1867-z",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "c2806ad1af8dccfd50313c43bdcb7b17c82c00d7",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": []
} |
Which insulin resistance-based definition of metabolic syndrome has superior diagnostic value in detection of poor health-related quality of life? Cross-sectional findings from the Tehran Lipid and Glucose Study
Background The superiority of the diagnostic power of different definitions of metabolic syndrome (MetS) in detecting objective and subjective cardiovascular outcomes is under debate. We sought to compare the diagnostic values of different insulin resistance (IR)-based definitions of MetS in detecting poor health-related quality of life (HRQoL) in a large sample of Tehranian adults. Methods This cross-sectional study was conducted within the framework of the Tehran Lipid and Glucose Study on a total sample of 742 individuals aged ≥ 20 years. Metabolic syndrome was defined according to the World Health Organization (WHO), the European Group for the study of Insulin Resistance (EGIR) and the American Association of Clinical Endocrinology (AACE) criteria. Health-related quality of life was assessed using the Short Form Health Survey (SF-36). Logistic regression analysis and Receiver Operating Characteristic (ROC) curves were used to investigate the impact of the three IR-based definitions of MetS on HRQoL and to compare their discriminative powers in predicting poor HRQoL. Results Compared with the other definitions, the WHO definition identified more participants with MetS (41.8 %). Although the AACE definition had higher adjusted odds ratios for reporting poor physical HRQoL (OR: 1.95; CI: 0.84–4.53 and OR: 1.01; CI: 0.55–1.85 in men and women, respectively) and mental HRQoL (OR: 0.97; CI: 0.41–2.28 and OR: 1.00; CI: 0.56–1.79 in men and women, respectively), none of the three studied definitions was significantly associated with poor physical or mental HRQoL in either gender; nor did ROC curves show any significant difference in the discriminative powers of the IR-based definitions in detecting poor HRQoL in either gender. Conclusions None of the three studied IR-based definitions of MetS could significantly detect poor HRQoL in the physical or mental domains, indicating no significant superior diagnostic value for any of these definitions.
Background The superiority of the diagnostic power of different definitions of metabolic syndrome (MetS) in detecting objective and subjective cardiovascular outcomes is under debate. We sought to compare diagnostic values of different insulin resistance (IR)-based definitions of MetS in detecting poor health-related quality of life (HRQoL) in a large sample of Tehranian adults. Methods This cross-sectional study conducted within the framework of the Tehran Lipid and Glucose Study on a total sample of 742 individuals, aged ≥ 20 years. Metabolic syndrome was defined according to the World Health Organization (WHO), the European Group for the study of Insulin Resistance (EGIR), and the American Association of Clinical Endocrinology (AACE). Health-related quality of life was assessed using the Short Form Health Survey (SF-36). Logistic regression analysis and Receiver Operating Characteristic (ROC) curve were used to investigate the impact of the three IR-based definitions of MetS on HRQoL and compare their discriminative powers in predicting poor HRQoL. Results Compared with other definitions, the WHO definition identified more participants with MetS (41.8 %). Although the AACE definition had higher adjusted odds ratios for reporting poor physical HRQoL (OR: 1.95; CI: 0.84–4.53 and OR: 1.01; CI: 0.55–1.85 in men and women respectively) and mental HRQoL (OR: 0.97; CI: 0.41–2.28 and OR: 1.00; CI: 0.56–1.79 in men and women respectively), none of the three studied definitions were significantly associated with poor physical or mental HRQoL in either gender; nor did ROC curves show any significant difference in the discriminative powers of IR-based definitions in detecting poor HRQoL in either gender. Conclusions None of the three studied IR-based definitions of MetS could significantly detect poor HRQoL in the physical or mental domains, indicating no significant superior diagnostic value for any of these definitions.
Background
Metabolic syndrome (MetS), an escalating health issue worldwide, is a constellation of metabolic abnormalities which are major risk factors for developing cardiovascular disease (CVD) and type 2 diabetes [1]. Rapidly accumulating evidence shows the negative effect of MetS on health-related quality of life (HRQoL), a subjective, patient-centered health measurement that concentrates on the individual's own perception of their health status and life satisfaction [2]. Recent data reveal that the prevalence of the syndrome is increasing rapidly in different populations [3]; in Iranian adults, it is reported to be approximately 34.7 % [4].
Although much research has been conducted in recent years, there is uncertainty regarding the concept of MetS, and critical investigations have questioned whether the syndrome is a mere aggregation of metabolic abnormalities or a syndrome representing a distinct clinical entity [5]. This doubt has resulted in the introduction of several definitions whose risk factor components are not entirely similar and, more importantly, whose cut-offs differ. Insulin resistance (IR), the most accepted pathophysiology of MetS, is likely a significant link between the components of this syndrome [1]. On the other hand, obesity, which is included in most definitions of MetS, is identified as the most important correlate of the increasing prevalence of MetS [1]; in addition, it has been proposed that IR is significantly related to central obesity [6]. Among the proposed definitions, those of the World Health Organization (WHO) [7], the European Group for the study of Insulin Resistance (EGIR) [8] and the American Association of Clinical Endocrinology (AACE) [9] emphasize IR as the major component of MetS, whereas the National Cholesterol Education Program Adult Treatment Panel III (NCEP-ATP III), the American Heart Association/National Heart, Lung, and Blood Institute (AHA/NHLBI), the International Diabetes Federation (IDF) and the joint interim statement (JIS) emphasize waist circumference (WC) [10].
Considering the above-mentioned ambiguity, several efforts have been made to explore the superior diagnostic powers of the WC- and IR-based definitions of MetS in detecting objective and subjective cardiovascular outcomes, i.e. CVD and HRQoL. Regarding objective CVD complications, evidence shows that, except for the EGIR definition, the NCEP/AHA, AACE, IDF and modified WHO definitions are all associated with cardiovascular events in an elderly American population [11]. The results of a recent study of the Tehran Lipid and Glucose Study (TLGS) population revealed no differences among the diagnostic values of WC-based definitions of MetS in detecting coronary heart disease (CHD) and CVD [12]. Furthermore, in a Dutch population, compared to the WC-based definitions of MetS, IR-based definitions had lower hazard ratios in detecting cardiovascular events [13]. In the field of subjective outcomes, an association between WC-based definitions of MetS and HRQoL, especially in the physical domain and mainly in women, has been reported [14]. We recently showed that these definitions failed to show any superiority in discriminative power in detecting poor HRQoL in Tehranian adults without diabetes [15]. Although the association between IR and HRQoL has been documented [16], there is no study reporting the association between IR-based definitions of MetS and HRQoL. As one of the first efforts, this study aimed to investigate the diagnostic values of three IR-based definitions of MetS, namely the WHO, EGIR and AACE definitions, in detecting poor HRQoL in a large population of Tehranian adults.
Subjects and design
The current study was conducted within the framework of the TLGS, a large-scale, ongoing, community-based prospective study performed on a representative sample of residents of district 13 of Tehran, the capital of Iran. Details of the rationale and design of the TLGS have been published elsewhere [17]. The TLGS has two major components: 1) a cross-sectional prevalence study of non-communicable diseases and their associated risk factors, implemented from 1999 to 2001, and 2) a prospective follow-up study in which risk factors for non-communicable diseases are measured approximately every 3 years.
In the current study, a total of 742 individuals aged ≥ 20 years, with insulin measurements, who participated in the TLGS between September 2005 and September 2007 (the second follow-up), were recruited. Data from these 742 participants were analyzed using the WHO definition. For the EGIR and AACE definitions, after excluding 135 persons with diabetes and 17 persons with missing data, 590 persons met the inclusion criteria and their data were analyzed. The study was approved by the ethics committee of the Research Institute for Endocrine Sciences, Shahid Beheshti University of Medical Sciences, and written informed consent was obtained from all participants.
HRQoL measures
To assess HRQoL, we used the Iranian version of the Short Form Health Survey (SF-36), which has been validated in Iran [18]. This widely used questionnaire contains 36 questions summarized into eight subscales: four physical health-related subscales (physical functioning, role limitations due to physical health problems, bodily pain and general health) and four mental health-related subscales (vitality, social functioning, role limitations due to emotional problems and mental health). The physical subscales are summarized as the physical component summary (PCS), and similarly the four mental subscales are summarized as the mental component summary (MCS) [19]. The score attributed to each subscale ranges from 0 to 100, representing the worst and the best health conditions, respectively. The PCS and MCS scores were calculated using the QualityMetric Health Outcomes Scoring Software 2 [20].
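As an illustration of the scoring logic described above, the following sketch shows the generic linear 0-100 transformation applied to an SF-36 subscale. The actual PCS and MCS summaries rely on the proprietary norm-based weights of the QualityMetric software named in the text, so the raw score range below is a placeholder rather than a licensed scoring table.

```python
# Generic SF-36 subscale rescaling to the 0-100 range (0 = worst, 100 = best).
# The PCS/MCS component summaries use proprietary norm-based weights (handled
# by the QualityMetric software mentioned in the text), so only the subscale
# transformation is sketched here; the raw score range below is hypothetical.

def subscale_score(raw_sum: float, raw_min: float, raw_max: float) -> float:
    """Linearly transform a raw subscale sum onto the 0-100 SF-36 scale."""
    return 100.0 * (raw_sum - raw_min) / (raw_max - raw_min)

# Hypothetical subscale whose raw sums can range from 10 to 30:
print(subscale_score(raw_sum=22, raw_min=10, raw_max=30))  # -> 60.0
```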
Definitions
Metabolic syndrome was defined according to the WHO [7], EGIR [8] and AACE [9] criteria (Table 1). Based on the criteria of the American Diabetes Association (ADA), diabetes was defined as fasting plasma glucose ≥ 7 mmol/L, 2-h post 75 g glucose load ≥ 11.1 mmol/L or receiving antidiabetic medications [21]. Insulin resistance was defined as the upper quartile of the insulin level of the baseline population from the second follow-up of the TLGS. After excluding patients with diabetes, those who had received an educational intervention, and participants of the current study, the cut point for IR was calculated. The median [range] insulin concentration was 7.45 mU/ml [0.2-51.84], with an upper-quartile cut point of ≥ 10.63 mU/ml. Menopause was defined as the cessation of menstrual periods for 12 consecutive months, not due to surgery or any other biological or physiological causes [22]. Impaired glucose regulation was defined according to the criteria of the ADA as fasting blood glucose 5.6 mmol/L to 6.9 mmol/L or 2-h post 75 g glucose load 7.8 mmol/L to 11.1 mmol/L [21]. Smoking status was considered in two groups: 1) non- and ex-smokers and 2) current smokers. Additional information regarding age, physical activity [23] and current use of oral hypoglycemic agents, lipid-lowering and anti-hypertensive medication was obtained from the TLGS data.
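A minimal sketch of how the upper-quartile insulin cut-off described above could be computed is shown below. The fasting insulin values are fabricated placeholders; only the reported cut point of ≥ 10.63 mU/ml comes from the study.

```python
# Upper-quartile definition of insulin resistance (IR), as described above.
# The fasting insulin values here are fabricated placeholders; in the study
# the resulting cut point was >= 10.63 mU/ml.
import numpy as np

def ir_cutoff(fasting_insulin: np.ndarray) -> float:
    """75th percentile of fasting insulin (mU/ml) in the reference population."""
    return float(np.percentile(fasting_insulin, 75))

insulin = np.array([4.2, 6.1, 7.4, 8.0, 9.9, 11.2, 12.5, 15.3])  # placeholder
cut = ir_cutoff(insulin)
is_ir = insulin >= cut  # boolean IR flag per participant
print(cut, is_ir)
```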
Other measures
Waist circumference was measured at the umbilical level, over light clothing, using an unstretched tape meter without any pressure on the body surface, and measurements were recorded to the nearest 0.1 cm. Blood pressure was measured twice, after participants had been seated for 15 min, using a standard mercury sphygmomanometer; there was an interval of at least 30 s between the two separate measurements, and their mean was recorded as the participant's blood pressure. Twelve-hour fasting blood samples were collected in tubes containing 0.1 % EDTA and were centrifuged at 4 °C and 500 × g for 10 min to separate the plasma. Blood glucose was measured on the day of blood collection by an enzymatic colorimetric method using glucose oxidase. Fasting serum insulin was determined by the electrochemiluminescence immunoassay (ECLIA) method, using Roche Diagnostics kits and the Roche/Hitachi Cobas e-411 analyzer (GmbH, Mannheim, Germany). Serum total cholesterol and triglyceride concentrations were measured with commercially available enzymatic reagents (Pars Azmoon, Tehran, Iran) adapted to a Selectra autoanalyzer. High-density lipoprotein cholesterol (HDL-C) was measured after precipitation of the apolipoprotein B-containing lipoproteins [17].
Statistical analysis
Using graphical methods, continuous variables were checked for normality and are expressed as mean ± SD. Normally and non-normally distributed variables were compared between the two groups using the independent-samples t-test and the Mann-Whitney U test, respectively, and categorical variables, reported as percentages, were compared using the χ² test.
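The following sketch illustrates, with placeholder data, the three bivariate comparisons described above using SciPy; all variable names and values are hypothetical.

```python
# Bivariate comparisons with placeholder data: t-test for normally distributed
# variables, Mann-Whitney U for skewed variables, chi-squared for categorical.
import numpy as np
from scipy import stats

men_wc = np.array([88.0, 92.5, 95.1, 90.3, 99.7])    # e.g. waist circumference
women_wc = np.array([81.2, 85.4, 79.9, 88.0, 83.3])
t_stat, p_t = stats.ttest_ind(men_wc, women_wc)

men_tg = np.array([1.1, 2.9, 1.5, 4.2, 1.8])         # e.g. skewed triglycerides
women_tg = np.array([0.9, 1.2, 2.1, 1.0, 1.6])
u_stat, p_u = stats.mannwhitneyu(men_tg, women_tg)

# e.g. education above high school (yes/no) by gender
table = np.array([[120, 80],    # men
                  [90, 140]])   # women
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(p_t, p_u, p_chi)
```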
Poor HRQoL was defined as the first tertile of the PCS or MCS, and logistic regression analysis was used to compute odds ratios (ORs). Sex-specific ORs with 95 % confidence intervals were computed for men and women separately; model 2 was adjusted for age (years), and model 3 was adjusted for age, smoking (only in men; Ref: never smoked or ex-smoker), education (Ref: above high school), menopause (only in women; Ref: reproductive age) and marital status (Ref: married). In women, smoking was not adjusted for in model 3 because of the low prevalence of current smoking among women.
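A hedged sketch of the adjusted logistic models described above, using the statsmodels formula interface on simulated data, is given below. All column names and the data themselves are hypothetical; only the model structure (poor HRQoL as the first tertile, MetS plus covariates) mirrors the text.

```python
# Adjusted logistic model on simulated data: odds of being in the first
# (lowest) tertile of the PCS for MetS status, adjusted for covariates.
# All columns and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "mets": rng.integers(0, 2, n),          # MetS under one definition (0/1)
    "age": rng.normal(45, 12, n),
    "education_low": rng.integers(0, 2, n), # Ref: above high school
    "married": rng.integers(0, 2, n),       # Ref: married
    "pcs": rng.normal(50, 10, n),
})
df["poor_pcs"] = (df["pcs"] <= df["pcs"].quantile(1 / 3)).astype(int)

fit = smf.logit("poor_pcs ~ mets + age + education_low + married", df).fit(disp=0)
or_mets = np.exp(fit.params["mets"])
ci_low, ci_high = np.exp(fit.conf_int().loc["mets"])
print(f"OR = {or_mets:.2f}, 95% CI: {ci_low:.2f}-{ci_high:.2f}")
```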
Results
General metabolic and clinical characteristics of the study participants are listed in Table 2. Compared to women, men had higher mean levels of WC, SBP and DBP (P < 0.001) but lower HDL-C (P < 0.001); rates of higher education and of being married were also significantly higher in men (P < 0.001). Among the different MetS definitions, the most participants (41.8 %) met the WHO definition, followed by the AACE (30.7 %) and EGIR (25.6 %) criteria. In both the physical and mental domains, men had higher scores in all SF-36 subscales, indicating better HRQoL; however, the role emotional subscale score was higher in women according to all three definitions. Among the subscales, physical functioning and social functioning received the highest scores in men and women, respectively, according to all three definitions (Table 3). Table 4 shows the risk of being in the lowest tertile of the PCS and MCS according to MetS status by each of the three definitions. There was no remarkable difference between the rates of poor HRQoL across the definitions or between the genders. Unadjusted odds ratios (95 % CI) for poor physical health for the WHO, EGIR and AACE definitions were 1.89 (1.00-3.57), 2.04 (0.83-5.03) and 1.75 (0.82-3.71) in men, respectively, while in women they were 1.51 (0.97-2.37), 1.21 (0.67-2.19) and 1.56 (0.92-2.64), respectively, significant only in men based on the WHO definition (Table 4).
After adjustment for confounding variables, all definitions showed higher odds ratios for poor physical HRQoL in men than in women (Table 4).
After adjustment for confounding variables, ROC analysis showed no significant superiority in the discriminatory powers of the three MetS definitions in detecting poor HRQoL in the physical or mental domains in either gender. However, women showed higher AUCs than men for all definitions, especially in physical health (Table 4).
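To illustrate how such an AUC comparison could be set up, the sketch below fits one covariate-adjusted model per definition on simulated data and reports each model's AUC. Data and definition columns are hypothetical, and a formal test (e.g. DeLong or a bootstrap) would be needed to compare AUCs statistically, which is beyond this sketch.

```python
# One covariate-adjusted logistic model per MetS definition on simulated data,
# reporting the AUC of each model's predicted probabilities for poor HRQoL.
# Data and definitions are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 300
covars = rng.normal(size=(n, 3))     # e.g. age, education, marital status
y = rng.integers(0, 2, n)            # 1 = first tertile of PCS or MCS

aucs = {}
for name in ("WHO", "EGIR", "AACE"):
    mets = rng.integers(0, 2, (n, 1))          # MetS under this definition
    X = np.hstack([mets, covars])
    proba = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    aucs[name] = roc_auc_score(y, proba)

print(aucs)  # in-sample AUCs; the study compared ROC curves across definitions
```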
Discussion
In the current study, after adjusting for confounding variables, none of the three IR-based definitions of MetS could significantly detect poor HRQoL in the physical or mental domains. Moreover, ROC analysis showed no significant superior diagnostic value for any of the three definitions; however, the WHO definition identified more patients with MetS (41.8 %) than the other IR-based definitions. Although previous studies have documented the predictive value of different WC-based definitions of MetS for the objective and subjective outcomes of CVD and HRQoL [13,15], to the best of our knowledge, this is the first report comparing the diagnostic values of the IR-based definitions of this syndrome in detecting poor HRQoL. Findings from studies investigating the impact of MetS on the risk of CVD, as a measurable outcome, show both similarities and differences. Based on previous findings, compared to the IR-based definitions, the WC-based definitions of MetS showed superior predictive value for non-fatal CVD among an elderly Dutch population [13]; consistent with this, another study showed that the WHO and EGIR definitions were associated with a lower risk of all-cause CVD than WC-based definitions in men enrolled in a multiethnic study [24]. A study conducted in Turkey reported related findings [25].
Although in our previous study the association between poor physical HRQoL and different WC-based definitions of MetS was significant in women but not in men [14], in the current study it is interesting to note that the odds of having poor physical HRQoL are lower in women than in men, a sex-specific difference which may be related to the use of IR as the main component of the MetS definitions used in this study. In this regard, a study by Schlotz et al. assessing the relation between IR and HRQoL showed a significant association between poor HRQoL and some subscales of the PCS in men but not in women, after controlling for confounding variables [16]. Based on the findings of Masharani et al., this sex-specific difference in the association between IR-based definitions of MetS and HRQoL may be due to sex differences in the associations between insulin resistance, regional adipose stores and lipid values, which may result in a higher prevalence of obesity and high WC in men in association with IR [27]; this is while previous studies revealed that abdominal obesity is the major component of the association between MetS and poor HRQoL in women [28]. On the other hand, in type 2 diabetes, which is one of the main manifestations of IR, women showed worse quality of life and mental well-being than men [29].
Based on our findings, none of the three IR-based definitions of MetS was associated with mental HRQoL, results consistent with those of our previous study, in which WC-based definitions of MetS showed no significant association with poor mental HRQoL in adult Tehranians [15]. Moreover, previous studies have shown that IR and its related measures are associated with poor HRQoL in the domains of physical health but not in the domains of mental health [16]. The association between IR and depression has been shown in some previous studies, including a recent meta-analysis in which a small but significant association was observed between IR and depression [30], an observation not found in our study.
To the best of our knowledge, this is the first report comparing the diagnostic impact of different IR-based definitions of the metabolic syndrome in a large sample of adults in the general population. Our study has some limitations. First, it was conducted using a cross-sectional design, so we are unable to draw conclusions regarding a causal association between MetS and HRQoL. Second, there may be other confounding factors, such as depression and economic status, that affect HRQoL and that we did not adjust for. Third, since the three definitions of MetS tended to be associated with poor physical HRQoL in men, the lack of statistically significant results could be related to low statistical power. Moreover, microalbuminuria, which is included in the WHO definition, was omitted from our study because of lack of data. Finally, in the AACE definition, due to inadequate information, we did not include acanthosis nigricans, polycystic ovary syndrome, non-alcoholic fatty liver disease, non-Caucasian ethnicity and sedentary lifestyle in the criteria needed for an individual to be considered at high risk of being insulin resistant.
Conclusions
Although the AACE definition showed higher odds of poor physical HRQoL than the WHO and EGIR definitions of MetS, none of the three definitions could detect poor HRQoL in either the physical or mental domains in men or women after adjusting for potential confounders. Accordingly, ROC analysis failed to show any significant superiority in the discriminatory power of any of the definitions over the others.
"year": 2015,
"sha1": "d3b3374de879b8a0c66ab78e13c8a2547942bc0e",
"oa_license": "CCBY",
"oa_url": "https://hqlo.biomedcentral.com/track/pdf/10.1186/s12955-015-0391-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3b3374de879b8a0c66ab78e13c8a2547942bc0e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Breaking monologues in collaborative research: bridging knowledge systems through a listening-based dialogue of wisdom approach
The urgent need to address the sustainability issues of the Anthropocene requires a dialogue capable of bridging different knowledge systems, values, and interests. This dialogue is considered one of the most crucial challenges in collaborative research approaches. With this research, we seek to break with monologues in collaborative research by offering a decolonising methodological approach that combines the notion of dialogue of wisdom, communication theories and ethical principles of Andean philosophy. The methodological framework, the circle of dialogue of wisdom, is the result of an iterative action–reflection process developed in a North–South collaborative research project for territorial planning in Bolivia. Our praxis confirms the potentials offered by a listening-based dialogue for (i) dealing with knowledge–power relations in collaborative research projects, (ii) promoting mutual learning and knowledge co-creation between different knowledge systems, (iii) re-valuating local and Indigenous knowledge, and (iv) decolonising the society–science–policy dialogue.
Introduction
Bridging the gap between diverse knowledge systems is becoming a priority and, at the same time, a significant challenge in the sustainability sciences (Johnson et al. 2016; Chilisa 2017; Hill et al. 2019). Over the last decades, there has been a growing awareness of the limitations of scientific knowledge and of the critical role that Indigenous and local knowledge (ILK) plays in dealing with real-world problems (Mistry and Berardi 2016; Díaz et al. 2018). Although an increasing number of scholars are interested in finding new pathways to bridge Western and Indigenous sciences, many of these collaborative research practices have unfortunately ended up reproducing dominant (colonial) schemes (Andreotti et al. 2015) and encapsulating fragments of ILK in Western paradigms (Muller 2012). This kind of practice engenders false expectations based on the illusion created by the buzzwords of co-creation, empowerment and participation (Phillips and Napan 2016). It produces fatigue in communities disappointed by participatory processes that do not generate real participation or empowerment (Charli-Joseph et al. 2018).
Over the last decades, an increasing number of scholars have been advocating for a profound transformation of the nature and purpose of collaborative research (Edwards and Brannelly 2017). This transformation implies "working with, not working on" people (Lieberman 1986), sharing goals and developing each other's ideas, i.e. collaborative work in which researchers consider local actors as allies instead of participants (Vásquez-Fernández et al. 2018). Essential in this collaborative work is the willingness to share and learn from each other, recognising and valuing each other's knowledge and working together to address current societal problems.
In the different forms of collaborative research, interaction with society is critical. Dialogue and participatory approaches are the primary means to co-create knowledge, empowering participants as co-learners or co-researchers and enabling social transformations (Phillips and Napan 2016). However, studies have shown that neither dialogue nor participatory methods guarantee equal participation (Herrador-Valencia et al. 2012). Phillips and colleagues (2018) argue that there is a proliferation of collaborative approaches that take the positive values of dialogue for granted, assuming that following dialogical principles and methods is a sufficient condition for achieving democratic and inclusive forms of knowledge production. In line with these ideas, critics of consensus-based dialogue highlight the risks of using dialogical approaches under asymmetric power conditions (Watson 2006; Miraftab 2009).
Academics interested in counteracting the effects of power relations in knowledge production have developed inclusive, decolonising and feminist approaches that try to give voice to those who do not have it (Edwards and Brannelly 2017). These approaches have focused their efforts on developing methodological tools to transform the participants into active agents in a co-creation process (Phillips and Napan 2016). However, some dialogic methodologies tend to romanticise dialogue, ignoring or minimising the impact of power-knowledge tensions on collaborative spaces (Phillips et al. 2018). This position is in strong contradiction with the identified and reported recurring challenges encountered when implementing collaborative work, such as the difficulties of building trust, understanding each other's perspectives, and recognising the value of different knowledge systems (e.g. Chilisa 2017;Hill et al. 2019).
We argue that, until now, the efforts made to overcome the challenges of collaborative work have focused on giving a voice to marginalised and vulnerable populations; yet the listening component has been neglected in the collaborative research literature. As Gayatri Spivak (1990) points out in her seminal work "Can the subaltern speak?", the crucial question is "who will listen?" rather than "who should speak?" (Spivak 1990, p. 59).
It is not our intention to dismiss dialogue in collaborative processes, although dialogue discourse constitutes a form of governance in which knowledge, power and subjectivities end up reducing the possibilities of an inclusive dialogue where all kinds of being and knowledge are recognised and equally valued (Phillips et al. 2018; Hill et al. 2019). We believe that communication skills, in particular the ability to dialogue, are an integral part of qualitative research methodologies (Fals-Borda and Rahman 1991). We consider it necessary to transform communication, making the inaudible audible through listening, a stance that requires inner silence, reflexivity and appreciation of the otherness (Lipari 2010).
This research coincides with the work of Abson and colleagues (2016) on the possibilities offered by the concept of "deep leverage points" to generate sustainable transformations. A deep leverage point is a place where an intervention aimed at changing mental schemas has great potential to create profound changes in a system. Considering language as "one of the most powerful means by which our conceptual habits are shaped" (Lipari 2010, p. 354), we propose that dialogic listening is a lever with the potential to change the forms of interaction between different types of knowledge. We believe that a shift in the mentality of how we communicate is necessary to close the gap between diverse knowledge systems. Our contribution seeks (i) to explore alternative pathways in which different types of knowledge co-exist and are enriched by each other; and (ii) to delve into the potentialities of listening in an inter-ontological and epistemological dialogue.
We focused our work on the knowledge production realm (Abson et al. 2016) and specifically on the role of academics in bridging different knowledge systems, values, and expectations. This article addresses the need to create a new ethical space in a cooperative spirit (Ermine 2007), i.e. a dialogical space emphasising listening instead of speaking. A listening that takes us beyond the self, not to transform the world but to be transformed (Lipari 2009).
We propose a methodological framework called the Circle of Dialogue of Wisdom (CDW) to leverage the possibilities that a listening-based dialogue offers to transform collaborative research practices and bridge diverse knowledge systems. This approach takes us beyond dialogue towards a listening engagement with the otherness (Lipari 2009), in which the dyad of listening and dialogue forms the foundation of social change (Dutta 2014). We developed this methodological framework through a North-South collaborative project in Bolivia as a case study that seeks to achieve inclusive territorial planning and a revaluation of ILK to contribute to the self-determination of Indigenous Peoples and Local Communities (IPLC). We, as indigenous and non-indigenous scholars, embarked with the Bolivian IPLCs, policy-makers and practitioners to navigate uncharted waters with the goal of co-creating new pathways that bridge diverse knowledge systems.
In this paper, we argue that a dialogic listening perspective deserves greater attention because it creates new directions of communication that go beyond discursive thought. We briefly summarise what we mean by the concepts of dialogue of wisdom and listening dialogue, highlighting the critical aspects of dialogue and communication in collaborative approaches and giving insights into the ethical principles of Andean philosophy. Finally, by describing our praxis, we shed light on the potentials and challenges of a listening-based dialogue approach.
Dialogue in the spotlight of communication theories
During the last decades, many scholars, practitioners and decision-makers from different disciplines have been using notions such as co-creation, co-production, co-design and participation to refer to the dialogue between diverse stakeholders aimed at producing new knowledge (Phillips 2011). This dialogic turn is becoming a sine qua non condition in research policies and is required by funding agencies eager to create "socially robust knowledge" (Gibbons et al. 1994). However, bridging different types of knowledge, culture and perspectives is a challenging endeavour, not only because of epistemological and ontological differences but also due to power-related issues. As stated by Watson (2006), using consensus-based knowledge under conditions of power asymmetry represents an imposition of one group on another and limits inclusiveness and diversity (Díaz-Reviriego et al. 2019). In the best case, it ends up incorporating some elements of one system into the other through a validation process imposed by the dominant system (Montana 2017).
Critical analyses of collaborative approaches conducted by communication researchers argue that expressions like 'equal footing', 'empowerment', 'participation' and 'dialogue' are buzzwords, turning dialogue and participation into empty discourses (Phillips and Napan 2016). They call for de-romanticising dialogue, arguing that practices which neglect the power-knowledge tensions arising from the diversity of interests, values and expectations present in a multi-knowledge-holder exchange could exacerbate exclusion instead of promoting inclusion (Phillips et al. 2018).
Scholars questioning the effects of participation in the neoliberal era argue that hegemonic structures use participation and consensus as tools to legitimise citizens' perception of inclusion and maintain hegemonic power, giving the illusion of an inclusive and equitable decision-making process (Miraftab 2009; Watson 2016). Scholars concerned about these mainstream practices at the science-policy interface have proposed to decolonise knowledge production by questioning researchers' deep assumptions (Miraftab 2009) and reflecting on the ethical principles that guide our research practice (Ermine 2007).
Dialogue of wisdom
The notion of dialogue of wisdom (DW), or diálogo de saberes, has its origins in Participatory Action Research (PAR). It seeks to go beyond the emancipation ideals of empowering the oppressed. DW proposes to create a space for a get-together of different cultures, disciplines and knowledge, recognising each other as equals (Archila 2017), not a rhetorical equality, but one that counteracts the cultural, political, economic and mental structures of oppression (Rivera 2012). It is a space for an inter-ontological and epistemological dialogue, finding possible common ground between different knowledge systems (Rist et al. 2011).
Interest in DW has been growing among Latin-American scholars in recent years, pointing to it as a critical factor in the construction of a more equitable, democratic and sustainable world (Rodríguez et al. 2016). There are multiple understandings of DW: some refer to it as a collective hermeneutical tool (Ghiso 2000), while others consider it an intercultural dialogue (Rodríguez et al. 2016). Leff (2004) argues that DW is an encounter of collective identities based on cultural autonomies, from which an intercultural dialogue unfolds. DW deactivates the violence of the forced homogenisation of a diverse world by recognising the other and their knowledge. In other words, DW is not a methodology; it goes beyond a strategy of inclusion and participation. It is a social practice that re-links ethics, ontology and epistemology, intertwining the real, the symbolic and the imaginary in the act of thinking, feeling and building a diverse world (Leff 2004).
Despite the different understandings of the DW, there are multiple commonalities. First, it is a dialogic sharing space between diverse knowledge systems. Second, it gives the same value to all knowledge systems. Third, it recognises the vital role of ILK in the construction of alternatives to the hegemonic models of knowledge production (Leff 2004;Tapia Ponce 2016;Archila 2017).
One of the main challenges of DW is how to deal with power asymmetries. Academics argue that it is necessary to work in parallel on the decolonisation of knowledge production, the mind and power structures to reduce power asymmetries (Smith 2012). On the other hand, Ermine (2007) considers that there is a need for another kind of dialogue. One that provides space for "observing, collectively, how hidden values and intentions can control our behaviour, and how unnoticed cultural differences can clash without our realising what is occurring" (Ermine 2007, p. 203). DW is a dialogue in which we embrace the "parallel coexistence of multiple cultural differences that do not extinguish but instead antagonise and complement each other" (Rivera 2012, p. 105).
Like PAR, DW is attentive to non-exploitative patterns in social, economic and political life and to values such as ideological and spiritual commitment (Fals-Borda and Rahman 1991). Unlike in PAR, in DW the action-reflection is more an internal than an external process, developed through listening, appreciation, sensitivity and self-awareness.
A listening dialogue
Inspired by Eastern philosophical traditions of Buddhism, Lipari (2010) proposes an encounter that goes beyond the limitations of language and binary thinking. In this dialogic space, listening can open new ethical possibilities. This dialogue requires inner silence and awareness of the otherness, where participants engage in a listening-being, embracing difference, uncertainty and plurality. As eloquently expressed by Lipari, Listening "does not merely tolerate but openly embraces difference, misunderstanding, and uncertainty, and invites entrance to human communication and consciousness beyond discursive thinking, to dwelling places of understanding that language cannot, as yet, reach" (Lipari 2010, p. 360).
To embrace this difference, we need to address the false oppositional dichotomies of "mind versus body; subject versus object; objective truth versus subjective emotion" (Chilisa 2012, p. 271), and try to return to the indigenous relational being, by connecting mind, body, spirit and feelings.
A listening dialogue understands communication as a listening space that puts marginal voices at the centre and critically interrogates the structures of power, goals, agendas and protocols that perpetuate social inequities and render invisible marginal voices (Dutta 2014).
Andean Philosophy
Ethics and values are vital in collaborative approaches. As academics working with IPLC, we must be aware of our privileges and how our actions may trespass others' spaces and reproduce colonising schemes. Therefore, building ethical spaces in a dialogue between different knowledge systems requires deep reflection on how to define what harms or enhances the wellbeing of the otherness (Ermine 2007).
According to Estermann (2006), there is no equivalent word for values in Andean philosophy. Like many other indigenous cultures, the Andean world is governed by principles deeply rooted in relationality with the others and the otherness. Qhari-warmi (Quechua) or Chacha-warmi (Aymara), or double duality, represents a binary relationship between the elements of the world (Antequera 2016). It is the union of opposite pairs that maintain balanced, reciprocal and complementary relationships (Quiroz 2006, p. 60). This union is possible thanks to the Chakana, which means bridge, or the communicative action between opposites that are complementary and corresponding at the same time (Antequera 2016). Two principles govern the relationships between the elements of the Qhari-warmi: the Tinkuy and the Kuti. The Tinkuy is a reunion of opposites, exchange and dialogue with the other. It is what Estermann (2006) calls the principle of relationality, considered "the life force of everything that exists" (Estermann 2006, p. 111). On the other hand, Kuti is an alternation of opposites, chaos and instability, which can be restored through reciprocity and complementary actions (Estermann 2006).
Three other principles of Andean philosophy are correspondence, complementarity, and reciprocity. The correspondence principle refers to the harmonic correlation between everything that exists in the macro and microcosmos. The principle of complementarity considers that the autonomous and separate individual is incomplete. The individual is only complete and integral when its opposite complements it. Reciprocity is giving back and occurs on multiple levels. A cosmic harmony exists with the reciprocity of actions, manifested in interpersonal relations, with nature and divinities (Estermann 2006). Finally, it is noteworthy that the Andean people of today are the result of multiple colonisations, which have forced them to transform and to reconfigure their way of relating to the outside world. Although these relational principles remain in today's Andean societies, it does not imply that they are non-hierarchical.
Circle of dialogue of wisdom: methodological framework
The framework proposed in this paper is the result of a collective reflexivity process. It offers a listening-based dialogue, incorporating the decolonising ideas of DW, supported by Lipari's philosophical notion of listening-being and guided by the ethical principles of Andean philosophy. As a decolonising approach, DW requires a critical reflexive lens to disrupt damage-centred research practices (Calderon 2016), embracing plurality and diversity. In Fig. 1, we outline the six phases of the Circle of Dialogue of Wisdom (CDW) approach and the principles that constituted the route map of our actions-reflections.
In the following paragraphs, we describe the six phases of the CDW; it is noteworthy that the steps should not be considered a linear process, but rather iterative and spiral. We also acknowledge that each collaborative experience is unique. However, we believe that our suggested framework can offer a quantum leap, not only in partner relationships but also, as proposed by DW, in reconceptualising participation, empowerment and collaboration.
Knowing each other
At the beginning of every relationship, we need to know who is with us, and the best way to discover it, according to Shotter (2009), is through listening in a way that allows us to recognise and connect to the world of the other. This connection implies building ethical relations (Shotter 2009) framed around four 'R' principles: relational accountability, respectful representation, reciprocal appropriation, and rights and regulations (Louis 2007). To guide this listening phase, we propose some questions to ponder about our real motivations in building the partnership, but also to reflect on whether the community benefits from the research. This phase is crucial to building a partnership and creating "the sense of a collective-we" (Shotter 2009, p. 40), and the best way to do so is by being honest and sympathetic.
Concerting rules
This phase consists of setting out the rules of participation and the needs of participants. It requires reflecting on the processes and how we can go from symbolic participation towards the creation of a respectful and supportive alliance, where all parties have responsibilities and obligations to maintain the collective-we (Shotter 2009). One strategy is following the three-layer method that includes the self, inter, and collective reflexivity (Nicholls 2009). As stated by Dutta (2014), reflexivity as a critical tenet of listening must be vigilant of the intention that resides in the researchers and their interactions. CDW should not be understood as another encounter where people meet to talk. CDW is an ethical space to engage in a dialogue of wisdom, with a real interest in listening to the other(s), discovering new things, embracing diversity, complementarity and divergence (Leff 2004).
Creating safe spaces
CDW considers each sharing moment as an ethical space in which we can express our deepest thoughts to each other without fear, and in which participants can build trustful and respectful relationships (Charli-Joseph et al. 2018). Safety is also related to fear, the unknown, and situations where we do not know the other(s), the language, or the topic we discuss. These situations create discomfort and anxiety, which makes it challenging to connect with the other. Safety is related to harmony, regardless of whether or not the different parties agree. Even in moments of misunderstanding, the balance should return in the end if we comply with the CDW principles.
The use of participatory tools can help to build this connection by transforming participants into allies, paying attention to power-relations, plurality and diversity, through (i) hands-on workshops, participatory mapping and participatory scenario planning, used as boundary elements to engage participants in reflexive moments of sharing (Steelman et al. 2015); (ii) honouring cultural protocols (Kovach 2010), with rituals or ceremonies depending on participants' beliefs; and (iii) promoting solidarity and empathy (Charli-Joseph et al. 2018), using alternative spaces such as having a drink or sharing a moment outside of the work environment to create close relationships which reinforce positive exchanges in more formal moments. In spaces where power relations are challenging to overcome, anonymity can be a strategy to break the silence and hear the voices of the less powerful participants. The goal of these strategies is to make people comfortable with and within the group, so they can freely express their opinions and beliefs, breaking with the privilege of the powerful.
Building affection
Building affection is recognising the value and potential that exists in diversity. This recognition is the basis of the reciprocity and complementarity principles of Andean philosophy. If we receive something from the other, we feel the need to reciprocate and give the other something back. These practices strengthen the relationships in the community (Antequera 2016). Similarly, in a collaborative project, helping each other, sharing different tasks regardless of position, complementing each other (senior or junior researcher, practitioner or local leader), strengthens group ties and creates more horizontal relationships.
This stage of CDW puts people at the centre, valuing their qualities and knowledge. It is related to sentiments and fondness; it is about learning to be a sentipensante, an "empathy-oriented researcher, who not only conducts research for academic purposes but also creates critical and ethical research that is based on solidarity" (Datta 2018, p. 16).
Co-creating solutions
How many of the collaborative projects in which we have participated have unintentionally ended up being monologues disguised as dialogue? This type of monologue "seeks to command, coerce, manipulate, conquer, dazzle, deceive or exploit" (Johannesen 1971, p. 377), to achieve the consensus of the audience and impose its truth.
Only by acknowledging our unquestioned assumptions and analysing how different interests and values interfere in our dialogues can we stop our monologues and start listening to others and learning from each other (Lipari 2010). We do not suggest that CDW are spaces free of power relations and asymmetries, but being aware of them allows us to address them (Reid et al. 2016).
By creating learning communities, we can open collaborative solution spaces where partners share their know-how, expertise, time and all their available resources for the well-being and benefit of the group. In such an environment, participants think together, freely sharing knowledge, and welcome all ideas. However, this is not an easy task, because it implies changing old ways of thinking (Ermine 2007), and decolonising not only our minds but also our actions (Smith 2012). This collaboration can be made possible by exploring multiple solutions, giving privilege to marginalised voices, re-valuating ILK, resisting dominant discourses and decolonising researcher-driven leadership through listening-being (Lipari 2010).
Taking solutions to practice
In collaborative projects, it is necessary to take the time to analyse the effects and benefits of the proposed activities, adopting a listening-being attitude to be connected to the others and, at the same time, being aware of the inevitable power differences (Lipari 2010). It implies sharing actions and responsibilities. It entails co-designing solutions as well as co-monitoring and co-evaluating them, embracing the plurality of rationalities where different knowledge systems have a place (Chilisa 2017), and building collective ownership (Datta 2018). We could say that the ultimate goal should be to construct a polyphony of voices, as in a choir where soprano, contralto, tenor and bass co-exist in harmony, but each voice keeps its own identity.
In summary, the CDW approach centres on ethical principles that guide our thoughts and actions. It is a ritual where we share knowledge, beliefs and feelings in safe spaces created by respect, empathy and affection. The goal of this iterative self-inter-collective reflective process is to learn to listen to each other, to learn "to see our own privilege, our own context, our own deep colonising" (Johnson et al. 2016, p. 3), to discover new rationalities by embracing difference, complementarity and plurality, and to learn to transform ourselves by sharing knowledge, dreams and responsibilities. The six phases and the guiding principles are summarised in Table 1, offering a non-exhaustive list of methods and questions to ponder.
Circle of dialogue of wisdom: the praxis
Although the proposed methodological framework seems easy to implement at first glance, in practice it is a process that requires a profound change in the way of thinking, acting and communicating. It implies a genuine commitment to decolonise our thoughts. It requires permanent surveillance of our subjectivities and of how they reproduce colonial schemes. It also needs significant efforts to learn to listen to the intangible within each one of us. The CDW praxis allowed us to recognise how difficult it can be to change our habits of mind. Also, we witnessed the potential of listening-being to reflect on our unquestioned assumptions, values and hidden interests. CDW helped us to develop creative ways to deal with power-relations while, at the same time, re-valuating ILK. In the following lines, we summarise some of the challenges and lessons learned derived from our praxis working with our Bolivian partners.
Applying the CDW in the context of Bolivia
In 2016, Bolivia promulgated a new territorial planning law (Ley 777 del Sistema de Planificación Integral del Estado, SPIE).
Sharing knowledge and control: building the alliance
At the end of March 2017, a partnership was created between Bolivian and Belgian universities to develop, together with the communities and policy-makers, a suitable methodological tool based on ILK and constructed with IPLC. For six months, we discussed ideas for a project. The main interlocutors from Bolivia were two agronomists with extensive experience in agroecology and ILK. From the Belgian side, there was a Belgian professor and a Colombian researcher from educational sciences, with more than 15 years' experience of working with IPLC in Suriname and Colombia, respectively. The communication between us was fluid, not only because our research interests matched but also because of our values and commitments. Although the incentives for our relationship were primarily personal interests at first, our shared values quickly strengthened our ties. Our joint efforts paid off with a 2-year collaborative project. The project objective was to promote a society-science-policy dialogue to enhance the methodological guidelines of the PTDI by building differentiated qualitative indicators that could measure the different dimensions of Vivir Bien, using three Bolivian municipalities (Bolivar, Totora and Vacas) as case studies.
Before starting the project activities, we set up the research agenda and signed memoranda of understanding with the local community partners. Each municipality appointed a local co-leader and interlocutors for the project, and the communities selected the co-researchers who participated in the different activities of the project. The academic team comprised two professors (one from each university), three PhD researchers (two from Bolivia and one from Belgium), three master's students and three undergraduate students, all from Bolivia. The academic team consisted of male and female researchers from different disciplines (social and natural sciences), and five of them were Quechua native speakers.
The communities requested to be informed about the SPIE Law and wanted training in agroecological practices. To comply with this act of reciprocity, we coordinated with the municipalities and their local institutions and set up a training course. This space gave us the opportunity to enter the communities and start building closer relationships, discovering the environment and sharing daily life with IPLC before beginning the project's activities. Table 2 presents an overview of the actors involved in the activities of the project.
Our assumptions
When the project started, we considered that the conditions were optimal for collaborative work. The project idea came from a real problem expressed by the municipalities. The academic partners elaborated the proposal together and agreed on the scope and project goals, complying with the requirements of the municipalities and the IPLC. At first glance, we thought it was going to be an easy collaborative project to carry out, but reality showed us that the conditions were far from ideal.
Our assumptions, interests and perspectives were present from the moment we started writing the project proposal. We began by assuming that:
• Local authorities and IPLC were highly interested in the possibilities that the law offered to integrate the Vivir Bien approach into their territorial planning.
• The difficulties in implementing the new legal framework were mainly due to technical and economic reasons.
• A Bolivian legal framework, promoted by an indigenous government, offered more possibilities for IPLC (including women) to participate.
• We could improve women's participation by working with female researchers and speaking the local language.
In the beginning, municipal authorities acknowledged that their main problems were the lack of qualified personnel and resources to carry out inclusive planning as proposed in the law. When we started our CDW with IPLC, we understood that although these problems were real to some extent, other underlying reasons and local realities were slipping out of sight.
We started listening not only to multiple and contradictory voices but also to silences that gave us a different perspective on the problem. These voices and silences showed us that (i) it was not only a methodological issue that could be solved by researchers speaking the same language, and (ii) a legal framework, even one proposed by an indigenous government, does not automatically guarantee the inclusion and self-determination of IPLC.
Especially during the knowing each other phase, but also throughout the creating safe spaces and building affection phases, we identified and reflected on power issues, political struggles, unresolved conflicts, corruption, migration, modernity and gender inequality, among other realities. All of these were present in the territories and affected the engagement and self-determination of IPLC. We understood that talking about territorial planning without considering these realities would be a form of blindness.
In a collaborative process, it is essential to take the time to discover the context, and listening offers an opening for interrogating the inequities of power distribution (Dutta 2014). We must not forget that collaboration takes place in a specific and unique environment that we need to explore with an open mind and without judgements, paying attention to avoid perpetuating the values embedded in the interest of the elite and their power structures.
Dealing with power-knowledge relations
Bolivia is a hierarchical society where multiple factors, such as titles, origins and even gender, determine relationships. These power scales were evident in all spaces: within the academic team, in the communities and also in gender relationships. Considering the academic project team, we perceived that power-knowledge and gender relations were affecting the possibilities of a dialogue between equals during the CDW meetings.
Although horizontality was emphasised and everyone was encouraged to participate, introverted students, mainly women, were always silent. After several attempts using participatory tools, we recognised that they did not feel safe enough to express themselves in those spaces. The creating safe spaces and co-creating solutions phases helped us to overcome this issue: we incorporated an anonymity strategy to create a safe space for interaction. We used the traditional pen-and-paper option to write down answers to discussion topics. The anonymously written topics were stuck on a pinboard to open the public discussion. Once anonymously raised topics and issues became available, it was easy to talk about them, construct solutions together, and share responsibilities when implementing the solutions, promoting complementarity and unity. The approach not only guaranteed that all voices were heard but also that power was redistributed, considering each participant's interests and expertise and embracing plurality. CDW helped the academic team to look at power-knowledge in teacher-student and peer-to-peer interactions and to propose alternative ways to support an active, project-based learning methodology. In particular, the last three phases of our methodological framework gave us ideas about how to give students the responsibility and opportunity to take part in their learning process by acquiring and applying new knowledge in a real problem-solving context. Making the imperceptible voices audible helped alter power relationships, prejudice and the complex dynamics within the project team. It allowed silent students to be heard and provided spaces to reflect, breaking dominant discourses through listening. Our CDW approach provided elements to deal with these problems while improving mutual and cross-cultural learning and knowledge co-creation through our relational accountabilities (Datta 2018).
Re-valuating Indigenous and local knowledge
Concerning power-knowledge issues within the communities, the strategy was to create spaces for mutual learning while re-valuating ILK. The training sessions, which were acts of reciprocity, became crucial spaces for sharing and empowerment with the IPLC, Non-Governmental Organisations (NGOs) and local authorities. Together, we succeeded in building a team by embracing Andean philosophy and sharing knowledge and control. The training consisted of three moments. In the first moment, a proposed problem was discussed from the local perspective while trying to identify the key challenges. In the second moment, participants were asked to reflect on how they solve the problem in their communities and to investigate how older people and wise men/women handled the situation. The participants had to prepare re-valuation cards, explaining the problem and solutions based on local knowledge, and later shared their work with the group. In the third moment, we implemented a practical workshop combining ILK with academic and practitioner knowledge, which resulted in the co-creation of new practical knowledge that can easily be applied in their communities, as illustrated in Fig. 2. Another important outcome of the training sessions was that the co-leaders were empowered, and their leadership was crucial in working with the communities. Co-leaders were in charge of guiding the CDW together with the academic team; their leadership helped to create safe and inclusive spaces where participants felt more comfortable sharing their knowledge and experiences.
As mentioned before, it is crucial to take the time to discover the context that surrounds us and to adapt different participatory tools accordingly. In our case, workshops, participatory mapping and participatory scenario planning were other co-creation moments that allowed us to articulate different visions of the past, present and future, which contributed, at the same time, to reconstructing historical memory and revitalising ILK (Rodríguez and Inturias 2018). Our praxis also showed us the relevance of reflexivity between peers to address participants' interests and concerns and, at the same time, unfold ILK.
Decolonising knowledge, actions and beings
Another challenge we encountered in the three municipalities was to promote women as active participants. At the beginning, most women who came to the meetings remained silent and did not feel comfortable speaking in public. Although the principle of complementarity between men and women (Qhari-warmi) is widely defended by Andean people, in reality few women participate in decision-making spaces. Most of the time, women have been co-opted by men in organisations and political parties (Rousseau and Hudon 2017). Entering political spheres that have been exclusive to men carries a high risk of being subjected to sexual harassment and of losing legitimacy as public representatives (Rivera 2015). When talking to a woman leader in Totora, she explained that women do not participate because they are too timid to speak in public. She was also afraid, but participating in training sessions and meetings like the ones we organised helped her learn to be a leader and to urge other women to do the same.
Despite the efforts to close the gender gap in Bolivia by incorporating relevant legislation and including women in decision-making spaces, gender inequality persists. Changes at the institutional level do not have enough leverage to generate transformations. It is necessary to develop programmes and strategies in the planning process, specially designed for women, by women, to enhance women's participation in planning and decision-making spheres. To promote gender equality, as stated by Rivera (1997), there is a need for a simultaneous effort of cultural and gender decolonisation. Although legal and institutional frameworks could be a leverage point for generating sustainable transformations, as proposed by Abson et al. (2016), we considered that, for these legal frameworks not to remain rhetorical, it is necessary to empower citizens to take ownership. This empowerment can begin by revitalising ILK and turning communities into allies.
Conclusions
The CDW approach proposed in this paper offered guidance in using the potential of new dialogic forms based on listening for the construction of a parallel coexistence of multiple knowledge systems. Building on reciprocity, complementarity and respectful relationships, this approach opened up new possibilities for knowledge co-creation in collaborative research. The suggested CDW framework was a dialogue of wisdom in itself, in which concepts from Western science and Buddhist and Andean philosophy were put into dialogue, valuing what each one can offer and demonstrating that it is possible to build bridges between knowledge systems that seemed dissonant.
We demonstrated how CDW leveraged listening-based dialogue to transform collaborative research practices and bridge diverse knowledge systems. In the same way, we emphasised how CDW strengthened mutual and cross-cultural learning, revitalising ILK and decolonising teaching and research practices. CDW helped in the reconstruction of collective memory to understand local realities better and discover hidden dynamics in the communities. With the CDW, we were able to visualise new paths of collaborative research practices, addressing power imbalances and colonised and dominant discourses, striving to establish reciprocity and coexistence pacts between different knowledge systems. The benefits that this kind of listening dialogue could bring are multiple, not only for planning endeavours but also for society-science-policy encounters in other domains where power relations affect collaboration and in co-creation spaces that involve different knowledge systems.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-03-20T13:46:57.755Z | 2021-03-19T00:00:00.000 | {
"year": 2021,
"sha1": "c2c2e70c7807b3c879023c84e0ba38b8e114eeac",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11625-021-00937-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "c2c2e70c7807b3c879023c84e0ba38b8e114eeac",
"s2fieldsofstudy": [
"Philosophy",
"Environmental Science"
],
"extfieldsofstudy": []
} |
248867384 | pes2o/s2orc | v3-fos-license | Predicting treatment outcome in depression: an introduction into current concepts and challenges
Improving response and remission rates in major depressive disorder (MDD) remains an important challenge. Matching patients to the treatment they will most likely respond to should be the ultimate goal. Even though numerous studies have investigated patient-specific indicators of treatment efficacy, no (bio)markers or empirical tests have so far made their way into clinical practice. Therefore, clinical decisions regarding the treatment of MDD still have to be made on the basis of questionnaire- or interview-based assessments and general guidelines, without the support of a (laboratory) test. We conducted a narrative review of current approaches to characterize and predict outcome of pharmacological treatments in MDD. We particularly focused on findings from newer computational studies using machine learning and on the resulting implementation into clinical decision support systems. The main issues seem to rest upon the unavailability of robust predictive variables and the lack of application of empirical findings and predictive models in clinical practice. We outline several challenges that need to be tackled at different stages of the translational process, from current concepts and definitions to generalizable prediction models and their successful implementation into digital support systems. By bridging the addressed gaps in translational psychiatric research, advances in data quantity and new technologies may enable the next steps toward precision psychiatry.
Introduction
With over 300 million affected people worldwide, depressive disorders have become one of the main causes of disability [1,2]. Even though there has been an increasing number of studies investigating the optimization of treatment for major depressive disorder (MDD), response rates in patients remain unsatisfactory [3,4]. In fact, rates have not much improved since the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study reported in 2006 that only 30% of patients reach the clinical goal of remission, i.e., the absence of symptoms, after the first trial of medication [5]. These numbers need to be taken seriously given the high level of suffering during depressive episodes, the high risk for suicide and comorbidities, and the huge social and economic impact [6,7].

The question of what constitutes the best treatment option for a specific patient with a depressive episode under certain individual circumstances is still difficult to answer. Approaches that allow the matching of patients with personalized treatments, often termed 'precision medicine', are widely called for in psychiatry [8,9]. Particularly in early stages of MDD treatment, it is often unclear whether an individual patient will profit most from pharmacotherapy or if other approaches, such as psychotherapy, brain stimulation, or a combination of treatments, might be more beneficial [10]. Models predicting treatment outcome on the basis of individual baseline characteristics can inform the stratification of patients according to their response chances and, consequently, the physician's choice of individualized treatment strategies. In oncology, for example, molecular approaches for tumor characterization have led to the discovery of important subtypes and greatly improved individualized treatments [11,12]. However, in psychiatry, prediction models have not yielded any reliable and valid (bio)markers that are ready for incorporation into clinical tools to support diagnoses or guide treatment decisions (for a review, see [13]).

For the treatment of specific psychiatric disorders, such as MDD, mental health professionals can refer to evidence-based, mostly country-specific, guidelines that have been formulated by committees of experts, such as the American Psychiatric Association [14] or corresponding organizations in other countries (e.g., Germany; [15]). These guidelines typically recommend, depending on depression severity, different initial treatment trials as well as a stepwise increase in treatment intensity if initial treatments fail. To some extent, they also take individual patient characteristics into account by adapting treatment recommendations to specific comorbidity or symptom patterns and the patient's prior subjective experience with the tolerability and efficacy of certain antidepressants. Standardized approaches in the treatment of MDD, such as guideline- and measurement-based [16] treatments, can help to improve treatment success rates [17]. However, treatment guidelines for MDD are also limited by the non-availability of accurate and validated markers of treatment outcome that are needed for the personalization of treatment. Therefore, treatment administration in MDD is often based on the physician's individual experiences and the patient's personal preferences [18], potentially adding to the low success rates of MDD treatment [19]. With the current lack of personalized treatment, it is more likely that a chosen treatment will be ineffective than effective for a certain patient [20].
Thus, a better understanding of individual factors contributing to treatment outcome in MDD continues to be a major topic in psychiatry. The present review summarizes definitions of and issues with the current concepts of treatment outcome and provides an introduction into approaches to study and predict antidepressant outcome in MDD. It focuses on clinical implications from these approaches and on implementations into clinical decision support systems.
How is treatment outcome in MDD defined?
In the absence of measurable biological indicators of depression severity, it is important to understand how treatment outcome in MDD is commonly defined and how patients are evaluated based on their rate of recovery.
Changes in symptom severity
In clinical studies, the efficacy of any kind of treatment in MDD, as in other psychiatric disorders, is routinely assessed with symptom questionnaires, including both clinician-based ratings as well as patient self-ratings. Table 1 summarizes the most typical definitions of treatment outcome based on these ratings. Symptom questionnaires are commonly analyzed by adding up their single items into a sum score. Treatment outcome can then be evaluated by simply interpreting this sum score after a certain length of treatment or by comparing it to a baseline score. However, even though the scales are semiquantitative, binary outcome definitions are widely used, the most common ones being 'response' and 'remission'. Treatment response implies a reduction of symptom severity compared to baseline severity by a certain amount (usually by at least 50%), whereas remission requires symptom scores to drop below a certain threshold (e.g., ≤ 7 on the 17-item Hamilton Depression Rating Scale (HDRS); [29]). Since the concept of response relies on the percentage change in symptom severity, it strongly depends on the baseline score. Remission, on the contrary, does not rely on baseline symptom severity at all. From a clinical perspective, remission is the more desired outcome as remitted patients are generally considered symptom-free and, for the time being, fully recovered. Compared to patients who report residual symptoms after treatment (e.g., response without remission), remitters show a reduced risk of subsequent relapse [30,31].
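As a minimal sketch of how these two binary outcomes derive from the same sum scores, the snippet below implements the definitions quoted above (a reduction of at least 50% for response; an endpoint cutoff of ≤ 7 on the 17-item HDRS for remission). The example scores are hypothetical and chosen to illustrate the baseline dependence of response.

```python
# Minimal sketch of the binary outcome definitions described above.
# Response depends on the baseline score; remission does not.

def is_response(baseline: float, endpoint: float) -> bool:
    """Response: symptom reduction of at least 50% relative to baseline."""
    assert baseline > 0, "baseline sum score must be positive"
    return (baseline - endpoint) / baseline >= 0.5

def is_remission(endpoint: float, cutoff: float = 7.0) -> bool:
    """Remission: endpoint sum score at or below a fixed cutoff
    (here the conventional HDRS-17 threshold), independent of baseline."""
    return endpoint <= cutoff

# Two hypothetical patients with the same endpoint score of 14:
print(is_response(30, 14), is_remission(14))  # True False: responder, not remitted
print(is_response(16, 14), is_remission(14))  # False False: same endpoint, no response
```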
If depressive symptoms are continuously measured over time, outcome definitions are not restricted to absolute or relative measures, such as response or remission. Instead, trajectories of symptom development over time can be considered to evaluate treatment success. Many longitudinal studies and clinical trials collect data by applying symptom scales on a weekly basis, which allows outcome definitions built on data from more than one or two timepoints. With this information, more refined interpretations of treatment effects can be made for individual patients. Furthermore, symptom trajectories can be used to identify subgroups of patients with similar outcome patterns but different dynamics in change. With increases in computing power, advances in statistical methods and sufficient sample sizes, such trajectory-based approaches are becoming increasingly feasible.
Treatment resistance
In contrast to response and remission, non-response and non-remission can be precursors of so-called 'treatment-resistant depression' (TRD). Definitions of TRD also depend primarily on scores from symptom questionnaires and mainly focus on pharmacotherapy. Even though there is no unique definition [37], TRD is most commonly described as a major depressive episode with no response after two or more trials of adequate antidepressant medication from different pharmacological classes [38][39][40]. Still, although this definition seems to be the most prevalent and a useful common ground, many different definitions exist. Some of them vary fundamentally in their criteria, making them difficult to compare [38,41].
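Purely as an illustration of the criterion just quoted, the sketch below encodes the most common TRD definition as a simple check over a patient's treatment history. The trial-record format is a hypothetical assumption, and, as noted above, competing definitions would require different checks.

```python
# Hypothetical encoding of the most common TRD criterion described above.
# Each trial is recorded as (pharmacological_class, was_adequate, responded).

def is_treatment_resistant(trials) -> bool:
    """True if at least two adequate trials from different drug classes failed."""
    failed_classes = {drug_class
                      for drug_class, adequate, responded in trials
                      if adequate and not responded}
    return len(failed_classes) >= 2

print(is_treatment_resistant([("SSRI", True, False), ("SNRI", True, False)]))  # True
print(is_treatment_resistant([("SSRI", True, False), ("SSRI", True, False)]))  # False: same class
```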
Recovery of cognition and daily functioning
Apart from reduction of symptom severity and failed treatment trials, the desired outcome after a depressive episode also includes other aspects of the patient's recovery. Ideally, patients return to the same (or even a higher) level of well-being as well as to their way of living from before the disorder, including their daily functioning, i.e., their work, social contacts, and general quality of life [42,43]. This overarching goal of MDD treatment, helping patients to achieve all aspects of recovery, seems to be a stepwise process. For patients with acute moderate or severe episodes, a reduction of symptoms is naturally the first target. Hence, in clinical studies, especially in inpatient settings, symptom severity is more commonly measured than levels of functioning and positive affect.
Prediction models of treatment outcome in MDD
The endeavor of finding indicators of treatment efficacy in MDD has led to a remarkable number of publications from different psychiatric subfields. A large subset of these have looked at associations of preselected psychological and biological factors with treatment outcome. The main aim here was the identification of new (bio)markers using classical statistical approaches, such as regression models with null hypothesis significance testing based on p-values of the investigated predictors. The results from these association studies have been summarized in several systematic reviews and meta-analyses, often focusing on selected data modalities (but see [51,52]), such as sociodemographic and clinical measures [53], cognitive functioning [54], or blood biomarkers [55]. Table 2 provides a list of these publications grouped by data modality and by their ease of access in clinical practice. Overall, the most consistently identified and most predictive factors were derived from sociodemographic and clinical characteristics [19]. Information on a patient's social support, their baseline symptom severity, psychiatric comorbidities (e.g., anxiety disorders), or chronicity of the disorder, for instance, has repeatedly been associated with MDD treatment outcome [51][52][53]. However, an important shortcoming of these results is that none of the identified measures has been proven informative enough to sufficiently predict treatment outcome on its own. This issue has led to a "new generation" of studies which aim at creating prediction models based on a multitude of variables. These models use machine learning (ML) methods, mainly supervised learning with classification algorithms such as regularized logistic regression or tree-based methods [56], to combine the effects of many variables and to increase predictive accuracy. Hence, they do not necessarily focus on the identification of new predictors of treatment outcome but rather try to find the best combination of variables to maximize their predictive power. A clear and comprehensive review on ML models and their value for predicting treatment outcome in psychiatry was recently published [57], as well as a systematic review and meta-analysis of these approaches in MDD specifically [58]. Crucially, the development of such models needs to include some kind of validation in order to ensure that predictions are not specific to the data they were created from but also generalize to new data. Validation is often performed by dividing the initial data set into subsamples (e.g., training sample and validation sample) or by testing the model's performance on a completely independent sample [59]. Furthermore, sufficiently large data sets in terms of sample size are required to guarantee robustness and generalizability of the predictions. The majority of predictive ML models of MDD treatment outcome have thus been created on data from large patient cohorts coming either from clinical trials (such as STAR*D [60,61], Genome-based Therapeutic Drugs for Depression [62,63], or Establishing Moderators and Biosignatures of Antidepressant Response for Clinical Care in Depression [64]), or from observational studies (such as the Munich Antidepressant Response Signature project [32] or the Netherlands Study of Depression and Anxiety (NESDA) [65,66]). Since clinical trials usually compare different treatment arms (or treatment against placebo), the resulting predictions are likely to be treatment-specific and may not be readily applied to other treatments [60,62].
Observational studies, on the other hand, follow a more naturalistic approach by observing patients who are treated based on routine clinical decisions, which might lead to more heterogeneity in the data [67,68]. In general, prediction models of MDD treatment outcome based on sample sizes of at least several hundred patients (e.g., [60][61][62][63]) can predict treatment outcome (most often response vs. non-response or remission vs. non-remission) with moderate to good accuracies of 65%-75% [58]. This means that up to three quarters of 'true' responders/remitters are recognized as such by these prediction models. Most models that have been published so far have confirmed that the most reliable predictors of MDD treatment outcome come from established clinical and sociodemographic factors that had already been identified in earlier studies, such as initial symptom severity (e.g., [32,36,60,62]), number and duration of depressive episodes (e.g., [32,60]), personality traits (e.g., [32,66]), as well as employment status and education (e.g., [61,66]). However, only few studies exist that have assessed the additional value of other data modalities by comparing the performance of a multimodal model to a model using sociodemographic or clinical variables only. We here provide two examples of studies that have followed this approach using large sample sizes (at least several hundred samples) and ML methods. Iniesta et al. [63] showed that a prediction model combining demographic and clinical variables (e.g., depressive symptom scores, medication status, and stressful life events) with over 500,000 genetic markers (single nucleotide polymorphisms and copy number variants) led to slightly more accurate predictions (area under the receiver operating characteristic curve (AUC) of 0.77) than a model trained on the non-genetic variables only (AUC of 0.74; [62]). Similarly, Dinga et al. [66] compared a prediction model combining clinical and biological data (primarily somatic health measures, inflammatory and metabolic markers) to models including only one of the available predictor domains. Across all comparisons, the full model containing all variables performed better than the alternative models. The largest differences occurred when the alternative model was based on biological measures only, the smallest differences when it was based on depressive symptom severity scores (differences in AUC of 0.01-0.05). These results suggest that even though adding biological markers to prediction models can lead to increases in performance, their additional value on top of clinical data still remains small.
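To make the modelling logic concrete, the sketch below mirrors, under stated assumptions, the kind of comparison reported above: a regularized logistic regression evaluated with cross-validated AUC, once on clinical/sociodemographic features only and once with an added biological column. All feature names and data are synthetic placeholders, not variables or values from the cited cohorts.

```python
# Hedged sketch of the model-comparison approach discussed above; the data
# are random placeholders, so the resulting AUCs carry no clinical meaning.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 600  # cohort size on the order of the studies discussed above
X = pd.DataFrame({
    "baseline_severity": rng.normal(20, 5, n),   # e.g., HDRS sum score
    "n_prior_episodes": rng.poisson(2, n),
    "employed": rng.integers(0, 2, n),
    "inflammatory_marker": rng.normal(0, 1, n),  # stand-in biological measure
})
y = rng.integers(0, 2, n)  # placeholder response labels (1 = responder)

clinical_cols = ["baseline_severity", "n_prior_episodes", "employed"]
model = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2"))

auc_clinical = cross_val_score(model, X[clinical_cols], y, cv=5,
                               scoring="roc_auc").mean()
auc_full = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"clinical-only AUC: {auc_clinical:.2f} | multimodal AUC: {auc_full:.2f}")
```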
Clinical decision support systems in psychiatry
A suitable instrument to transfer predictive models from research into clinical practice is a Clinical Decision Support System (CDSS). CDSSs are any kinds of computer systems that work with clinical data or knowledge and are set up to assist healthcare professionals in decision processes [69]. These decisions can refer to both diagnosing a patient and selecting the best treatment [70]. Concretely, a patient's characteristics enter a CDSS to be evaluated based on implemented clinical knowledge in order to return recommendations to the clinicians [71]. Hence, these systems can improve clinical processes and help healthcare professionals benefit from scientific findings [72]. CDSSs have been used successfully in many medical disciplines (for a review, see [73]), but use in psychiatry or mental health is lagging behind. However, some systems have been developed for the diagnoses of mental disorders, e.g., for attention deficit hyperactivity disorder [74], MDD and anxiety disorders [75], subtypes of schizophrenia [76], or a broader range of disorders [77]. Other systems were designed more specifically and can also be of value for MDD, such as the NetDSS [78], a web-based CDSS with various functions, from patient registry to clinical outcome monitoring. An elegant tool for physicians and patients was set up by Henshall et al. [79]. They developed a recommendation system and tested it on a focus group comprising physicians, caregivers, and patients with several mental disorders, including MDD. By entering basic sociodemographic and clinical variables as well as by setting preferences for potential side effects, the software returned a graphical illustration of recommended interventions and their corresponding probabilities of effectiveness. A benefit of such a tool is that it uses individual data to tailor a treatment to each patient. Similarly, a few commercial tools have been developed lately, promoting improvements of treatment efficacy for mental disorders using individual patient data and predictive models [80][81][82].
Ultimately, such predictive systems can enhance personalized treatment, e.g., by indicating from the beginning which medication has the highest probability of leading to a beneficial response. Moreover, these tools can save physicians time and increase the precision of clinical judgements [83,84].
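As a hypothetical sketch (not the design of NetDSS or any other cited system), the core recommendation step of such a CDSS could take patient characteristics and a set of pre-fitted, per-treatment outcome models and return ranked response probabilities for the clinician to review.

```python
# Hypothetical core of a CDSS recommendation step. `models` maps a treatment
# name to any fitted classifier exposing a scikit-learn-style predict_proba
# (for instance, pipelines like the one sketched earlier, each trained on
# patients who received that treatment).
from typing import Dict, List, Tuple
import pandas as pd

def recommend(patient: Dict[str, float],
              models: Dict[str, object]) -> List[Tuple[str, float]]:
    """Rank candidate treatments by predicted response probability.

    The ranking is meant to support, not replace, the physician's judgement.
    """
    x = pd.DataFrame([patient])  # single-row frame with the model's features
    scored = {name: float(clf.predict_proba(x)[0, 1])
              for name, clf in models.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```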
Current challenges and unmet needs
With the increasing interest in precision psychiatry and outcome prognosis, many efforts have been invested in this field of research. Nonetheless, the core problem in translational psychiatry remains: translations of research findings into daily clinical work, in such a way that patients and clinicians could directly benefit from them, are practically non-existent. Due to the lack of validated tests as guidance for personalized medication, treatment administration still has to rely on generic guidelines and physicians' personal judgements. The potential solution appears to be twofold: first, robust (bio)markers of treatment efficacy need to be identified and built into prognostic models. Subsequently, if models are proven useful, the second step will be their translation into new tools for clinicians. The main issues and current challenges in this translational process as well as potential solution approaches are outlined below. Additionally, they are illustrated in Fig. 1.
Challenges in concepts and definitions
Up to 16,400 potential symptom combinations can lead to a diagnosis of MDD [85], which might essentially be a conglomerate of many different pathophysiologies [86]. Moreover, MDD shows a high degree of comorbidity with other mental disorders, both cross-sectionally [87][88][89] and over time [90]. Longitudinal studies, especially using registry data [91], have shown large variability of diagnoses across lifetime which is why a cross-sectional focus on MDD diagnosis might miss relevant longitudinal information that discriminates among disorder subtypes. Hence, transdiagnostic and longitudinal approaches (e.g., assessing lifetime disorders in diagnostic interviews) should be considered in clinical studies.
A second challenge is posed by the measurements and definitions of antidepressant outcome (see Table 1). Unlike other medical disciplines, which provide objective biological measures of disease severity or treatment success, psychiatry defines clinical outcomes based on subjective ratings (self-reported or clinician-rated). However, some of the most common ratings were shown to lack reliability [27,92,93] and to be incongruent among themselves, meaning that they do not measure exactly the same construct and are thus not fully comparable [28]. These issues limit the validity of findings and the generalizability from one outcome scale to others. Moreover, ratings of depressive symptom severity, such as the HDRS, the QIDS, or the BDI, evaluate many different symptoms and aspects of MDD, all influencing the respective sum score. It is possible for patients to show a 50% reduction of the sum score and be classified as responders, even when none of the core symptoms of MDD (depressed mood or reduced interest/pleasure in activities) have improved. Furthermore, patients with the same overall severity score can show very different symptom profiles, and thus have very different subjective experiences of their disorder. This important information gets lost when sum score data are used [94]. Explicitly differentiating between symptoms instead of using sum scores could help to identify indicators of specific symptoms and could thus lead toward more targeted treatments [95].
Moreover, antidepressant outcome is often defined as (partial) response or remission (see Table 1). Both terms represent artificially dichotomized variables, created based on more or less arbitrary cut-off values on a continuous scale, that is, the respective sum score (for remission) or the difference in sum scores (for response) on a symptom scale. Dichotomizing continuous variables always brings certain risks and comes with loss of information [96]. Consider two patients with very similar symptom scores during the course of treatment, e.g., symptom reductions of 55% and 45%, respectively. According to the common definition of treatment response, the first patient would be classified as a 'responder' whereas the second patient would be treated as a 'non-responder'. In fact, the second patient would be categorized together with patients who do not show any symptom reduction at all. Classifying patients in a data-driven manner, e.g., using clustering techniques to create more homogenous outcome classes, might be a promising alternative that has already been implemented in several studies [32][33][34][35]. Still, the resulting outcome groups strongly depended on the selected variables and the chosen clustering method. Hence, the number of identified groups varied, e.g., from five [33] to seven [32, 34] up to nine [35]. These discrepancies challenge their clinical usefulness as the obtained classes are likely not generalizable to most other settings. Nevertheless, especially if more than one type of outcome measure is available, clustering methods might be a good way to combine information and identify subgroups.
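The clustering alternative referred to above can be sketched as follows: weekly symptom trajectories are grouped into data-driven outcome classes instead of being dichotomized at a fixed cutoff. The trajectories below are synthetic, and the number of clusters is a modelling choice that, as noted, ranged from five to nine across the cited studies.

```python
# Sketch of clustering weekly symptom trajectories into outcome subgroups;
# the data below are synthetic placeholders, not patient measurements.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
weeks = np.arange(8)                     # 8 weekly assessments
start = rng.normal(22, 4, (300, 1))      # baseline sum scores
slope = rng.normal(-1.2, 1.0, (300, 1))  # individual change rates
trajectories = start + slope * weeks + rng.normal(0, 1.5, (300, 8))

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)  # k is a choice
labels = kmeans.fit_predict(trajectories)
for k in range(5):
    members = trajectories[labels == k]
    print(f"cluster {k}: n={len(members)}, "
          f"mean endpoint score {members[:, -1].mean():.1f}")
```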
Another issue with common measurements of treatment efficacy is the time frame. Patients in clinical trials are often measured over a few weeks only. Especially in disorders such as MDD, which can appear recurrently and show a risk of chronification [97], it is important to follow up on patients after a longer period of time. This could help differentiate between temporary improvements and long-term recovery. In the NESDA sample, 22% of initially remitted patients developed a recurrent episode within the following 2 years [98]. Identifying these at-risk patients early on might help to prevent subsequent episodes by scheduling regular checkups and implementing prevention strategies [99].
Even in the absence of reliable (biological) alternatives, sum scores on symptom questionnaires alone do not seem to be the most specific and clinically meaningful measures [95,100]. In a recent online survey, MDD patients, informal caregivers, and healthcare professionals were asked to indicate outcome domains that matter most in their opinion. They identified not only depressive symptoms but also domains of functioning, healthcare organization, and social representation, many of which are not measured in most clinical studies, let alone included in depression rating scales [44,101], highlighting the importance of including patient-centered outcomes. Another research team explicitly differentiated between opinions from doctors and patients [102]. Their survey revealed that physicians mainly considered alleviation of depressive symptoms to be most important for relief and cure from MDD, whereas patients rather focused on rehabilitation of positive affect. These results suggest that definitions and measures of treatment outcome should go beyond plain ratings of symptom changes and need to be broadened and potentially lengthened [42]. Relevant assessment instruments for many different domains of MDD characterization, including neurocognition, functioning and quality of life, as well as their suitability for routine clinical use have recently been reviewed [19] and should be considered when measuring treatment outcome in future studies.
Finally, novel objective measures that do not rely on subjective self-or external reports, such as behavioral and functional data generated by smartphones, wearables or other digital devices, could be of further value [103]. As long as no direct biological measure of treatment outcome exists, personal data collected from mobile devices, i.e., 'digital phenotyping', might become a promising alternative [104]. Ecological momentary assessments, actimetry, speech characteristics, or movement patterns, for instance, can be continuously and mainly passively collected in large amounts and in high temporal resolution. Sensor data and other information from wearable devices like smartphones have already been successfully applied in psychiatric research, especially in combination with ML and deep learning [103]. Future studies will need to prove if they can contribute to a deeper and broader characterization of treatment outcome and MDD.
Challenges for prediction models
Except for a few psychometric and sociodemographic factors, there are still no robust or well-replicated predictors of treatment outcome. Apart from a few promising pharmacogenetic tests [81,105], no biological measures qualify as stable biomarkers, nor are they used in clinical practice. Associations between specific measurements and treatment outcome are often of limited prognostic value, as statistically significant associations do not guarantee accurate and robust predictions. Therefore, the focus has started to shift from testing associations to improving predictions in order to forecast what is most beneficial for an individual patient and to personalize clinical decision-making [106].
Predictive ML models tackle this issue as they are built to be as accurate and robust as possible. The robustness of a model should be assessed by validating it on an independent data set [57], ideally by testing its performance and safety on new patients in a prospective clinical study. Nonetheless, several prediction models were not validated on external data sets at all (e.g., [32,64,65]). Others were less predictive when they were applied to other classes of antidepressants, suggesting that the identified predictors of treatment outcome might be agent-specific [60,62]. In addition, the main target variables in studies using ML were response and remission in their binary form [58], the downsides of which have already been discussed. Furthermore, psychiatric data often face the problem of high dimensionality while sample sizes remain relatively small [107]. This is often referred to as the 'curse of dimensionality': the more variables a data set contains, the more the sample size needs to increase (per variable) to allow reliable results [59]. Otherwise, resulting prediction models are likely to be biased and therefore need to be carefully validated on independent data to ensure their reliability. Moreover, prediction models based on biological data often show only restricted translatability into clinical practice, as they require precisely preprocessed data from time-consuming and expensive measurements. A prerequisite for a successful translation of a predictive model into clinical practice is that it consists of parameters that can be routinely accessed by a licensed physician without producing substantial extra costs. Psychological and clinical features as well as sociodemographic information can be evaluated easily by any trained clinician or via self-ratings. On the other hand, as indicated in Table 2, many biological measures, i.e., potential biomarkers, are comparatively expensive or hard to assess for physicians in common clinical settings. This is especially the case for neuroimaging, omics data, and endocrinological markers derived from a challenge test, for instance. Such parameters should only be preferred over less costly data modalities, e.g., questionnaire data, if their predictive performance is notably higher and thus justifies the additional expenses. Making use of other objective measures, such as data collected from smartphones and other wearable devices, might become a promising alternative [103]. Their collection would be economical and profitable for researchers as well as less time-consuming and free of stress for patients.
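A compact sketch of the external-validation step emphasised above: the model is fitted once on a discovery cohort and then scored, untouched, on an independent cohort. The cohorts here are placeholders; in practice the second sample would come from a separate study or site.

```python
# Sketch of external validation: fit on one cohort, evaluate on another.
# No refitting, feature selection, or threshold tuning may involve the
# external data, otherwise the performance estimate is biased upward.
from sklearn.metrics import roc_auc_score

def external_auc(model, X_train, y_train, X_external, y_external) -> float:
    """Fit on the discovery cohort, then score the untouched external cohort."""
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_external)[:, 1]
    return roc_auc_score(y_external, proba)
```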
In summary, well-performing and externally validated ML models are promising tools for future psychiatric practice [59], including the prognosis of treatment outcome in MDD.
Challenges for CDSSs
In order to translate predictive models into digital tools for everyday clinical use, CDSSs could be of help. Iniesta et al. [108] sketched a concise outline of the workflow for designing and choosing predictive models and, crucially, explained how to bring them into CDSSs. Still, as appealing as the idea of such publicly used tools might sound, they have not yet become prevalent in healthcare institutions.
The main challenge in MDD outcome prediction seems to be the lack of powerful models and established predictive patient characteristics. As outlined above, predictive models are still not robust and generalizable enough to guide daily clinical decisions. Only if the additional value of a predictive model is proven will its implementation into a CDSS lead to a successful supporting device. Biases in such systems, for instance, were shown to lead to underestimations of their effectiveness [109], high non-compliance rates among users [73], and even to wrong diagnoses by physicians [110]. This is particularly concerning given that working with a CDSS might influence clinicians in their decisions later on, even when they are no longer explicitly using the system.
Furthermore, before CDSSs can be fully implemented into clinical workflows, substantial ethical challenges need to be considered. Apart from data protection, which needs to be assured, questions regarding liability and responsibility for treatment decisions have to be addressed, especially when it comes to disagreement between physicians and support systems. Also, human interactions, conversations and relations between patients and mental health professionals play an important role, not only in psychiatric care [112,113]. Further necessary ethical considerations have been summarized by Chekroud et al. [57].
Due to these problems, a number of factors needed to sustainably establish CDSSs in clinical settings should be considered [73]: First, apart from having appealing visual designs and being user-friendly, the system should implement personalized, transparent, and reliable recommendations as well as comprehensive overviews for each patient. Second, physicians should keep the authority over treatment decisions and should still oversee algorithmic outputs [114]. They should be involved in the development of the system, receive training and not have to make adaptations in their daily working processes in order to use the application. Third, to circumvent organizational obstacles, CDSSs should be integrated into preexisting clinical computerized systems, such as electronic medical records or physician order entries [73].
Ultimately, however, the main incentive in research seems to remain the publication of novel findings; indeed, funding for the translation of existing findings into applications and technical devices is often more difficult to obtain [115,116]. Therefore, interdisciplinary work is needed, bringing together scientists, clinicians and, for example, information technologists for the successful development of CDSSs.
Conclusion
Tackling the medical treatment of MDD and increasing treatment efficacy have always been major challenges in psychiatric research. In this narrative review, we summarized current approaches to operationalize and predict treatment outcome in MDD. We highlighted findings from ML approaches and discussed their implementation into CDSSs. To date, numerous studies have investigated and discovered associations between biological and phenotypic patient characteristics and treatment outcome, producing growing evidence for potential underlying mechanisms. Large patient cohort data and ML methods have additionally produced predictive models with promising accuracies (e.g., [32,36,60,62,64,65]). Nevertheless, psychiatry has made comparatively little progress in applying the acquired knowledge into daily clinical work and in personalizing decisions based on empirically derived patient characteristics.
The main reason for this lack of translation seems to be the absence of robust and generalizable predictors of treatment outcome, especially of biological and other objectively measurable markers. Further quantitative characterizations of patients might help to identify more robust predictors and could provide support in medical decisions, such as choosing the most beneficial treatment for individual patients or subgroups of patients [117]. Once reliable indicators and prognostic models are established, the next challenge will be their implementation into clinical practice. Efficient systems with clear interpretation of results need to be introduced and made available for healthcare professionals. CDSSs can be useful tools to implement tests and predictive models to guarantee benefits for physicians and patients. To make this happen, research funding needs to put more emphasis on translational systems, i.e., the development of target-oriented and clinically useful applications. Cooperation with companies specialized in health information technologies might be of particular use for this endeavor. Finally, there needs to be a shift in psychiatry toward a data-driven stratification of patients as well as more precise, personalized treatments based on individual patient data.
Author contributions NR wrote the initial draft of the manuscript. EBB and TMB critically contributed to the writing of the manuscript and its revisions. All authors contributed to and have approved the final manuscript.
Funding Open Access funding enabled and organized by Projekt DEAL. This publication is funded by the Max Planck Institute of Psychiatry. NR is supported by the International Max Planck Research School of Translational Psychiatry (IMPRS-TP) and received funding from the Bavarian Ministry of Economic Affairs, Regional Development and Energy (BayMED, PBN_MED-1711-0003).
Declarations
Conflict of interest EBB is an editor of the journal European Archives of Psychiatry and Clinical Neuroscience. Otherwise, the authors have no financial or non-financial competing interests to declare that are relevant to the content of this article.
Employment All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Removal of Heavy Metals from Stormwater Using Porous Concrete Pavement
This study aimed to investigate the removal efficiency of a porous concrete pavement (PCP) for heavy metals, i.e., Cu, Pb, Ni and Zn, from stormwater runoff. A model PCP was designed; the porosity and coefficient of permeability of the pavement were 27.2% and 1.83 cm/s, respectively. Artificial stormwater containing heavy metals was passed through the pavement at a constant rainfall rate to mimic the stormwater rainfall-runoff condition. The artificial stormwater infiltrating through the pavement was then collected at two different pavement layers at different time instances. From the experimental investigations, it is observed that Cu, Pb, Ni and Zn concentrations are significantly reduced in the treated stormwater. At the first collection point, located below the sub-base layer and coarse sand layer of the pavement, the concentrations of Cu, Pb and Zn were reduced by 56%, 67% and 93%, respectively, compared to their initial concentrations, while the Ni concentration was reduced by only 20%. At the second collection point, located below the coarse and fine sand layers beneath the pavement, the concentrations of Cu, Pb, Zn and Ni were reduced by 92%, 89%, 100% and 100%, respectively.
Introduction
This study focuses on the application of porous concrete pavement (PCP) to treat heavy-metal-contaminated stormwater. PCP is a feasible alternative for reducing stormwater runoff and can significantly remove various heavy metals, such as Zn and Cu [1]. Porous concrete is an open-graded, zero-slump material comprising coarse aggregate, cement and water, and contains few or no fine aggregates, i.e., sand. It is also known as "no-fines" concrete. PCP usually has an interconnected void space of 15%-25% and a surface permeability of 300 to 2000 inch/h [2]. It captures rainwater by allowing water to seep into the ground, recharges groundwater, reduces stormwater runoff, ensures efficient usage of land and lowers the overall impervious surface. Although porous concrete has high water permeability and lower compressive strength and durability compared to conventional concrete, it has enough strength for use in parking lots, roof tops and driveways. As a rule of thumb, 150 mm of porous concrete pavement can carry the same light traffic that would normally be carried by 100 mm of conventional concrete pavement [3]. Since water quality maintenance and sanitation infrastructure have not kept pace with rapid urbanization and population growth, heavy-metal pollution of water is a major concern in many developing cities. Sources of heavy metals can be natural or artificial. Artificial sources include the direct disposal of untreated industrial waste, mining effluent containing heavy-metal contamination, and runoff of pesticides and fertilizer used in agricultural fields. Heavy metals are non-degradable in nature and can accumulate in the human body, resulting in damage to internal organs and the nervous system [4]. It is recognized that heavy metals such as Cu, Zn, Pb and Ni can harm the biological systems of ecosystems. A considerable number of studies have been performed on the removal effectiveness of porous concrete pavement systems for hydrocarbons, nutrients, fecal coliforms, metals, and various other contaminants [5][6][7]. The advantages of PCP systems are reduced runoff, improved water quality, sediment filtering, pollutant removal and increased infiltration of rainfall [8].
Recent studies related to porous concrete pavement include the application of PCP in municipal waste treatment [9] and the blending of geopolymer with porous concrete to remove heavy metals [10]. For the protection of environmental ecosystems, it is essential to remove or reduce heavy-metal pollutants. Removing heavy-metal pollutants from contaminated wastewater is very difficult and costly. If porous concrete pavement is used to remove pollutants, the pavement can be made from local materials; it is easy and economical to produce and has no adverse effect on the environment. The aim of the study is to assess the efficiency of porous concrete pavement (PCP) for the removal of heavy-metal stormwater pollutants at different layers. PCP is a viable alternative for reducing stormwater runoff in urban stormwater management. The study determines the stormwater quality before infiltration and compares the percent removal of heavy-metal concentrations in stormwater by the PCP. The main challenge of this study is to investigate the heavy-metal removal efficiency of PCP from stormwater in the context of Bangladesh and to assess its applicability.
PCP Model Preparation
The porous concrete pavement (PCP) consisted of seven layers: a surface layer, a base layer, a sub-base layer, a coarse sand filter layer, a fine sand filter layer and two layers of geotextile. The PCP model was designed as per the AASHTO, 1993 [11] guideline and followed typical layer thicknesses used for porous concrete pavements [10]. In this study, we used thicknesses of 4", 8", 8", 4" and 6" for the surface layer, base layer, sub-base layer, coarse sand layer and fine sand layer, respectively (Figure 1). The model of the porous concrete pavement had a cross-section of 2 ft × 2 ft and a height of 2.5 ft.
Surface Layer
This layer consists of porous concrete. The mixture proportion is 1 (cement) : 4 (coarse aggregate). This ratio is optimum for porosity and permeability [12]. In this study, we used the aggregate gradation shown in Table 1. Usually, water-cement (w/c) ratios between 0.27 and 0.30 are used with admixtures [13].
Base and Sub-base Layer
In this study, open-graded aggregate was used as the base, with aggregate sizes from 19 mm (20 mm nominal) down to 9.375 mm (10 mm nominal). A single sub-base layer was used, with aggregate sizes from 19 mm to 38 mm. A coarse layer comprising smaller-sized aggregate was placed above the sub-base, and sand filter layers were used for water quality improvement.
Sand Filter Layer
In the study, we used two sand filters below the sub-base: the first sand filter was coarse sand and the second was fine sand. Natural sand was used for the cushion layer. The fineness moduli (FM) of the coarse sand and fine sand were 2.56 and 1.67, respectively. The first and second sand filters in the square model were packed in layers of 4" and 6" thickness, respectively, with preparatory tamping, and were not fully compacted.
Geotextile Layer
A geotextile layer increases the pollutant attenuation capability [14], reduces heavy metals and suspended solids, and enhances the fine-particle retention capacity [6,15,16] of the porous concrete pavement system. In this study, plastic geotextile was placed in two layers, one between the sub-base and the coarse sand layer, and another between the coarse sand and fine sand layers. The plastic geotextile consists of 2% black carbon, which helps in the removal of heavy metals.
Porosity Test
The effective porosity of porous concrete is determined by the volumetric method [8]. In this method, a cylindrical test specimen is used, and the mass of water required to fill the sealed specimen is compared with the equivalent void volume to measure the porosity. The effective porosity is calculated by Equation 1:

P = [1 - (W1 - W2) / (ρw × V)] × 100    (1)

where P = total porosity of the test specimen (%), W1 = weight of the test specimen air-dried for 24 hours (gm), W2 = weight of the test specimen submerged in water (gm), V = volume of the test specimen (cm3) and ρw = density of water (gm/cm3). A porosity of porous concrete within 15%-30% is usually considered acceptable [17,18]. In this study, the porosity value is 27.2%, which is in the acceptable range.
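As a quick illustration of Equation 1, the following Python sketch evaluates the volumetric porosity; the specimen weights and volume are hypothetical values chosen only to land near the reported 27.2%, not measurements from this study.

```python
# Minimal sketch of Equation 1 (volumetric porosity).
def total_porosity(w_dry_g, w_sub_g, volume_cm3, rho_w=1.0):
    """Total porosity (%) from air-dried weight W1 (g), submerged weight
    W2 (g), specimen volume V (cm^3) and water density (g/cm^3)."""
    return (1.0 - (w_dry_g - w_sub_g) / (rho_w * volume_cm3)) * 100.0

# Hypothetical 10 cm x 20 cm cylinder, V ~ 1570.8 cm^3
print(f"porosity = {total_porosity(2850.0, 1706.5, 1570.8):.1f} %")  # 27.2 %
```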
Permeability Test
Permeability defines the flow of water through the material structure. Concrete mixture proportioning influences the porosity and permeability of porous concrete [19]. The permeability test of the PCP in this study was conducted by the constant-head permeability method [20,21]. The coefficient of permeability k is given by the standard constant-head relation:

k = (Q × L) / (A × h × t)    (2)

where k = coefficient of permeability (cm/s), Q = quantity of flow through the test specimen (cm3), L = length of the specimen (cm), A = cross-sectional area of the specimen (cm2), h = head difference (cm) and t = duration of flow (s). Porous mixtures with permeability values between 1 and 2 cm/s are recommended for use as a drainage layer of a pavement system [19]. In this study, the coefficient of permeability is 1.83 cm/s, which is acceptable for good drainage.
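A companion sketch for the constant-head calculation follows; all input values are hypothetical placeholders chosen only to yield a coefficient near the reported 1.83 cm/s.

```python
# Minimal sketch of the constant-head relation k = Q*L/(A*h*t).
import math

def constant_head_k(q_cm3, l_cm, a_cm2, h_cm, t_s):
    """Coefficient of permeability (cm/s) by the constant-head method."""
    return (q_cm3 * l_cm) / (a_cm2 * h_cm * t_s)

area = math.pi * (10.0 / 2) ** 2  # hypothetical 10 cm diameter specimen
k = constant_head_k(q_cm3=4314.0, l_cm=15.0, a_cm2=area, h_cm=30.0, t_s=15.0)
print(f"k = {k:.2f} cm/s")  # ~1.83 cm/s
```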
Rainfall Data Analysis
We collected the last 5 years of rainfall data from a rain gauge station in Dhaka city from the Bangladesh Meteorological Department (BMD). The average total rainfall for a year is 1910.8 mm. In this study, we experimented with a rainfall intensity of 100 mm/hour (about 5% of 1910.8 mm).
Stormwater Quality Test
Porous pavements can reduce pollutant loads by filtering, chemical degradation, adsorption and biological activity [8]. Copper (Cu) is an essential mineral that is naturally present in some foods and available as a dietary supplement. Lead (Pb) is a heavy metal that is denser than most common materials. Nickel (Ni) is generally considered to be one of the most toxic metals found in the environment; a higher concentration of Ni is harmful to the human body and can cause cancer of the lungs, nose and bone.
Synthetic Stormwater Preparation
In this study, we prepared synthetic stormwater in the laboratory. We analyzed Pb, Cu, Ni and Zn, heavy metals that exist in stormwater. We took standard solutions of Cu, Pb, Ni and Zn; the standard solutions were added to 35 liters of tap water and mixed well using a stirrer. After proper mixing, a sample was collected for testing in the laboratory to determine the actual concentrations of Cu, Pb, Ni and Zn. The synthetic stormwater was then ready for the experiment.
Experiment Procedure
The experimental setup of the porous concrete pavement is shown in Fig. 2. The synthetic stormwater tank was placed beside the pavement, above the pavement surface. A water pipe connected the tank at one end and a shower head at the other; the average water flow through the pipe was 2.7 liter/min. The water infiltrated through the pavement, and we then collected samples from two points: the 1st point located 24" below the porous concrete pavement surface and the 2nd point located 30" below the porous concrete pavement surface, as shown in Fig. 2. The water samples were collected at 3 minutes, 6 minutes, 11 minutes and 16 minutes from the start of the test session at each point. The tank emptied within 13 minutes. The labels of the 8 collected samples with respect to time are presented in Table 2.
Heavy Metals Test Results in Laboratory
Heavy metals (copper, lead, nickel and zinc) were analyzed by atomic absorption spectrophotometry (AAS). For digestion, 2.5 ml of diluted HNO3 acid (1:3) and 7.5 ml of diluted HCl acid (1:3) were added to 100 ml of synthetic stormwater sample. The acidified samples were kept overnight and then digested for two hours under reflux conditions. After cooling, the samples were filtered and the filtrate volume was adjusted to 100 ml with de-ionized water. The samples were then ready for analysis using the Atomic Absorption Spectrophotometer (AAS) Model AA-7000. The test results for the heavy metal (Cu, Pb, Ni and Zn) concentrations are summarized in Table 2.
Heavy metals in synthetic stormwater
The largest reduction of Cu concentration was achieved by the sand layers and geotextile layers (Figure 3a); the amount of Cu reduction did not vary between the coarse sand layer and the fine sand layer. The largest reduction of Pb concentration was achieved by the coarse sand layer and geotextile layers; Pb was also removed by the fine sand layer, but to a relatively low extent (Figure 3b). Only a small amount of the Ni concentration was reduced by the coarse sand layer (Figure 3c); relatively, the largest reduction of Ni concentration was achieved by the fine sand layer. In the synthetic stormwater, the initial Zn concentration was 0.0152 ppm. The Zn concentration gradually reduced with time; the total concentration was removed by the two sand layers (one coarse and one fine) and the geotextile layer (Figure 3d).
Removal Analysis
From Fig. 4(a), the Cu, Pb and Zn concentrations were reduced by 56%, 67% and 93%, respectively, while the Ni concentration was reduced by only 20%. From Table 3 we observed that, after 3 minutes, the zinc (Zn) concentration was reduced by 100% at the 2nd collection point, whereas the other three parameters were gradually reduced with time; the time required to remove 100% of the nickel (Ni) concentration was 11 minutes. At the end of 16 minutes, the Cu and Pb concentrations were reduced by 92% and 89%, respectively, as shown in Figure 4(b). When water passed through the 4-inch porous concrete pavement layer, 8-inch porous base layer, 8-inch porous sub-base layer, 4-inch coarse sand layer and 1-inch geotextile layer, the copper and lead concentrations were reduced by 56% and 67%, respectively; when the water passed through the additional 6-inch fine sand layer and 1-inch geotextile, the Cu and Pb concentrations were reduced by 92% and 89%, respectively, as shown in Figure 5. Likewise, after the 4-inch porous concrete pavement layer, 8-inch porous base layer, 8-inch porous sub-base layer, 4-inch coarse sand layer and 1-inch geotextile layer, the nickel and zinc concentrations were reduced by 20% and 93%, respectively; after the additional 6-inch fine sand layer and 1-inch geotextile, the Ni and Zn concentrations were reduced by 100%, as shown in Figure 5. The results showed that the average removal efficiency for Ni is higher in the fine sand layer, while the average removal efficiency for Zn is highest in the coarse sand layer; the removal efficiency for Zn was 100% from the first sampling time to the last. Legret and Colandini [15] found reductions of 79% and 72% for lead and Zn, respectively, during three months of runoff on a porous pavement surface. Hogland and Niemczynowicz [23] found a 62% reduction of zinc, 42% reduction of copper and 50% reduction of lead for a porous pavement system receiving snowmelt runoff. Some prior studies used porous reactive concrete [24] and geopolymer with PCP [1] to remove heavy metals from stormwater. In this study, we used locally available, low-cost materials, and the removal efficiencies for Zn, Pb and Cu are higher than in some prior studies [15,23]. Therefore, our proposed lab-scale PCP model can be tested for future practical application in the field.
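The percent-removal figures quoted above follow from a simple concentration ratio. The sketch below reproduces the calculation; the initial Zn concentration is the value quoted in the text, while all other concentrations are hypothetical values chosen only to match the reported removal percentages.

```python
# Sketch of the percent-removal calculation: (C0 - Ct) / C0 * 100.
def percent_removal(c0_ppm, ct_ppm):
    """Removal efficiency (%) from initial and treated concentrations."""
    return (c0_ppm - ct_ppm) / c0_ppm * 100.0

# Initial Zn = 0.0152 ppm (from the text); all other values hypothetical.
initial = {"Cu": 0.050, "Pb": 0.030, "Ni": 0.025, "Zn": 0.0152}
point2  = {"Cu": 0.004, "Pb": 0.0033, "Ni": 0.0, "Zn": 0.0}

for metal in initial:
    eff = percent_removal(initial[metal], point2[metal])
    print(f"{metal}: {eff:.0f}% removed")   # Cu 92%, Pb 89%, Ni 100%, Zn 100%
```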
Conclusions
This study evaluated porous concrete pavement structures in order to assess their capability to reduce the concentration of heavy metals in stormwater. Synthetic stormwater quality analyses after passage through the model show that the coarse sand layer and geotextile layer reduced the concentrations of Cu, Pb and Zn by 56%, 67% and 93%, respectively, but reduced the Ni concentration by only 20%; the atomic mass of Ni is relatively lower than that of Cu, Zn and Pb, so the atomic crystal size of Ni is smaller. When a fine sand layer and geotextile layer were added below the coarse sand layer, the concentrations of Zn and Ni were reduced by 100%, and the concentrations of Cu and Pb were reduced by about 90%. The coarse and fine sand layers can remove heavy metals because of their high adsorption capacity, and the plastic geotextile can remove heavy metals because it contains black carbon. The pavement thus improved stormwater quality by removing heavy metals, and the infiltrated stormwater can be used for groundwater recharge or in a water treatment plant. This research work has opened up new possibilities for further work to improve the metal removal efficiency of PCP. Additional studies are required to observe the clogging effect of porous concrete pavement, and further study can determine the efficiency after long-term flow over the pavement. The study can be extended to recycled materials, to determining the efficiency at various metal concentrations in stormwater, and to considering various water-cement ratios.
Lesbian Gay Bisexual Transsexual Self Esteem: Finding and Concerns
The aim of this study was to identify the self-concept of Lesbian, Gay, Bisexual and Transsexual (LGBT) individuals in Indonesia. Although LGBT is banned in Indonesia, research shows that many LGBT individuals live in Indonesia. They accept their condition and consider it a gift, but they feel uncomfortable when traveling and are often excluded; nevertheless, they do not face obstacles in self-development. They feel equal to others because their income and work are the same as other people's; they can work, help others and start businesses. Most LGBT individuals say they feel embarrassed and sad, and sometimes angry, when people ridicule them. They also said they could not show their identities openly; only a few LGBT individuals could do so openly, and only because the community knew before they revealed it. They do not want to become "normal" and regard their condition as a gift from God.
INTRODUCTION
LGBT is a social phenomenon that emerged in the early 90s and is increasingly present in people's lives [15]. The LGBT issue again became a global topic after the United States, on June 26, 2015, legalized same-sex marriage in all 50 states. This legalization turned out to have a significant impact on LGBT communities in various countries, encouraging them to fight for desires that could be legally recognized by the state and to legally carry out same-sex marriages. Health professionals are even being prepared to care for LGBT individuals [1]. Indonesia, however, forbids LGBT.
LGBT individuals are considered to have social welfare problems because their sexual behavior hinders their social life. The Indonesian Ulema Council (IUC) agreed on a fatwa about LGBT that included several provisions, among them that same-sex sexual orientation is not a gift from God but a disorder that has to be cured.
LGBT is haraam and considered a crime, so LGBT individuals can be punished by the authorities [2]. Similar conditions are found in the Philippines, where Filipino lesbian, gay, bisexual and transgender (LGBT) individuals are subjected to discrimination, prejudice and stigma from society, which in turn may contribute to poor mental health [3]. Violation of laws and norms about LGBT has an impact in the form of rejection by the social community, so that LGBT individuals are not accepted in daily relationships with other people and their social interaction in the community is limited. According to Dacholfany [4], LGBT behavior raises health problems: 78% of LGBT people are infected with sexually transmitted diseases, the average age at death of a healthy married person is 75 years, while the average for LGBT individuals is 42 years, decreasing to 39 years if LGBT individuals with HIV-AIDS are included in the proportion [4].
LGBT problems cause a negative self-concept in these individuals. The self-concept, or an individual's perception of himself, has an important role because it can influence the individual's behavior. Research conducted by Azizah [5] shows that the self-concept of homosexual students is negative. This happens because of the imbalance between the positive self-concept of homosexual students and the community's concept of them. Society still considers that homosexual phenomena violate the religious and social norms that exist in the community. The purpose of this study is to examine the self-concept of LGBT individuals.
Design
This study used a qualitative design with a phenomenological approach.
Participant
Participants in this study were 18 LGB people, constituting all of the LGB individuals in the group. All participants agreed to be subjects of this study and signed an agreement to that effect; research assistants then contacted the selected lesbian, gay and bisexual participants for in-depth interviews, which were recorded and transcribed.
This research is located on Kangean Island, part of the province of East Java in Indonesia, about 100 km from Sumenep. The only existing transportation is a ship that takes about 9 to 10 hours.
Ethical approval
All participants gave informed consent, and this study received ethical approval from the Health Sciences Faculty of Wiraraja University.
Data collection
Data were collected through in-depth interviews about self-esteem among LGB individuals in the study area.
Data analysis
Thematic analysis was used in this study. The interviews were transcribed, read over repeatedly and then coded; codes were grouped by shared ideas, and the themes were named. All members of the research team then gathered to determine the resulting themes, align their perceptions, and look for relationships between themes.
Prolonged engagement.
In this research, one of the researchers and primary data collectors was a local resident who had interacted with LGB individuals for more than 20 years, so that data collection could be done easily even though the data were sensitive. Researchers interviewed participants directly, or by telephone if the participant did not want to arouse suspicion during the interview.
Rigour
To obtain valid data, the research was carried out carefully, with attention to credibility, transferability and dependability. To obtain credible data, researchers coded the data independently and then discussed with the other members of the research team to derive the themes; data triangulation was also carried out by comparing the interview results with observations, theory and previous research.
Transferability was addressed by displaying the data clearly and simply, so that readers can identify the similarities and differences between the research setting and clinical practice settings. Dependability was addressed by explaining the research methods and the path from data collection through data analysis.
RESULT
The results showed that the majority of respondents were gay, aged 21-40 years. Most respondents had a basic education, and most worked as laborers, farmers or fishermen.
Self Acceptance
Respondents whose physical appearance matched their desires (gay men who present as men) accepted their condition; they said there was no change in themselves and they did not want to change their appearance.
Interview excerpts follow: "Because of my appearance is still a man, I don't change my appearance (Resp. I/41 y.o)"; "I don't know how to say it, but if I must be honest, at the first time I feel sad why I must be born this way, but now I can be happy to accept myself because I believe if this condition is a gift from God. (Resp. V/ 32 y.o)". However, a respondent with a tendency to want to change their sex said they could not accept their condition.
Fear, stigma, exclusion and limitations
Respondents said they were afraid of gathering with many people, because people often made fun of them, so it was not comfortable to go anywhere, and they were also often ostracized; but they said they could still work and help others.
"I was often be mocked, especially if I walk alone at an area that so many people (Resp. J/ 29 y.o)" "Really difficult, I wish I can get along with other people, but I've been mocked seldomly (Resp. J/ 29 y.o)"
Development of Self Potential
Almost all of the respondents work, and they feel there are no obstacles to self-development. Some have set up barber shops and then taught the women around them to dress up, and they feel useful to those around them.
Desires to be accepted by the community and family
All respondents want to be accepted by the community, to live peacefully, to get together with family, and to have a family like others. They also hope that there is a solution so that their condition is accepted by the community. They try to interact with the community, but some of them have become indifferent and have surrendered to the community isolating them, so that only a few LGBT individuals are active in social activities.
Feeling equal to others
They feel equal to others because their income and jobs are the same as other people's.
"In term of income, I got equal or even more than other people, because I have a barber shop (Resp. L/41 y.o)" "Yes, it's same, although I used to be like this, I still can earn money (Resp. T/ 28 y.o)"
Feeling ashamed and angry also sad
Most respondents said they were ashamed and sad about their condition, and sometimes they felt angry and embarrassed when they were ridiculed by people. They said they were sad because of the community's treatment, and also because their families received ridicule from the community.
Showing Identity
Almost all respondents said they were shy and could not show their identities openly. Only a small portion could show them openly; even then, they were not shy only because the community knew about their orientation before they revealed it.
Desire to be Normal
Most respondents said that they did not want to go back to the way they were and considered that the situation was a gift from God.
DISCUSSION
Violation of laws and norms about LGBT certainly has an impact in the form of rejection in the general community, so that LGBT individuals are not accepted in everyday relationships and their association is limited to their own community. According to Dacholfany [4], LGBT behavior raises health problems, with 78% of people infected with sexually transmitted diseases. LGBT problems result in a negative self-concept in these individuals. The self-concept, or an individual's perception of himself, has an important role because it can influence the behavior that is expressed and the mental health of the individual. Previous research shows that the self-concept of homosexual students is negative. This happens because of the imbalance between the positive self-concept of homosexual students and the community's concept of them. Society still thinks that the homosexual phenomenon violates the religious and social norms that exist in the community [1].
Individual behavior will be in accordance with the way the individual sees himself. Individuals who have negative self-concepts tend to have poor mental health such as feeling depressed, isolated from the environment and feel life is meaningless [6].
LGBT individuals can no longer view and judge themselves rationally based on gift (religion), state legal products, and social norms. Based on S. C. Roy's Adaptation Nursing theory, psychological integrity (self-concept) is one of the four adaptation modes that must be achieved. Can the inputs to the LGBT adaptation process affect the ability to respond positively to oneself, so as to form self-concepts and behaviors that do not deviate?
CONCLUSION
Efforts to overcome LGBT problems can be made through counseling on gender status that is non-discriminatory and non-judgmental. Nurses can involve families, religious leaders, and community leaders to raise awareness through this approach. The involvement of family, religious leaders, and community leaders is an important domain in shaping positive self-concepts among LGBT individuals, because they are part of the social environment and integral to social support.
β-Blockers Simultaneous Nanoadsorption from Aqueous Solution via Dispersive Solid-Phase Microextraction with 1-Butyl-3-Methylimidazolium Tetrachloroferrate Functionalized Graphene Oxide GO-(Bmim)FeCl4
The objective of this work was the synthesis of graphene oxide functionalized with the ionic liquid 1-butyl-3-methylimidazolium tetrachloroferrate and the examination of its adsorption efficiency for seven β-blockers: propranolol, timolol, atenolol, oxprenolol, alprenolol, acebutolol and carazolol. The results of this work revealed that the ionic liquid is covalently attached to the graphene oxide sheets. Batch adsorption experiments indicate that all β-blockers studied were adsorbed by the GO-(Bmim)FeCl4 nanocomposite; compound hydrophobicity is an important predictor of adsorption, with propranolol, the most hydrophobic compound studied, adsorbed to the greatest extent. This highly sensitive and specific method gives limits of detection (LOD) of 10-20 pg/L and limits of quantification between 20 and 60 ng/L, respectively. The linearity of the method was satisfactory, with mean determination coefficients of 0.993 and 0.999, respectively. In order to test the applicability of the proposed method to real-life samples, effluent from a municipal wastewater plant was collected, spiked with the seven β-blockers at concentrations equal to 2 and 10 times the LOQs, and analyzed with HPLC. The method is straightforward and environmentally safe, and exhibits high enrichment factors and satisfactory recoveries from wastewater. To the best of our knowledge, this is the first time that a MIL-GO is used for analytical purposes in a practical, efficient and environmentally friendly microextraction approach for seven β-blockers. The highest removal rates were observed for propranolol, timolol, alprenolol and carazolol, ranging from 98%±1.5% to 99%±1.2%, and the lowest for atenolol, oxprenolol and acebutolol, ranging from 84%±2.6% to 86%±1.7%.
Introduction
Beta blockers (β-blockers) are a class of medications predominantly used to manage abnormal heart rhythms and to protect the heart from a second heart attack (myocardial infarction) after a first heart attack (secondary prevention) [1]. They are also widely used to treat high blood pressure (hypertension) [2]. Beta-blockers work by temporarily suppressing the body's natural 'fight-or-flight' responses; they reduce stress on certain parts of the body, such as the heart and the blood vessels in the brain. Beta-adrenoceptor blocking drugs were discovered to be important therapeutic agents in the treatment of both angina pectoris and hypertension. These β-blockers share the common property of beta-adrenoceptor antagonism, though they may vary in potency. They differ from one another in their additional pharmacological properties: membrane-stabilizing activity, cardioselectivity, and partial agonist activity [3]. The term cardioselectivity refers to the ability of some drugs to block beta 1 receptors without blocking beta 2 receptors. This is important in patients with peripheral vascular disease, obstructive airways disease, and in patients with insulin-dependent diabetes during hypoglycemic crisis [4]. Partial agonist activity is the intrinsic activity that some drugs have to stimulate the beta adrenoceptor while they are competitively antagonizing catecholamines. On the other hand, such drugs have less effect on resting heart rate, cardiac output, peripheral vascular blood flow, and resting respiratory function. As far as pharmacokinetic differences between drugs are concerned, lipid solubility is of increasing importance. The more water-soluble a β-blocker, the longer its elimination half-life, the less variation in steady-state plasma concentrations, and the less penetration into the central nervous system [5][6][7][8][9][10][11][12].
Many carbon-based nanomaterials, such as carbon nanotubes, fullerenes, nanofibers, nanohorns, graphene, and their chemically modified analogues, have been investigated as adsorption materials. Due to their properties, these carbon-based nanomaterials have found wide application in different areas of analytical chemistry and in many techniques in medicine [13][14][15][16]. The unique structures of carbon-based nanomaterials allow them to interact with molecules via different non-covalent and covalent forces. These interactions include hydrogen bonding, electrostatic forces, π−π stacking, van der Waals forces and hydrophobic interactions. Recently, graphene, a novel and indeed fascinating carbon material, has sparked tremendous research from both the theoretical and experimental scientific communities [17]. Due to their extraordinary chemical properties and very high specific surface area, carbon-based nanomaterials have been extensively applied as adsorbents in solid-phase microextraction techniques and applications.
Ionic liquids have received wide attention as an alternative to environmentally toxic organic solvents and are increasingly applied in many analytical chemistry techniques, including nanoadsorbent techniques, different methods of chromatography, and electrochemistry. Ionic liquids have many unique properties that can be tuned by the proper choice of the building cation and anion species, including polarity, hydrophobicity and viscosity. Their chemical nature allows the synthesis of many different ionic liquid solvents with different properties for different applications and areas [18][19][20][21][22]. The impact of ILs in analytical chemistry results from their unique properties: negligible vapor pressure associated with high thermal stability, non-molecular solvent character, tunable viscosity, good extractability for various inorganic and organic compounds, and miscibility with water and organic solvents. Modified materials consisting of ionic liquid and magnetic composites have been applied in many analytical chemistry extraction techniques [23]. One problem in the use of ionic liquids in chemistry applications is that at high temperatures the viscosity of the IL is reduced, resulting in a flowing state in which the IL can be lost. Scientists overcome this problem by modifying the ILs into polymeric ILs (PILs). PILs have unique properties such as higher thermal stability compared to monomeric ILs and greater resistance to flow. Moreover, PILs are tunable by proper functionalization and modification of the IL monomers, thus changing their extractive capabilities along with their physicochemical properties [24][25].
Additionally, by combining the two above-mentioned fields, i.e., carbon-based nanomaterials and MILs, it is possible to design and develop new microextracting phases with outstanding properties for the extraction of the seven β-blockers from water samples. Carbon-based nanomaterials functionalized with MILs are expected to possess unique adsorbent advantages with tunable microextraction capabilities. This paper provides a snapshot of our application of magnetic ionic liquid (MIL) functionalized graphene oxide (GO-MIL) in microextraction, as an integral step of sample preparation for the removal of the seven β-blockers from water samples. The results obtained are accurate and highly reproducible, making this a good alternative approach for routine analysis of the seven β-blockers in water samples. The low-cost approach is straightforward and environmentally safe and exhibits high enrichment factors, high absolute extraction percentages and satisfactory recoveries. To the best of our knowledge, this is the first time that a MIL-GO is used for analytical purposes in a practical, efficient and environmentally friendly MIL-GO microextraction approach for these β-blockers.
Reagents and chemicals
Chemicals used, such as graphite powder, KMnO4, NaNO3, H2SO4, HCl, DMF, the standard substances for chemical analysis and the ionic liquid 1-butyl-3-methylimidazolium tetrachloroferrate, were purchased from Sigma-Aldrich Chemical Co. The seven β-blockers were purchased from ANPEL Laboratory Technologies Incorporation (Shanghai, China). Chromatographic grade methanol, acetonitrile, and isopropanol were purchased from Sigma-Aldrich (St. Louis, MO). Aqueous solutions were prepared using deionized water (a Milli-Q water purification system from Millipore); the methanol, dichloromethane, n-hexane and acetone used were of HPLC grade. The glass fiber filters (GF/F, pore size 0.7 μm) were prebaked at 250 °C for 2 h prior to use. The solid-phase microextraction sorbent GO-(Bmim)FeCl4 was prepared in our lab. Stock solutions of the seven β-blockers and the standard were prepared in methanol at 1 mg/mL and stored in amber glass bottles at 4 °C.
Synthesis of ILs-modified GO composites
GO was prepared using the well-known modified Hummers method, and the MIL was grafted onto GO through an amidation reaction between the carboxyl groups of GO and the amino groups of the MIL. 50 mg of GO was dispersed in 100 mL of deionized water by ultrasonication for 1 h. Then 200 mg of EDC and 160 mg of NHS were added to ensure the homogeneity of the solution, and the solution was magnetically stirred for 2 h to activate the carboxyl groups of GO. After that, 200 mg of 1-butyl-3-aminopropyl imidazolium tetrachloroferrate was added, the mixture was ultrasonicated for 60 min and then stirred at 30 °C for 1 h. The final product was washed several times with deionized water and methanol.
SPE procedure and real sample preparation

20.0 mg of the GO-MIL nanocomposite was packed into a standard filter to act as a homemade SPE column. The column was preconditioned with 2 mL of methanol and 2 mL of water. 10 mL of the sample solution was passed through the column at a flow rate of 0.5 mL/min. Then, the adsorbed β-blockers were eluted with 1.0 mL of methanol and concentrated to dryness under a stream of nitrogen before detection. Fused silica and glass materials were used for the entire procedure to avoid any possible interferences. The quantification was based on external calibration with areas relative to the internal standard areas (at least eight calibration standard solutions; r2 was always above 0.98). The concentrations determined were corrected for the average blank value, and the relative recoveries were recorded.

Figure 1(b) shows the FTIR spectra of the GO-(Bmim)FeCl4 nanocomposites. Many surface functional groups are observed on the GO surface. The peak at approximately 3386 cm−1 is caused by the stretching vibration of the -OH group on the surface of GO; after the GO reacted with (Bmim)FeCl4 this peak shifted to 3430 cm−1, indicating covalent bond formation. The peak at 1732 cm−1 is caused by the C-O stretching vibration of the carboxyl group, and that at 1600 cm−1 belongs to the C=C bonds in the aromatic groups. In Figure 1(b), the peak at 1034 cm−1 is attributed to the in-plane asymmetric stretching of the imidazolium ring. The obvious peak at 1631 cm−1 and the peak at 1548 cm−1 are associated with the -CONH group. These results suggest that the MILs have been successfully grafted onto the GO surface to form GO-(Bmim)FeCl4 nanocomposites. Thus, the FTIR results prove that the functional amine of the ionic liquid is chemically bonded with the carboxyl groups on the surface of the graphene oxide. Moreover, the imidazolium ring of the ionic liquid with alkyl chains offers large π-π and hydrogen-bond interactions and enhances electrostatic inter-sheet repulsion, which increases the interaction capacity and connectivity between GO and (Bmim)FeCl4 (Figure 1).

Figure 2(a) shows the Raman spectrum of GO and Figure 2(b) shows the Raman spectrum of the GO-(Bmim)FeCl4 nanocomposites; a laser excitation of 532 nm was used. In both samples, clearly visible and strong peaks were noticed at approximately 1595 cm−1 and 1353 cm−1, assigned to the G and D bands, respectively. We noticed a red-shift of the D-band in the Raman spectra of the GO-(Bmim)FeCl4 nanocomposites, as shown in Figure 2(b). The red-shift is caused by the bonding between the C and N atoms, which changes the electronic structure of GO. Indeed, the intensity ratios of the two peaks (ID/IG) demonstrate the extent of defects on the GO surface caused by reaction with (Bapim)FeCl4 and can be used to reflect the extent of covalent binding. The ID/IG ratios of GO and the GO-(Bmim)FeCl4 nanocomposites are 1.01 and 1.07, respectively, corresponding to a slightly increased ID/IG ratio and thus an increase in disorder, and indicating the successful amidation reaction between GO and (Bmim)FeCl4 to form the final GO-(Bmim)FeCl4 nanocomposite product (Figure 2).
X-ray diffraction (XRD) analysis
X-ray diffraction (XRD) analysis was performed to characterize the crystallographic structure of graphite, GO, and the fresh and used GO-(Bmim)FeCl4 nanocomposites. As presented in Figure 4, the XRD patterns demonstrate the successful preparation of the GO-(Bmim)FeCl4 nanocomposites (Figure 4-b and c). The used GO-(Bmim)FeCl4 nanocomposite displays the same diffraction pattern as the fresh sample with a small decrease in intensity, which indicates that the crystal structure of the GO-(Bmim)FeCl4 nanocomposite is maintained after microextraction. Moreover, the discernible diffraction peak at about 9.3° belonging to GO can be detected in the pattern of the GO-(Bmim)FeCl4 nanocomposites but shifted to a higher value, and many other new peaks appear in Figure 4, suggesting a chemical covalent bond between GO and (Bmim)FeCl4. Thus, the XRD analysis indicates chemical covalent bond formation during the incorporation of (Bmim)FeCl4 into the GO structure (Figure 4).
XPS scanning spectrum
GO and the GO-(Bmim)FeCl4 nanocomposites were investigated by X-ray photoelectron spectroscopy (XPS) (Figure 5). As revealed in the TEM and SEM images, GO possesses a wrinkled single-layer structure with a semitransparent flake-like shape. After being grafted with the MIL, the GO-(Bmim)FeCl4 nanocomposites still maintain the lamellar structure, shown in Figure 6b and c, which ensures that they retain a high specific surface area and high adsorptive performance. An obvious change in the SEM images can be observed after the reaction of GO with (Bmim)FeCl4: the large GO sheets are reduced to small pieces, giving the appearance of holes of different sizes. As seen clearly in Figure 6d, many micro- and mesopores form on the GO-(Bmim)FeCl4 surface, with pore sizes in the range of around 2.1-2.5 nm. This morphology explains the novel nanocomposite structure of GO-(Bmim)FeCl4, which has the advantage of a high adsorptive capacity for the seven β-blockers (Figure 6).
Figure 7-a shows the HPLC chromatogram of the seven β-blocker standards, and Figure 7-b shows the HPLC chromatogram of the seven β-blockers extracted with the GO-(Bmim)FeCl4 nanocomposites by solid-phase nanoextraction. As shown in Table 1, the GO-(Bmim)FeCl4 nanocomposites possess a superior nanoextraction capacity, mainly for propranolol, timolol, alprenolol and carazolol, with removal efficiencies above 98% and even up to 100%. The results indicated that the GO-MIL adsorption capacities, especially for atenolol, oxprenolol, and acebutolol, were significantly weaker, with removal efficiencies between 85% and 89%. Previous work has indicated that the nature of the anion and a decrease in the carbon chain length of the ILs are the primary influences on the adsorptive performance of GO-MIL nanocomposites; the hydrophobic effect restrains the extraction process in the aqueous system [26,27] (Figure 7 & Tables 1-3). a: Mean of three determinations. The influence of pH, elution time, and salt concentration was evaluated to find the optimal extraction performance of GO-(Bmim)FeCl4 for the β-blockers. In our laboratory, the extraction capability of GO-(Bmim)FeCl4 for the β-blockers was tested in the pH range of 3.0-11. As shown in Table 1, the adsorption capacity (mg/g) clearly increased as the pH increased from 3.0 to 9.0; therefore, a pH of 9.0 was chosen for the subsequent adsorption tests. It is noteworthy that the HPLC peak areas decreased as the pH became greater than 9.0. Moreover, the HPLC chromatograms revealed that the β-blockers are relatively independent of changes to the sample solution pH in the range of 2.0-10, because they exist as neutral molecules. It is important to mention that the pH value primarily influences the charge of the ionic liquids and of other functional groups, such as the hydroxyl, carboxyl and epoxide groups on the surface of GO. Table 2 shows the effect of elution time on the extraction performance when the washing time was varied from 1 min to 5 min; our experimental results revealed that the best elution time was 4 min. Thus, the elution time was set to 4 min to ensure a balance between optimum time and efficiency. Experiments were conducted to examine the effect of an anionic surfactant, sodium dodecyl benzene sulfonate (SDBS), on adsorption. As shown in Table 3, the results indicate that SDBS significantly increases the adsorption of all β-blockers, especially propranolol. This result is potentially important because surfactants such as SDBS are likely to be present in wastewater effluents together with beta blockers and could influence their mobility in the environment. As indicated in Table 3, the SDBS concentration is a significant factor that influences the extraction capability: with 1% (w/v) SDBS added to the sample solution, the extraction performance for most of the seven β-blockers reached a maximum. This may occur due to the salting-out effect, which usually promotes extraction. However, when the SDBS concentration exceeds 2% (w/v), the mass transfer process at the solid/liquid interface becomes inhibited by the increased viscosity, lowering the diffusion rate of the β-blockers and decreasing the extraction efficiency. For this reason, 1% (w/v) SDBS was added to the β-blocker sample solution.
Validation of the method
The analytical parameters for the GO-(Bmim)FeCl4 nanocomposite nanoextraction of the β-blockers, such as linearity, correlation coefficients (r2), limits of detection (LODs), limits of quantitation (LOQs) and repeatability, were tested under the optimal experimental conditions using a series of spiked natural water samples. As indicated in Table 4, the linearity for the β-blockers ranged from 2 ng mL−1 to 200 ng mL−1, with correlation coefficients exceeding 0.99. The LODs and LOQs were defined as the concentrations equivalent to three and ten times the signal-to-noise ratio; in our results they ranged from 0.02 ng mL−1 to 0.88 ng mL−1 and from 0.06 ng mL−1 to 2.94 ng mL−1, respectively. The sensitivity of the method, with the use of a UV detector, is quite satisfying, mainly because the detector is easily available to most analytical laboratories. The reproducibility of the method was determined by intra-day RSDs (n = 3) and inter-day RSDs (n = 3) at a spiked sample concentration of 50 ng mL−1. The two RSD values were always less than 8.0%. Indeed, all of our results indicated the high sensitivity and good reproducibility of this new method (Table 4).
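As an illustration of the 3×/10× signal-to-noise criterion used above, the following sketch fits a linear calibration and converts a baseline-noise estimate into an LOD and LOQ; the calibration slope and noise values are hypothetical, not the study's data.

```python
# Hypothetical LOD/LOQ estimation from a linear calibration and a
# baseline-noise standard deviation (S/N = 3 and 10 criterion).
import numpy as np

def lod_loq(slope, baseline_noise_sd):
    """Concentrations whose predicted response equals 3x and 10x noise."""
    return 3 * baseline_noise_sd / slope, 10 * baseline_noise_sd / slope

conc = np.array([2, 5, 10, 50, 100, 200], dtype=float)  # ng/mL (Table 4 range)
# hypothetical peak areas: true slope 120 plus random detector noise
area = 120.0 * conc + np.random.default_rng(1).normal(0, 25, conc.size)
slope, intercept = np.polyfit(conc, area, 1)

lod, loq = lod_loq(slope, baseline_noise_sd=8.0)  # hypothetical noise SD
print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```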
Conclusion
This research highlights the preparation of a novel type of GO-MIL nanocomposite for use in the SPME of seven β-blockers from aqueous solution. In this work, a magnetic ionic liquid modified graphene oxide nanocomposite was prepared through a direct amidation reaction between GO and the MIL. The prepared nanocomposite was used as the nanoadsorbent in a fixed-bed column, which possessed the advantages of low column pressure and high nanoadsorption capacity. Moreover, the system was successfully applied to the microextraction of the seven β-blockers from aqueous samples with good reproducibility, a wide linear range and low LODs using a standard HPLC-UV detector. The proposed method provides a reliable approach for the removal and determination of β-blockers in aqueous solution, which can be applied in water treatment and the monitoring of water supplies.
Deformed alignment of super-resolution images for semi-flexible structures
Due to low labeling efficiency and structural heterogeneity in fluorescence-based single-molecule localization microscopy (SMLM), image alignment and quantitative analysis is often required to make accurate conclusions on the spatial relationships between proteins. Cryo-electron microscopy (EM) image alignment procedures have been applied to average structures taken with super-resolution microscopy. However, unlike cryo-EM, the much larger cellular structures analyzed by super-resolution microscopy are often heterogeneous, resulting in misalignment. And the light-microscopy image library is much smaller, which makes classification challenging. To overcome these two challenges, we developed a method to deform semi-flexible ring-shaped structures and then align the 3D structures without classification. These algorithms can register semi-flexible structures with an accuracy of several nanometers in short computation time and with greatly reduced memory requirements. We demonstrated our methods by aligning experimental Stochastic Optical Reconstruction Microscopy (STORM) images of ciliary distal appendages and simulated structures. Symmetries, dimensions, and locations of protein complexes in 3D are revealed by the alignment and averaging for heterogeneous, tilted, and under-labeled structures.
Introduction
In the past decade, the development of localization-based super-resolution microscopy has brought light microscopy to nanometer scales. Imaging beyond the diffraction limit has addressed structural and functional biomedical questions about subcellular organelles that could not be resolved by conventional light microscopy. For example, in situ dissection of macromolecular protein complexes includes the ciliary transition zone [1][2][3], neuronal synapses [4], the nuclear pore complex [5,6], the focal adhesion complex [7], clathrin-coated pits [8], the centrosome [9,10], and the escort complex at viral budding sites [11].
To obtain optical super-resolution images, the target proteins or DNA/RNA sequences are fluorescently labeled with organic dyes or fluorescent protein tags. However, the targets are usually under-labeled. The causes of low labeling efficiency include low-affinity antibodies, dye quenching, immature fluorescent proteins, and less-than-one dye-to-antibody conjugation ratios, among others. In certain cases, it might not be possible to achieve full labeling experimentally. Consequently, image alignment and averaging are often required to make accurate conclusions about the biological system of interest. Template-free cryo-EM image alignment procedures have been applied and adapted to average structures taken with super-resolution microscopy in 2D [12][13][14] and in 3D [15]. Unlike cryo-EM, the much larger cellular structures analyzed by super-resolution microscopy are often not completely rigid and homogeneous. For instance, the diameter of the rings of ciliary distal appendages varies from 369 to 494 nm [1], and the shape of the rings is often elliptical due to the flexibility of cilia. In general, many cellular organelles larger than 200 nm are semi-flexible. These semi-flexible structures keep the common symmetry and angular arrangement but have heterogeneous size and shape, either because of elastic deformation or because of variations in molecular composition. A wide range of cellular structures have such characteristics, including centrioles [9], ciliary transition zones, ciliary distal appendages [1][2][3], and pre- and post-synaptic densities [4]. In this case, direct alignment and averaging will lose structural details. Cryo-EM particle alignment deals with structural heterogeneity using classification and class-averaging [16][17][18]. However, super-resolution structure analysis often has far fewer images to start with, for the following two reasons. First, there are only a few target organelles in a cell; for instance, one embryonic fibroblast cell usually develops only one primary cilium [1], and each cell has only two centrioles [9]. Second, different from EM, super-resolution light microscopes detect fluorescent labels instead of electron density, and many seriously under-labeled structures cannot be used for direct alignment and averaging. The small number of usable super-resolution images is often not enough to meet the minimum required for classification. Consequently, heterogeneity in semi-flexible structures will cause misalignment in the direct rigid registration used in existing algorithms for super-resolution image alignment [12][13][14][15], leading to substantial degradation of the resolution of the aligned images.
To address these problems, we take structural flexibility as one degree of freedom for image registration. We designed algorithms to first deform the semi-flexible structures to a fairly uniform shape and size, and then to align all deformed structures based on cross-correlations. As characterized by Fourier Ring Correlation (FRC) analysis, we have shown that our deformed alignment algorithm achieves better aligned-image resolution for semi-flexible structures compared to the state-of-the-art rigid registration algorithm [12]. Although we developed the deformed algorithm specifically for the ciliary transition zone and distal appendages [1], this procedure could be generalized to many other subcellular structures such as the nuclear pore complex and the centriole.
Overview of the workflow
We designed two algorithms to align 2D and 3D semi-flexible ring structures, respectively. The 2D deformed alignment algorithm is suitable for heterogeneous ring-shaped structures in which the heterogeneity is mainly caused by the flexibility of the structure, for example, the transition zone and centrioles. The algorithm first deforms individual structures by circularizing them, and then aligns the images. The 3D alignment algorithm was designed for randomly oriented flat structures in 3D, where the random orientation causes heterogeneity in the x-y projection. This 3D algorithm rotates the structures in 3D and aligns the projection on the x-y plane, without structure deformation. Both the deformed alignment algorithm and the 3D alignment algorithm share the same alignment metric, the cross-correlation with the reference in the frequency domain, and the initial reference is simply the average image of the structures to be aligned. After iterations of alignment, all the images in the frequency domain are inverse Fourier transformed back to the space domain. The final alignment information is used to deform, translate and rotate the coordinates in the molecule lists of the super-resolution images. Both algorithms achieve sub-pixel precision and fast alignment in linear computing time. On top of the single-color alignment algorithms, we facilitated multicolor alignment by applying the 3D alignment parameters for the reference channel to the other imaged channels.
We aligned experimental data, single-color STORM images, to validate and demonstrate the utility of our deformed 2D alignment algorithm and 3D algorithm. We used simulated structures to further test these algorithms' capabilities to handle various image resolutions, labeling efficiencies, and structural complexity. In addition, we validated our two-color alignment algorithm using simulated two-color images. Because the quality of the alignment performance is reflected in the resolution of the average image of aligned super-resolution structures, we employed the FRC resolution [19] to quantify the performance of different alignment methods. All software code and sample data in this manuscript are available at https://github.com/Huanglab-ucsf/Deformed-Alignment.
STORM image acquisition
We have described the technical details of the STORM image acquisition and reconstruction methods in our previous work on the ciliary transition zone [1]. The same instrument and software were used to acquire the STORM data for algorithm testing in this work. Briefly, the photoswitchable dye pair Alexa 405/Alexa 647 was used for STORM imaging with a ratio of 0.8 Alexa 647 molecules per antibody. During the imaging process, the 405 nm activation laser was used to activate a small fraction of the Alexa 647 at a time, and individual activated fluorophores were excited with a 642 nm laser. The typical power for the lasers at the back port of the microscope was ~1 kW/cm² for the 642 nm imaging laser and 0-20 W/cm² for the 405 nm activation laser.
Analysis of the STORM raw data was performed in the Insight3 software, which identifies and fits single-molecule spots in each camera frame to determine their x, y and z coordinates as well as photon numbers. Sample drift during data acquisition was corrected using image correlation analysis. The lateral and axial localization precision was calculated to be 17 nm (standard deviation).
Coordinate-based image deformation
Taking structural flexibility as one degree of freedom for image registration, we deform the semi-flexible structures before aligning the images (Fig 1A). Compared to pixel-based images, it is much easier to apply complex deformation functions to coordinate-based images. Fortunately, the data format of localization microscopy is a list of coordinates. Each coordinate (x, y, z) for a 3D image or (x, y) for a 2D image locates a fluorophore. First, we used robust fitting to fit the coordinates of individual structures to ellipses described as

x² + Axy + By² + Cx + Dy + E = 0,  [1]

where the algebraic parameters A to E are converted to the geometric parameters center (x₀, y₀), long axis a, short axis b, and rotation angle φ. Robust fitting is an alternative to least-squares fitting when data are contaminated with outliers. It is more resistant to outliers because it uses least absolute deviations rather than least squares. Because STORM images are composed of noisy localizations, robust fitting should work better than least-squares fitting to find a function that closely approximates the localizations. In Fig 1B-1D, we compared the ellipse fits of a STORM image of ciliary distal appendages obtained with robust fitting and with least-squares fitting, respectively. Fig 1B is a STORM image of the distal appendages of a cilium; the structure looks like an ellipse with the long axis tilting towards the northwest. The robust fit results in an ellipse with the long axis tilting towards the northwest (red ellipse in Fig 1C), agreeing with the structure. However, the least-squares fit gives an ellipse with the long axis slightly tilting towards the northeast (blue ellipse in Fig 1D), disagreeing with the orientation of the structure. This comparison demonstrates the advantage of robust fitting for STORM data analysis. In this algorithm, coordinates beyond 1.5 standard deviations are removed to clean the images. With the geometric parameters obtained from the robust ellipse fit, we center the structure at (x₀, y₀) and rotate all the coordinates by -φ around the center. Finally, we circularize the structure by scaling the rotated coordinates,

x' = (R/a) x,  [2]
y' = (R/b) y,  [3]

where a is the long axis, b is the short axis, and R is the average radius of all structures to be aligned (Fig 1A).
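To make the deformation step concrete, the following is a minimal Python sketch of the circularization, not the authors' released code. It substitutes a simple PCA-based ellipse estimate (median center plus principal axes) for the robust conic fit described above; the function name and the √2 relation between axis length and the coordinate standard deviation (exact only for uniform angular sampling of the ring) are our assumptions.

```python
import numpy as np

def circularize(points, R):
    """Deform one semi-flexible ring into a circle of radius R.

    points : (N, 2) array of fluorophore (x, y) localizations.
    Returns the deformed (N, 2) coordinates.
    """
    # Robust-ish center: the coordinate-wise median resists outliers.
    center = np.median(points, axis=0)
    p = points - center

    # Principal axes approximate the ellipse orientation phi.
    evals, evecs = np.linalg.eigh(np.cov(p.T))   # eigenvalues ascending
    phi = np.arctan2(evecs[1, 1], evecs[0, 1])   # direction of long axis

    # Rotate by -phi so the long axis lies along x.
    c, s = np.cos(-phi), np.sin(-phi)
    p = p @ np.array([[c, -s], [s, c]]).T

    # For points spread around an ellipse ring, the standard deviation
    # along each principal axis is about axis_length / sqrt(2).
    a = np.sqrt(2.0 * evals[1])   # long-axis estimate
    b = np.sqrt(2.0 * evals[0])   # short-axis estimate

    # Scale both axes to the common radius R (cf. Eqs [2] and [3]).
    p[:, 0] *= R / a
    p[:, 1] *= R / b
    return p
```

After this step every structure is an approximately circular ring of radius R, so the subsequent registration only has to solve for translation and rotation.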
Fast image alignment using correlation in the Fourier domain
Different from our previously reported alignment framework using coordinate-based correlation, this work employs pixel-based correlation and the discrete Fourier transform (DFT) to accelerate computation. The DFT algorithm is a modification of the efficient subpixel image registration algorithm by Guizar-Sicairos et al. [20], which achieves efficient subpixel image registration by up-sampled DFT cross-correlation. That algorithm registers images using 2D rigid translation; we considered sample rotation and implemented rotational registration (Fig 1M). Briefly, the original algorithm first obtains an initial 2D shift estimate of the cross-correlation peak by fast Fourier transforming (FFT) an image to be registered against a reference image. Second, it refines the shift estimate by up-sampling the DFT only in a small neighborhood of that estimate by means of a matrix-multiply DFT. We added a loop that rotates the image to be aligned from 0° to 359° in 1° steps and performs the original first step after each rotation, yielding 360 cross-correlation peaks. The largest peak provides not only the initial 2D shift estimate but also the optimal angle of rotation. The images of single structures were aligned for ten iterations. To eliminate the bias that can be introduced by a template, we used the average of all the images to be aligned as the reference image for the first iteration. Each image was translated and rotated to maximize its cross-correlation with the reference image in the Fourier domain. The average of the images aligned in one iteration then served as the reference image for the next iteration. Our algorithm aligned the images with a precision of 1/100 of a pixel, and the up-sampling rate can be adjusted higher without increasing computation time. We quantify the alignment using the normalized root-mean-square error (NRMSE) E between f(x, y) and the rotated g(x, y), defined by

E² = min over α, x₀, y₀, θ of [ Σ_{x,y} |α g_θ(x - x₀, y - y₀) - f(x, y)|² / Σ_{x,y} |f(x, y)|² ],  [4]

where the summations are taken over all image points (x, y), α is an arbitrary constant, and g_θ(x, y) is the image g(x, y) rotated by angle θ, i.e., with coordinates transformed as

(x', y') = (x cos θ - y sin θ, x sin θ + y cos θ).  [5]

Finding the x₀, y₀, and θ for the minimum NRMSE is equivalent to finding the maximum cross-correlation r_fg, defined by

r_fg(x₀, y₀, θ) = Σ_{x,y} f(x, y) g*_θ(x - x₀, y - y₀) = (1/(NM)) Σ_{μ,ν} F(μ, ν) G*_θ(μ, ν) exp[i2π(μx₀/N + νy₀/M)],  [6]

where N and M are the image dimensions, (*) denotes complex conjugation, and F(μ, ν) and G_θ(μ, ν) are the discrete Fourier transforms of f(x, y) and g_θ(x, y).
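The following Python sketch illustrates the rotation-augmented DFT registration described above, built on scikit-image's phase_cross_correlation (an implementation of the Guizar-Sicairos matrix-multiply DFT algorithm). It is an illustration of the scheme, not the authors' released code; the coarse-then-refine split and the score definition are our simplifications.

```python
import numpy as np
from scipy.ndimage import rotate, shift as apply_shift
from skimage.registration import phase_cross_correlation

def align_to_reference(image, reference, upsample_factor=100):
    """One registration step of the scheme: test every in-plane
    rotation from 0 to 359 degrees in 1-degree steps, keep the angle
    with the largest cross-correlation peak (smallest registration
    error), then refine the translation to 1/upsample_factor of a
    pixel at that angle only.
    """
    best_angle, best_err = 0, np.inf
    for angle in range(360):
        rotated = rotate(image, angle, reshape=False, order=1)
        _, err, _ = phase_cross_correlation(reference, rotated)  # coarse
        if err < best_err:
            best_angle, best_err = angle, err

    # Refine the shift only at the best angle, via up-sampled DFT.
    rotated = rotate(image, best_angle, reshape=False, order=1)
    shift, err, _ = phase_cross_correlation(
        reference, rotated, upsample_factor=upsample_factor)
    aligned = apply_shift(rotated, shift)
    score = 1.0 - err ** 2   # NRMSE relates to the normalized peak
    return aligned, best_angle, shift, score
```

In the full procedure this step runs inside roughly ten outer iterations, with the mean of the images aligned in one iteration serving as the reference for the next.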
3D alignment
Based on the 2D alignment algorithm, we developed a 3D alignment algorithm that finds the minimum NRMSE (maximum cross-correlation) while rotating and translating the structures in 3D. According to Euler's rotation theorem, combinations of rotations around any two axes can reproduce a rotation around the third axis. For example, any rotation around the y axis is equal to the combination of sequential rotations around z by 90°, around the x axis, and around z by -90°. The 3D alignment algorithm therefore adds only one for-loop on top of the 2D alignment algorithm described above and excludes the deformation step. This for-loop rotates each individual structure around its x axis. The localization coordinates (x, y, z) are rotated by an angle φ in a preset range, with the coordinates transformed by the operation

(x', y', z') = (x, y cos φ - z sin φ, y sin φ + z cos φ).  [8]
The structures are aligned by finding the x₀, y₀, θ₀, and φ₀ giving the minimum NRMSE, or equivalently the maximum cross-correlation r_fg.
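A short Python sketch of this extra degree of freedom, consistent with Eq [8]; `render` (molecule list to 2D image) and `align2d` (the 2D routine sketched above, returning a score as its last output) are hypothetical helpers named here for illustration only.

```python
import numpy as np

def rotate_about_x(coords, phi):
    """Rotate (N, 3) localization coordinates about the x axis by
    phi radians, as in Eq [8]."""
    c, s = np.cos(phi), np.sin(phi)
    Rx = np.array([[1, 0, 0],
                   [0, c, -s],
                   [0, s,  c]])
    return coords @ Rx.T

def align_3d(coords, reference, phis, render, align2d):
    """The extra for-loop the 3D algorithm adds on top of the 2D one:
    tilt the structure about x by each candidate phi, project onto
    the x-y plane, run the 2D registration, and keep the tilt with
    the best cross-correlation score.
    """
    best_phi, best_score = None, -np.inf
    for phi in phis:
        tilted = rotate_about_x(coords, phi)
        projected = render(tilted[:, :2])   # x-y projection
        *_, score = align2d(projected, reference)
        if score > best_score:
            best_phi, best_score = phi, score
    return best_phi, best_score
```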
Two-color alignment
The two-color alignment includes two steps. We first picked the channel with higher resolution as the reference channel and aligned this channel with the algorithm described above. Second, the alignment parameters for the reference channel were applied to the other channel.
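A minimal Python sketch of the second step; the decomposition of the reference channel's alignment into a tilt φ about x, an in-plane rotation θ, and an x-y shift, as well as all parameter names, are our assumptions for illustration.

```python
import numpy as np

def apply_reference_alignment(coords, phi, theta, shift_xy):
    """Two-color step 2: re-use the reference channel's alignment
    parameters (tilt phi about x, in-plane rotation theta, x-y shift)
    on the other channel's (N, 3) molecule list."""
    cp, sp = np.cos(phi), np.sin(phi)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    ct, st = np.cos(theta), np.sin(theta)
    Rz = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]])
    out = coords @ (Rz @ Rx).T        # tilt about x first, then rotate in-plane
    out[:, :2] += shift_xy
    return out
```

Because both channels come from the same structure, re-using one set of parameters keeps the two channels in a common frame without registering the weaker channel independently.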
Coordinate-based image deformation and alignment
By deforming the super-resolution images, we correct the heterogeneity in semi-flexible structures that causes misalignment in rigid registration. The deformation improves the resolution and symmetry of the alignment result. With experimental data, we demonstrated this improvement by comparing the average image of structures aligned by the deformed alignment algorithm (Fig 1H) to the alignment result of the rigid registration algorithm recently developed in our group [12] (Fig 1J). FRC analysis quantified the resolution improvement made by the coordinate-based image deformation (Fig 1K & 1L). The FRC resolution of the average image of structures aligned with deformation is 41.4 ± 1.2 nm, while the FRC resolution for rigid registration is 45 ± 2.2 nm.
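For readers unfamiliar with the metric, FRC splits the localizations into two independent halves, renders each half into an image, and correlates the two Fourier transforms ring by ring; the resolution is read off where the curve first drops below a threshold (commonly 1/7). A generic Python sketch, assuming square images of equal size (not the code used for the numbers above):

```python
import numpy as np

def frc(half1, half2, pixel_nm):
    """Fourier Ring Correlation between two images rendered from
    independent halves of the localization data.

    half1, half2 : square 2D arrays of the same shape.
    pixel_nm     : rendered pixel size in nanometers.
    Returns (freqs, curve): spatial frequency (cycles/nm) per ring
    and the FRC value on each ring.
    """
    F1 = np.fft.fftshift(np.fft.fft2(half1))
    F2 = np.fft.fftshift(np.fft.fft2(half2))
    n = half1.shape[0]
    y, x = np.indices(F1.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int).ravel()
    num = np.bincount(r, weights=np.real(F1 * np.conj(F2)).ravel())
    d1 = np.bincount(r, weights=(np.abs(F1) ** 2).ravel())
    d2 = np.bincount(r, weights=(np.abs(F2) ** 2).ravel())
    curve = num / np.maximum(np.sqrt(d1 * d2), 1e-12)
    freqs = np.arange(curve.size) / (n * pixel_nm)
    return freqs, curve

def frc_resolution(freqs, curve, threshold=1.0 / 7.0):
    """Resolution = inverse of the first frequency where the FRC
    curve falls below the threshold (1/7 by convention)."""
    below = np.nonzero(curve < threshold)[0]
    if below.size == 0 or freqs[below[0]] == 0:
        return np.inf
    return 1.0 / freqs[below[0]]
```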
The algorithm was demonstrated with the experimental 2D STORM data of ciliary distal appendages of mouse tracheal epithelial cells (MTECs) with CEP164 labeled by Alexa Fluor 647. The deformed 2D alignment result (Fig 1H) of 31 under-labeled structures with semi-flexible shape and size (Fig 1A) showed clear 9-fold symmetry. The elongated distribution of CEP164 in each distal appendage was also represented in each individual subunit of the average image, and agrees with the fiber shape of the distal appendage imaged by EM [21]. Thirty images can be aligned in a few minutes on a 2.8 GHz CPU with 24 GB of memory (our lab computer), and the computation time is linear in the number of images to be aligned.
To test the capability of our algorithm to align images with various localization precisions, labeling efficiencies, and high structural complexity, we simulated three sets of localization images. (1) With a localization precision of 60 nm, we simulated twenty rings, each of which consists of 9 evenly distributed clusters, with a diameter of 300 nm (Fig 2A average structure, B single structure). The algorithm was able to resolve the 9 clusters clearly by aligning the simulated structures with 10 iterations in 363 seconds on a server with a 2.8 GHz CPU and 24 GB memory (Fig 2C). We used the autocorrelation of the aligned image as a function of the angle of rotation to quantify how well the symmetry is retrieved by the alignment. The peak at 38° in the autocorrelation function (Fig 2J) agreed well with the 9-fold symmetry in the simulated images. (2) To test the alignment of structures with low labeling efficiency, we simulated 20 9-cluster rings with 5 random clusters labeled, at a localization precision of 15 nm (Fig 2D average structure, E single structure). The average image of the aligned under-labeled structures recovered 9 clear clusters (Fig 2F). The peak at 40° in the autocorrelation function (Fig 2K) agreed with the 9-fold symmetry in the simulated images. (3) We also evaluated the alignment accuracy with simulated structures of higher structural complexity. We simulated 20 rings with 18 clusters that have alternating 15° and 25° angular spacing between neighboring clusters (Fig 2G average structure, H single structure). The simulated labeling efficiency is 67%. The average image of the aligned structures successfully represented the complex symmetry (Fig 2I). The peaks at 15°, 25° and 40° in the autocorrelation function (Fig 2L) again faithfully retrieved the complex angular distribution of the simulated structures.
3D alignment
We used the algorithm to align 29 experimental 3D STORM images of ciliary distal appendages with long axes varying from 376 to 470 nm (Fig 3B). These in situ structures were randomly oriented in 3D, and their labeling efficiency was about 60-85%. Before alignment, the average structure showed a ring shape but with no information on the angular distribution of the clusters. After 5 iterations of 3D alignment, the average image showed 9 clusters with approximately even angular distribution around the ring.
To test the capability of aligning largely tilted structures, we simulated ten rings with 9 clusters. These simulated structures were randomly rotated around the x axis within 60° and around the z axis within 360°, at a localization precision of 15 nm (Fig 3F). The average of the simulated ring images before alignment shows no information about the symmetry of the structures (Fig 3G). The 3D alignment result shows 9-fold symmetry (Fig 3H). The algorithm also provides good alignment results for simulated structures with a large tilting angle (90°) around the x axis (Fig 3I and 3J), and for a large localization precision (60 nm) in 3D (Fig 3K and 3L). These results indicate that our 3D alignment algorithm can accurately register largely tilted and noisy images, and can efficiently extract the common structural pattern among heterogeneous images.
Two-color alignment
To test the two-color alignment algorithm, we simulated twenty two-color super-resolution images with 15 nm lateral localization precision and 30 nm axial localization precision (Fig 4A). Each simulated structure is composed of two parallel rings with diameters of 200 and 300 nm, respectively. Each ring has 9 evenly distributed clusters, and the average number of localizations in each cluster is 50. The angular distributions of the clusters of the smaller ring and the bigger ring are offset by 20°, and the two rings are separated by 100 nm axially. Our alignment algorithm successfully aligned the images. The aligned average image shows the 9-fold symmetry and the 20° angular offset between the two rings in the top view (Fig 4C), and the 100 nm axial separation between the two rings in the side view (Fig 4E).
To validate the algorithm's capability of aligning two-color images that are tilted in 3D, we randomly rotated the structures used in the case above around the x axis in a range of 30° and around the z axis in a range of 360° (Fig 4F). The 3D alignment algorithm again efficiently aligned the tilted structures and recovered the 3D arrangement in the average structure (Fig 4H and 4J).
Conclusions
We have demonstrated a deformed 2D alignment algorithm that can accurately align semi-flexible ring structures from both experimental STORM images of ciliary distal appendages and simulated images with high noise and complexity. Information on symmetry and common structural features was efficiently extracted from a few tens of heterogeneous structures. The cross-correlation-based alignment algorithm is greatly accelerated by the DFT and registration in the frequency domain. We also demonstrated that our 3D alignment algorithm can accurately align images of structures tilted in space, using 3D STORM images of ciliary distal appendages and simulated structures randomly rotated around the x axis. For two-color alignment, we simply applied the alignment parameters for the reference channel to the other channel and achieved sub-pixel alignment of two-color super-resolution images. The two-color algorithm can easily be expanded to multicolor alignment. These algorithms enable registration of super-resolution images accurate to within a hundredth of a pixel, and the general geometric features of heterogeneous structures can be extracted with tens of images. All the algorithms, including the deformed 2D, 3D, and two-color alignments, are computationally manageable on a regular desktop computer. The deformed 2D algorithm can be applied to any semi-flexible ring-shaped structure, and the 3D and multicolor algorithms will provide a substantial advantage for any application that requires aligning and averaging super-resolution images in 3D. | 2019-03-15T02:58:00.896Z | 2018-11-05T00:00:00.000 | {
"year": 2019,
"sha1": "f6ac94ec6a9bd4433010b4008dc86cbf00e8847e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0212735&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f6ac94ec6a9bd4433010b4008dc86cbf00e8847e",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
233298727 | pes2o/s2orc | v3-fos-license | Two-Year Follow Up of the LATERAL Clinical Trial
Supplemental Digital Content is available in the text.
The use of left ventricular (LV) assist devices (LVAD) to support patients with end-stage heart failure is growing. Historically, LVADs have been implanted via full midline sternotomy. 1,2 As LVAD technology evolved and devices became smaller, the trend to implant using a less invasive thoracotomy approach has increased, and this has become the surgical strategy of choice at many centers. [3][4][5][6] The thoracotomy approach can be performed with optimal visualization of cardiac structures, even though the incisions are smaller compared with conventional sternotomy. 7,8 The LATERAL trial has previously reported the safety and efficacy of the thoracotomy implant technique for the HeartWare HVAD system (Medtronic, Minneapolis, MN) compared with conventional median sternotomy, by demonstrating noninferiority of 6-month survival on the original device free from disabling stroke, transplanted or explanted for recovery, as well as a significantly reduced hospital stay. 8 Although the number of patients with heart failure continues to grow, there remains a paucity of donor hearts, creating longer wait times for heart transplantation. Additionally, with the newly revised United Network for Organ Sharing criteria, 9,10 stable LVAD patients awaiting heart transplant are lower on the priority listing, often extending LVAD support times to rival those of destination therapy. As such, it is important to ensure that the adverse event (AE) profile of LVAD patients is low, to ensure a good quality of life (QoL) with low long-term morbidity. We now present long-term data on patients implanted with the HVAD system via thoracotomy, describing the AE profile through 2 years of support.
METHODS
The study design of the LATERAL trial, including specific inclusion and exclusion criteria, was described previously. 8 Briefly, between January 15, 2015, and April 26, 2016, 144 HVAD implants were performed at 26 investigational sites via a lateral thoracotomy approach in bridge to transplant (BTT) patients. The LATERAL patient population was comparable to other BTT trials, including 33% with ischemic cardiomyopathy, 23% prior cardiac surgery, >80% Interagency Registry for Mechanically Assisted Circulatory Support (INTERMACS) profiles 1 to 3, 3.5% cardiogenic shock, 18.8% chronic kidney disease, and 4.9% prior stroke. 8,11 The HVAD system was implanted via a left anterolateral thoracotomy with an upper hemi-sternotomy or a right anterior thoracotomy for outflow graft anastomosis to the ascending aorta. All implants were performed on cardiopulmonary bypass. All data were collected via the INTERMACS Registry database. Patients were followed for 2 years, or until device exchange, transplant, or death. The clinical data and study materials will not be made available to other researchers for purposes of reproducing the results or replicating the procedures.
Data Collection and Statistical Analysis
In addition to the primary end point and total hospital length of stay, 8 secondary end points of the LATERAL trial included major AEs (per INTERMACS Protocol 4.0 definitions), QoL, functional capacity, and survival. QoL was measured by the Kansas City Cardiomyopathy Questionnaire and EuroQol EQ-5D. Functional status was measured by New York Heart Association functional class and the 6-minute walk test.
Freedom from event analyses were performed using Kaplan-Meier methodology. Patients were censored from analysis at the time of original device exchange, explant, or death. AE comparisons of events per patient-year (EPPY) across time intervals were performed using Poisson modeling. Events were considered clinically independent. Postimplant QoL and functional capacity measures were compared with baseline measures using a paired t test at each time point. All statistical analyses were performed with SAS v.9.4 software (SAS Institute, Cary, NC). The study was conducted in compliance with Food and Drug Administration regulations for Good Clinical Practice and approved by each clinical site's institutional review board. All subjects or their authorized representatives provided informed consent.
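For orientation, EPPY is simply the event count divided by the accumulated patient-years at risk, and rates in two intervals can be compared under a Poisson assumption. The Python sketch below is illustrative only (the trial used SAS); an exact-style conditional binomial test stands in for the Poisson modeling reported, and the numbers in the usage comment are placeholders, not trial data.

```python
from scipy import stats

def eppy(n_events, patient_years):
    """Events per patient-year (EPPY): the adverse-event rate metric."""
    return n_events / patient_years

def compare_rates(e1, t1, e2, t2):
    """Compare AE rates in two intervals under a Poisson model.

    Conditional on the total count, e1 | (e1 + e2) is Binomial with
    success probability t1 / (t1 + t2) when the two rates are equal,
    which gives an exact-style test of the rate ratio.
    """
    p_null = t1 / (t1 + t2)
    p_value = stats.binomtest(e1, e1 + e2, p_null).pvalue
    return eppy(e1, t1), eppy(e2, t2), p_value

# Placeholder usage, counts and follow-up times illustrative only:
# rate_early, rate_late, p = compare_rates(e1=30, t1=10.0, e2=12, t2=40.0)
```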
RESULTS
This report describes the 2-year follow-up of the 144 BTT patients implanted with the HVAD pump using a thoracotomy approach between January 2015 and April 2016 in the United States and Canada. At 2 years, 53.5% of patients had undergone heart transplantation, 2.1% had been explanted for recovery, and 31.9% were alive on the original device. Seventeen deaths (11.8%) were reported, the most common cause of death being neurological dysfunction (n=4, 2.8%), followed by right heart failure (RHF; n=3, 2.1%).
WHAT IS NEW?
• The LATERAL trial is the first published multicenter clinical trial of a centrifugal flow ventricular assist device implanted via thoracotomy as bridge to transplant in patients with end-stage heart failure.
WHAT ARE THE CLINICAL IMPLICATIONS?
• This analysis provides the first long-term outcomes of left ventricular assist device patients implanted via this innovative approach that may help reduce complications associated with sternotomy, including stroke, bleeding, and right heart failure, while shortening index hospital stay.
• Further understanding of the long-term adverse events profile of left ventricular assist devices will advance the field's knowledge to develop best practices for optimizing patient management and surgical techniques, improving outcomes, and therapy adoption.
During the 2-year follow-up (event rates in EPPY; Table 1), Kaplan-Meier analysis of freedom from any stroke was 91% at 6 months, 88% at 1 year, and 82% at 2 years. Freedom from disabling stroke (modified Rankin Scale >3) was 96% at 6 months and 1 year, and 95% at 2 years (Figure). There were 3 LVAD exchanges for thrombus in the first year (2.1%) and none between years 1 and 2.
We evaluated the 2-year longitudinal temporal AE profile over time intervals of <30 days, 30 to 180 days, 6 to 12 months, and 1 to 2 years postimplant (Table 2) to assess the overall AE burden. Stroke, bleeding, and cardiac arrhythmia rates declined significantly after the first 30 days. After 6 months, most AE rates either stabilized or decreased. The hemorrhagic stroke rate between 1 and 2 years declined from 0.05 EPPY to 0.01 EPPY. There were no additional RHF episodes between 6 months and 2 years. An analysis of first-event-per-patient per category while on support, essentially a patient-based analysis, revealed a reduction or stabilization in the percentage of patients with a first event after 6 months, except for driveline infection (DLI), which may be attributable to increased activity as highlighted by the functional capacity measures, and ischemic stroke and cardiac arrhythmia, all of which declined initially with the larger ongoing support cohort but were higher at 2 years (Table I in the Data Supplement). These first-event percentages may be skewed by the reduced patient cohort at 2 years postimplant due to the number of patients who were transplanted or explanted (≈55%) during the course of the study. Hence, the AE-based analysis with EPPY provides a clearer understanding of the AE burden or risk over time.
Patient self-reported QoL measured by Kansas City Cardiomyopathy Questionnaire showed a mean improvement of 18.1 through 24 months, with the EQ-5D Visual Analog Scale improving by an average of 30.1 points above baseline (Table 3). Both New York Heart Association functional class and 6-minute walk test showed improvement from baseline through 24 months. The majority of patients had New York Heart Association functional class IV at baseline (76.4%), whereas by 24 months postimplant, 46.7% were New York Heart Association functional class II. Mean 6-minute walk test increased from 76.7 meters at baseline to 151.7 meters at 24 months (Table 3).
DISCUSSION
One of the biggest challenges with LVAD therapy is the AE burden, which greatly impacts overall perception and acceptance of mechanical circulatory support therapy in patients. Although the overall AE profile has greatly improved since earlier pulsatile devices, it remains high for this type of therapy. Despite advances in pump design and improved clinical management strategies, AEs still occur regularly in LVAD patients, with multifactorial risk factors. 12 According to the eighth Annual INTERMACS report, 60% of patients are rehospitalized at least once by 6 months postimplant, with 1-year rehospitalization rates as high as 80%. Traditionally, stroke and multisystem organ failure were associated with early risk of death, while DLI, RHF, and gastrointestinal bleeding were associated with recurrent hospitalizations. 13 Patient comorbidities and preexisting end-organ dysfunction have been postulated as leading to higher rates of AEs. 13,14 The current LATERAL trial reports favorable AE rates at 2 years (Table 1). Notably, those events that are typically problematic for LVAD patients, specifically stroke, severe RHF, and DLI, remained low at 2 years. Maltais et al 15 described temporal trends in AE profiles over time in the HeartWare ADVANCE BTT+CAP cohort, showing that the total number of AEs occurring within the first 30 days postimplant was significantly higher compared with those occurring between 30 and 180 days (30.36 versus 5.34 EPPY, P<0.0001). Even at 1 year, overall AE rates continued to decrease and were considerably lower compared with the prior 5 months of support (5.34 versus 4.09 EPPY, P<0.0001). This trend could be attributed to the overall improvement in LVAD patient management postimplant. A longitudinal, temporal review of the AE rates for this study reveals AE rates following a similar trend of greatest risk early on, then considerably lower risk over increasing time on support (Table 2). Both studies also highlight the need for greater understanding of the early (<6 months) postimplant AE rates and related risk profiles, and the potential clinical management strategies to significantly ameliorate them.
Neurological complications can create some of the most devastating outcomes post-VAD implant. Stroke rates have been described in previous HeartWare studies. 6,11,16 Understandably, multiple factors contribute to the risk of developing stroke postimplant, including history of stroke and atrial fibrillation, showered thrombi, postoperative infection, and both sub- and supra-therapeutic levels of anticoagulation. [17][18][19][20] Findings have consistently shown that the risk for stroke post-LVAD implantation is highest immediately following LVAD implantation. [17][18][19][20] There has been some difficulty comparing stroke rates between devices due to differences in patient populations as well as different INTERMACS stroke definitions. 21,22 INTERMACS version 4.0 definitions were used in the LATERAL trial and categorized strokes as either ischemic, hemorrhagic, or transient ischemic attacks. All types of stroke events carried a low event rate (Table 1), with 6.3% hemorrhagic (n=9, 0.06 EPPY), 7.6% ischemic (n=12, 0.07 EPPY), and 13.2% overall stroke (n=21, 0.13 EPPY) at 2 years. Despite the potentially devastating impact of stroke, LATERAL trial patients experienced 95% 2-year freedom from disabling stroke. These data corroborate prior temporal analyses of stroke rates from the ADVANCE BTT CAP cohort, which also showed a substantially declining risk after 6 months to 1 year. This risk continued to decline over time, specifically between 180 days and 3 years. 15 Multiple studies have described DLI in LVAD patients, with typical rates ranging from 10% to 25%. 23,24 In the INTERMACS database, continuous-flow LVADs had a DLI rate of 1.31 per 100 patient-months within the first 3 months postimplant (early) and 1.42 per 100 patient-months after 3 months (late). 11 DLI rates reported in the LATERAL trial remain low (11.8%, 0.15 EPPY) at 2 years, which is superior to rates reported in the earlier HeartWare ADVANCE BTT+CAP cohort (19.6%, 0.25 EPPY). 15 RHF presents a challenge for LVAD patient management, occurring in up to 30% of patients postimplant, and has been associated with high mortality rates. 24,25 Persistent RHF can be managed based on the acuity and severity of clinical symptoms. Medical management with inotropes may ameliorate less severe RHF; however, for acute severe RHF, an RV assist device may be needed. The rate of severe RHF requiring an RV assist device in the LATERAL trial at 2 years was 0.006 EPPY (n=1). A temporal review of severe RHF rates compared with BTT+CAP (0.00 versus 0.33 EPPY, respectively, in the first 30 days) suggests there may be some benefit in performing LVAD implantation via a less invasive thoracotomy approach. 15 Furthermore, overall bleeding events requiring rehospitalization were significantly less frequent in thoracotomy-implanted patients as compared with the BTT+CAP sternotomy patients. A temporal review of overall bleeding events reveals that the greatest impact on event rates was in the first 30 days (1.53 versus 5.21 EPPY, respectively). 15 The reduction in the incidence of early RHF and overall bleeding may be attributable to the thoracotomy approach. The lateral approach respects the geometry of the LV, and even more so the RV, as partial opening of the pericardium may allow for less leftward interventricular septal shift, thereby preserving RV geometry 24,25 and RV ejection and function.
Additionally, the thoracotomy approach may result in less surgical trauma and less overall surgical bleeding, reducing the need for blood transfusions and consequential RV dysfunction. With echocardiography confirmation, the lateral approach may also facilitate more precise inflow cannula placement since the heart is not lifted out of the cavity. Proper pump placement may reduce suction events and allow proper LV unloading, which may also help reduce strokes and avoid RV failure.
The analysis of overall QoL and functional capacity measures continues to reveal definite and sustained improvements. The Kansas City Cardiomyopathy Questionnaire, EQ-5D, and 6-minute walk test all improved significantly from baseline, with sustained improvements through 2 years. This is important when considering the extended times on support for LVAD patients, especially since the revised United Network for Organ Sharing changes will likely result in longer support durations for BTT patients.
Limitations
There are several limitations to this study. First, this was not a randomized trial comparing thoracotomy to median sternotomy. The original LATERAL trial was designed using a performance goal based on historical data to measure success. A more robust randomized trial might more clearly elucidate the differences and nuances in surgical approaches, including long-term AE profiles. Second, this trial was limited to a BTT population, which could be more representative of a younger patient cohort. Third, the surgeons participating in this study were experienced in the thoracotomy implant approach. For those with less experience with minimally invasive thoracotomy surgery, there may be a learning curve before perfecting the approach. Lateral thoracotomy may not be suitable for all patients, particularly those requiring concurrent valvular procedures; therefore, it is crucial that there is a well-thought-out surgical plan and appropriate patient selection before deciding whether thoracotomy is the best route for VAD implantation.
CONCLUSIONS
Long-term follow-up of patients in the LATERAL trial reveals encouraging rates of AEs through 2 years, in particular low DLI, RHF, and stroke rates. A temporal analysis confirmed that the greatest risk of AEs occurs in the first 30 days through 6 months, with stabilizing or decreasing rates thereafter throughout the 2 years of follow-up. These data establish that HVAD therapy can be used in patients for longer support, with improving AE profiles. Understanding the changing risk profiles may help to decrease AE rates through targeted surgical and patient management strategies, thus improving overall LVAD patient outcomes. | 2021-04-20T06:16:23.955Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "fa331d1632e9492d3b869ec52044809a35db1102",
"oa_license": "CCBYNCND",
"oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/CIRCHEARTFAILURE.120.006912",
"oa_status": "HYBRID",
"pdf_src": "WoltersKluwer",
"pdf_hash": "8891723ae696ab9b6bfda82b3162400e6dfb8296",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8430568 | pes2o/s2orc | v3-fos-license | Systemic fungal infection in a dog: a unique case in Ireland
A three-year-old male entire Staffordshire bull terrier was referred to University College Dublin Veterinary Hospital with a two-week history of fever, inflammation of the right hock, lameness of the right hindlimb, peripheral lymphadenopathy and gastrointestinal signs (vomiting and diarrhoea). For the preceding three months the dog had been treated for atopic dermatitis with oral ciclosporin (5 mg/kg, PO, q 24 hours). Cytological analysis of the affected lymph nodes demonstrated fungal-like organisms predominantly contained within macrophages. Subsequent fungal culture and microscopic identification confirmed the presence of a Byssochlamys sp. This fungus is a saprophytic organism which has been associated with mycotoxin production. It has not previously been identified as a cause of systemic infection in animals or humans. Ciclosporin was discontinued, and a second-generation triazole, voriconazole, was prescribed at a dose of 6 mg/kg for the first two doses and continued at 3 mg/kg every 12 hours for six months. There was an excellent response. Follow-up examination five weeks after treatment was completed confirmed remission of the disease. The dog remains alive and well three years later. The present case represents an unusual fungal infection in a dog secondary to immunosuppressive therapy with ciclosporin. Such a possibility should be considered in animals presenting with signs consistent with systemic infection when receiving immunosuppressive medication.
Background
The immunosuppressive effects of ciclosporin have the potential to result in secondary bacterial [1,2], fungal [3], or parasitic [4,5] infections or malignancy [6,7]. Recently a systemic fungal infection in a dog treated with immunosuppressive therapy with ciclosporin was described [8].
The present report is the first documented case of a systemic fungal infection with a Byssochlamys sp in a dog that had been receiving chronic immunosuppressive therapy with ciclosporin. Although associated with mycotoxin production, Byssochlamys sp has not previously been identified as a cause of systemic infection in animals or humans.
Case report
A three-year-old male entire Staffordshire bull terrier presented to the University College Dublin Veterinary Hospital with a two-week history of pyrexia, gastrointestinal signs (vomiting and diarrhoea), oedematous swelling of the right hind limb around the hock, and moderately enlarged right pre-scapular and right popliteal lymph nodes. All other peripheral lymph nodes were within normal limits. The referring veterinarian had initiated therapy with oral cephalexin, which had not resulted in any significant improvement.
Three months prior to presentation, the dog had been treated for suspected atopic dermatitis with immunosuppressive therapy (ciclosporin 5 mg/kg, q 24 hours with prednisolone at 1 mg/kg, q 24 hours). At the time of presentation the dog was still receiving daily ciclosporin. The prednisolone had been discontinued one month prior to the onset of clinical signs. Ciclosporin was discontinued the day of admission to the hospital.
On physical examination the dog was lethargic and pyrexic (40.1°C). The right pre-scapular and right popliteal lymph nodes were palpably and moderately enlarged. The dog was non-weight-bearing on the right hindlimb, and oedematous swelling of the right hock was detected without evident joint effusion. No abnormalities were noted on palpation of the abdomen or on thoracic auscultation.
Radiographs of the right hock and thoracic spine showed focal areas of osteolysis and new bone formation within the dorsal arch of the axis, and in the distal 1 cm of the tibial diaphysis, distal fibula and the plantarodistal aspects of the body of the calcaneus (Figures 1 and 2). These findings were suggestive of a neoplastic or infectious process.
The spleen and the iliac lymph nodes were mildly enlarged but had normal echogenicity on the abdominal ultrasonographic examination. Ultrasound guided fine needle aspirations (FNAs) were taken from the spleen and iliac lymph nodes. On cytological examination there was a non-septic neutrophilic inflammation, with no signs of malignancy.
FNAs of the oedematous area affecting the right hock were attempted, but the cytology was non-diagnostic given the poor cell yield. Samples of the right carpotarsal joint were not taken pending FNA results from the enlarged lymph nodes (right prescapular and popliteal).
Lymph node smears showed moderate plasma cell hyperplasia and mild pyogranulomatous inflammation in association with fungal hyphae. Smears were highly cellular in a light background of fresh blood. Nucleated cells were predominantly lymphocytes, most of which were smaller than a neutrophil and had only scant cytoplasm. There were frequent plasma cells with prominent cytoplasmic basophilia and perinuclear clearing zones, occasionally binucleate. Rarely, fungal hyphae were seen, associated with increased numbers of mildly degenerate neutrophils and with macrophages. The hyphae were long, linear structures, up to 50 µm in length and 3 to 4 µm in width, without significant branching. Most of the hyphae were unstained, although the central one-third had dark, mixed, internal staining. Occasionally, septae were seen, with lengths of 10-15 µm. No bacteria were seen. There were occasional solitary mast cells. These findings were suspicious for mycosis (Figure 3). Aspirates of the affected lymph node were submitted for bacterial and fungal culture.
Ultrasound-guided biopsies, using a semi-automatic Bard® Magnum reusable tru-cut core biopsy system, were taken from the right pre-scapular lymph node and were submitted for histopathology and bacterial and fungal culture. Histopathology depicted a reactive lymph node with no signs of malignancy or fungal elements.
A jugular blood sample (10 milliliters), FNA from the right prescapular and popliteal lymph nodes, and a tissue sample of the right pre-scapular lymph node were submitted for further investigations. The blood sample was inoculated into a blood culture system (Oxoid, Basingstoke, U.K.) and incubated at 37°C for 7 days. The FNA and tissue samples were routinely cultured using Columbia Blood Agar (Oxoid, Basingstoke, U.K.) enriched with 5% sheep's blood (Cruinn Diagnostics, Ireland), Columbia Blood Agar supplemented with colistin-nalidixic acid (Oxoid, Basingstoke, U.K.) and MacConkey agar number 2 (Oxoid, Basingstoke, U.K.). Plates were incubated under aerobic and anaerobic conditions for up to 36 hours in the event of no visible growth at 18 hours. Sabouraud Dextrose Agar plates (Oxoid, Basingstoke, U.K.) were also inoculated and incubated at 25 and 37°C for 5 days for the detection of pathogenic fungi or yeast. Direct Gram and Methylene Blue stains of the FNA and tissue samples were carried out. No organisms were observed on the prepared slides.
Bacterial cultures were examined for growth after 18 and 36 hours and a negative result was recorded. A negative result was also recorded for the blood culture after 7 days. The fungal cultures were examined after 5 days. The FNA fungal cultures were negative; however, two fungal colonies were isolated from the prescapular lymph node tissue sample (Figure 4).
A smear of this isolate was prepared and stained with Methylene Blue, and hyphal structures were observed under the microscope. The sample was sub-cultured and sent to the Mycology Reference Laboratory (Myrtle Road, Bristol, U.K.) for identification. The isolate was identified as a probable Byssochlamys sp. based upon colonial appearance and microscopic morphology.
After 10 days of hospitalization there was no clinical improvement. Voriconazole (Vfend, Pfizer) at 6 mg/kg, PO, for the first 2 doses, followed by 3 mg/kg, PO, q 12 hours, was introduced. Itraconazole and clindamycin were discontinued at that time. Two episodes of vomiting were noted at the initiation of therapy with voriconazole, after which no other side effects were noted.
The dog's condition dramatically improved over the first 7 days on voriconazole. The body temperature normalized, the lameness improved, and the swelling of the right hock diminished.
Antifungal treatment with voriconazole was continued for a total of 6 months. The progression of the disease was followed up monthly by repeating FNAs with cytology and culture of the affected lymph nodes. The swelling of the right hock had completely resolved 26 days after voriconazole was started. Fungal organisms were detected on cytology up until the 4th month of treatment; all subsequent samples yielded negative fungal culture results. Six months after therapy was started, the cytology of two consecutive FNAs (spaced one month apart) was also free of fungi, and the treatment was discontinued. The dog is still alive at the time of writing (3 years after diagnosis) and free of clinical signs.
Discussion
Systemic fungal diseases cause significant morbidity and mortality in dogs and cats. The species involved are usually opportunistic and frequently affect immunocompromised animals. Infections disseminate from a single portal of entry, either by inhalation, direct wound contamination or ingestion [9,10]. In the current case the portal of entry was most likely oral as there was no history or obvious evidence of a penetrating wound or respiratory involvement, however a percutaneous route of entrance with secondary haematogenous spread could not be excluded [9].
Ciclosporin has traditionally been used in humans, cats and dogs undergoing transplantation surgery. Other recent uses include medical therapy for anal furunculosis, autoimmune diseases and treatment of atopic dermatitis in the dog [11,12]. In Ireland, ciclosporin is only licensed for treatment of chronic manifestations of atopic dermatitis in dogs. There are sporadic reports of dogs and cats developing secondary fungal, bacterial or parasitic infections after immunosuppressive treatment with ciclosporin [1][2][3][4][5]. Its immunosuppressive effects can also result in secondary malignancy [6,7]. Serum ciclosporin concentration is considered the best method currently available to assess adequacy of treatment; however, it is normally reserved for patients that fail to respond to standard treatment [12]. Although it may have been interesting to measure circulating ciclosporin concentrations, this was not considered necessary as the drug was discontinued on admission. However, it is likely that the immunosuppressive therapy with ciclosporin played a major role in the development of the fungal infection in the dog in this report.
The case findings were reported to the manufacturer of ciclosporin (Novartis), the Veterinary Medicines Directorate (VMD) and the Irish Medicines Board (IMB). Following the authors' query, the latter two organizations reported that a limited number of localized bacterial and fungal infections in animals being treated with ciclosporin existed in their respective databases. The VMD had recorded a case of toxoplasmosis in one cat and both FIV and FeLV in another cat following ciclosporin treatment. Other side-effects reported to the IMB included diabetes mellitus, emesis, pancreatitis, hypersensitivities, lymphadenopathy and limb weakness. Gingival hyperplasia and papillomas have also been reported [11,12]. As with most other immunosuppressive agents, the manufacturer states that ciclosporin may increase susceptibility to secondary infections. However, to date the VMD and IMB have not received any reports of systemic bacterial or fungal infections following use of ciclosporin, despite the existence of a case report in a dog in the UK [8]. The dog of that report had similarly been receiving immunosuppressive therapy with oral ciclosporin [8]. That dog was initially treated with a combination of terbinafine and itraconazole, but despite this treatment the clinical signs persisted, and a second-generation triazole, voriconazole, was started in combination with terbinafine. This therapy was continued for a year and since then no recurrence of any clinical signs has been reported. The long-term dose of voriconazole applied in the present case (3 mg/kg every 12 hours) was slightly below that recommended in the literature (4 mg/kg every 12 hours) [13]. The lower dose was chosen because of the limited sizes of the tablets available (50 or 200 mg) and a reluctance to split them to achieve a dose of 4 mg/kg. Additionally, because of financial constraints, only a 6-month course was administered. Despite this, complete resolution was documented with serial fine needle aspirates from the affected lymph node.
There are a few limitations in the present study. The exact fungal species was not identified. However given the knowledge of the genus involved further identification would not have altered the therapeutic decisions made. Susceptibility studies were not performed but given the excellent response to empirical treatment with voriconazole, susceptibility is assumed. On the other hand, itraconazole appeared to have a limited effect. Whether this can be translated to all fungi within the genus is not clear. The length of time therapy should be continued is unclear and few definitive recommendations exist in the literature. In the current case, therapy was continued until two consecutive FNAs, spaced one month apart, were negative and this methodology appeared to provide an excellent result. Interestingly fungal culture results were consistently negative whilst on treatment. Although this is a plausible scenario, it does emphasize that reliance should not be placed solely on culture results to guide therapeutic efficacy.
It would have been interesting to see if the magnitude of hyperglobulinaemia decreased after treatment, but given the clinical improvement of the dog, the excellent response to the treatment, and the owner's financial constraints, these biochemical changes were not re-evaluated.
Proteinuria was detected in the current case during the initial investigations. Whilst this could be due to post-renal disease, bacterial urine culture was negative. Given the magnitude of the proteinuria, the clinical history and the clinicopathological findings, functional renal proteinuria was considered a possibility [14]. However, a previous report stated that physiological proteinuria as a result of fever is typically lower than in the present case (UPCR < 0.5) [15]. On the other hand, although fungal elements were not observed on examination of the urine, specific fungal urine culture was not carried out, and consequently the possibility of renal fungal infection cannot be completely excluded. Unfortunately, further investigation of the proteinuria by repeated urine protein:creatinine ratios was not performed.
The main differential diagnoses for the multifocal osteolytic changes observed included neoplastic, metabolic/endocrine or infectious disease processes. Given the results of protein electrophoresis, multiple myeloma was effectively excluded. Metabolic/endocrine causes were considered unlikely given the lack of other supportive clinical signs or clinicopathological abnormalities. Infectious causes were therefore considered to be a priority. The oedema of the right hock was most likely due to an increase in vascular permeability (due to either infectious or inflammatory processes); other differentials such as venous obstruction or compression were considered less likely after the clinical examination. Initially, given that fungal infections are not endemic in Ireland, greater focus was placed on possible bacterial infection. The results of the lymph node FNA were, however, highly suggestive of fungal infection, prompting the performance of specific fungal cultures.

Figure 4. Two fungal colonies growing on Sabouraud dextrose agar. The colony morphology at 25°C was small, white and cotton-like; at 37°C the colonies were waxy, yeast-like and cream in colour. Under the microscope, fungal hyphae were seen with some oval conidia.
Conclusion
This report provides an example of a systemic fungal infection in a dog from Ireland receiving immunosuppressive ciclosporin treatment. There was a successful response to voriconazole administration. Systemic fungal infection should always be considered as a potential complication of immunosuppressive therapy with ciclosporin even in areas where fungal infections are not usually considered endemic. Reporting adverse effects of any drug to the relevant national bodies is necessary in order to ensure adequate and accurate information is amassed. | 2016-05-04T20:20:58.661Z | 2014-08-06T00:00:00.000 | {
"year": 2014,
"sha1": "37a27a20bcad1ad33b63253eae6f36c90e763a29",
"oa_license": "CCBY",
"oa_url": "https://irishvetjournal.biomedcentral.com/track/pdf/10.1186/2046-0481-67-17",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c12c4cca1c4d6a823dde3eaa5abd770911d54e6e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246819891 | pes2o/s2orc | v3-fos-license | Knowledge and practice about prevention on Hepatitis-B virus infection among the student nurses
The silent killer Hepatitis-B is a major threat to public health throughout the world and a well-recognized occupational risk for healthcare workers. Good knowledge and practice regarding hepatitis-B virus (HBV) infection prevention are crucial for HBV infection control. In Bangladesh, few studies have been conducted on student nurses' level of knowledge and practice about HBV infection prevention. The purpose of this study was to assess the knowledge and practice level of student nurses about HBV infection prevention in Bangladesh. A cross-sectional descriptive study was conducted in three nursing colleges at Sylhet in Bangladesh. A pre-tested self-administered structured questionnaire with an observation checklist was constructed and implemented to assess knowledge and practice about HBV infection prevention. A total of 150 student nurses from three nursing colleges participated in this study. Data were analyzed using descriptive statistics, and the Chi-square test was used to determine relationships between categorical variables. The results show that most of the respondents (83.3%) were female and the mean age was 20±8.72 years. The level of knowledge was good (81.07%) and the practice level was satisfactory (72.22%) for HBV infection prevention related activities. Most of the respondents had accurate concepts about how HBV infection can be prevented: 96% cited vaccination, 83.33% safe sex, 92.63% use of disposable syringes, 84.67% wearing gloves during patient care, and 78% promoting public awareness. The study findings conclude that the majority of the respondents had a good knowledge level and a satisfactory practice level in HBV infection prevention. However, not all of those with good knowledge carried out good practices related to HBV infection prevention in their working place. Authorities should ensure vaccination status and periodic training programs to maintain a continued good level of knowledge and practice for the prevention of HBV infection.
Introduction
The silent killer Hepatitis-B is a major threat to public health throughout the world (WHO, 2020) and a well-recognized occupational risk for healthcare workers (Mehriban et al., 2016; Akazong et al., 2020). Hepatitis-B is an inflammatory disease of the liver caused by the hepatitis B DNA virus, which is transmitted through percutaneous or mucosal exposure to infected blood or body fluids (CDC, 2021). HBV infection is confirmed by a laboratory test focusing on the detection of hepatitis-B surface antigen, HBsAg (WHO, 2020). It can lead to lifelong chronic infection, resulting in cirrhosis of the liver, liver cancer, liver failure and death (Lim et al., 2020; CDC, 2021). Chronically infected HBV carriers are able to transmit HBV through contact with their body fluids, which includes occupational exposure to their blood secretions and sexual intercourse (WHO, 2020). People at risk include healthcare workers (HCWs) in contact with blood and human secretions, haemodialysis staff, oncology and chemotherapy nurses, and all personnel at risk of needlestick and sharps injuries, which includes those working in operating rooms and clinical laboratories, respiratory therapists, surgeons, doctors, dentists, as well as medical, dental, health technology and nursing students (Perez-Diaz et al., 2015; Demsiss et al., 2018). Hepatitis-B infection is a dreaded disease; its prevalence varies from country to country and depends upon a complex mix of behavioral, environmental and host factors. Bangladesh and the Indian sub-continent as a whole, together with the Middle East, North Africa and the former Soviet Union, belong to the intermediate prevalence region of HBV infection (Al-Mahtab, 2015; Hasan et al., 2017; Choudhuri et al., 2019). Based on the literature, there is no specific treatment for acute Hepatitis-B (WHO, 2020). Prevention is the only safeguard against an epidemic of viral hepatitis. Knowing the facts and having proper attitudes and behaviors are critical to prevent the spread of this infection (Balegha et al., 2021; Akazong et al., 2020). To make prevention more effective, we need to assess gaps in health education (Hasan et al., 2017). Such information will serve as a guide for the development of information, education and communication activities for the prevention and control of Hepatitis-B. The vaccine against Hepatitis-B is 95% to 98.8% effective in preventing HBV infection and its chronic complications (Chang et al., 2015; Hossain et al., 2018), while chronic HBV infection can be treated with medication, including anti-viral oral drugs (WHO, 2020). According to global statistics, nearly two billion people have been infected with the Hepatitis-B virus and about 391 million, or 5% of the world's population, live with chronic HBV infection (Collaborators, 2018). Each year 30 million people become newly infected with HBV globally (Hepatitis-B Foundation, 2021). Current global chronic HBV prevalence is estimated at 3.5% to 5.6% across all ages (Schmit et al., 2021). An estimated 887,000 people die due to the consequences of HBV infection, from cirrhosis and liver cancer (WHO, 2020). In Bangladesh, the prevalence of Hepatitis-B has been estimated at 5.5% (Health Bulletin 2019, 2020, p.84). Based on studies in our country, the prevalence in risk groups has been noted as 7-7.5% among injection drug users, 7.96% among health workers, 6.5% among thalassemic patients, and 3.84% among tea gardeners (Al-Mahtab et al., 2017; Uz-zaman et al., 2018). The prevalence of HBV infection among HCWs is 2-10 times higher than in the general population globally (Abdela et al., 2016).
A previous study in Ethiopia found that the prevalence of HBsAg among medical, nursing, and health sciences students was 4.2% (Demsiss et al., 2018). Reviews have found that HCWs, especially nurses, are at high risk from occupational blood-borne pathogens (Perez-Diaz et al., 2015; Abdela et al., 2016). Nursing students are particularly at risk of HBV because they are in direct contact with patients during management and nursing care in clinical settings. A study from Turkey found that 35.5% of nursing students had experienced a needlestick injury and 66% had been injured by ampoules during clinical practice training; notably, in 20% of these injuries the students had been in contact with patients' blood or body fluids (Karadag, 2010). The nursing profession is a fundamental part of the healthcare team. Student nurses are pupils of the nursing profession and will lead the future generation of healthcare services, transitioning from student nurse to staff nurse in tomorrow's nursing world. Proper knowledge and practice are essential for preventing the spread of infection and for safety precautions. Global studies have found nursing students' preventive knowledge, attitude, and practice on HBV to be good (Reang et al., 2015; Demsiss et al., 2018; Gebremeskel et al., 2020), positive (Abdela et al., 2016; Nalii et al., 2017), and satisfactory (Modawi et al., 2020; Gebremeskel et al., 2020), respectively, although other studies have found a low level of knowledge (Modawi et al., 2020), poor attitude (Modawi et al., 2020), and a poor level of practice (Reang et al., 2015; Abdela et al., 2016; Demsiss et al., 2018). In Bangladesh, previous studies have addressed registered nurses' preventive knowledge and practice regarding Hepatitis-B among those working at different levels of public and private hospitals in Dhaka (Mehriban et al., 2015; Khan et al., 2017). However, little is known about student nurses' knowledge and practice on prevention of HBV infection during their academic period in Bangladesh. Therefore, this study was warranted to explore the current level of knowledge and practice for prevention of HBV infection. The findings will help nurses gain the knowledge, attitude, and precautions needed to avoid becoming infected with HBV and other dreadful diseases.
Materials and methods
Study design
This was a descriptive cross-sectional study.
Study population
The study population comprised nursing students from nursing colleges in Sylhet.
Study places
This study was carried out in three nursing colleges in Sylhet District, namely Sylhet Nursing College (attached to Sylhet MAG Osmani Medical College Hospital), Begum Rabeya Khatun Chowdhury (BRKC) Nursing College (attached to Jalalabad Ragib Rabeya Medical College Hospital), and North-East Nursing College (attached to North-East Medical College Hospital), Sylhet. The three nursing colleges together enrolled approximately 420 students.
Study period
The total duration of the study was from January 2012 to June 2012.
Sample size
A sample size of 150 was calculated based on an assumed prevalence of 50% (used when the true prevalence is unknown) with an absolute error of 8%.
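For transparency, this calculation follows the standard single-proportion formula. Below is a minimal sketch, assuming the conventional 95% confidence level (Z = 1.96), which the text does not state explicitly:

```python
def single_proportion_n(p: float, d: float, z: float = 1.96) -> float:
    """Single-proportion sample size: n = Z^2 * p * (1 - p) / d^2."""
    return (z ** 2) * p * (1 - p) / (d ** 2)

# Assumed prevalence p = 0.5, absolute error d = 0.08, 95% confidence (Z = 1.96):
# n = 1.96^2 * 0.25 / 0.0064 = 0.9604 / 0.0064 ≈ 150.06, i.e. ~150 participants.
n = single_proportion_n(p=0.5, d=0.08)
print(round(n))  # -> 150, matching the reported sample size
```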
Sampling technique
Student nurses were selected from the three nursing colleges using proportional allocation; within each college, participants were chosen by simple random sampling while maintaining the inclusion and exclusion criteria.
Tool of the study
A pre-tested, self-administered questionnaire was distributed to collect information. Section A assessed the socio-demographic characteristics of the respondents with 10 questions covering age, sex, religion, education, name of nursing college, family income, family history of HBV infection, and vaccination status. Section B contained 11 questions on knowledge of HBV infection prevention, viz. sources, types, mode of transmission, clinical features, at-risk persons, investigations, complications, and ways of prevention; these questions allowed multiple responses. The knowledge score was categorized into three levels: poor (<20), satisfactory (20-40), and good (>40). In addition, an observational checklist was used by the Principal Investigator (PI) to evaluate the practice level of student nurses during clinical practice training.
Data analysis
After completion of data collection, the data were checked thoroughly and cleaned, followed by editing, coding, and categorizing to detect errors or omissions and to maintain consistency and validity. The data were entered into a computer with the statistical software package SPSS for Windows, version 16, for analysis and interpretation. Both descriptive and bivariate analyses were done. Values are expressed as frequencies and percentages. The non-parametric Pearson Chi-square (χ2) test was carried out to explore the relationship between Hepatitis-B knowledge/practice and the socio-demographic status of the respondents. A p value <0.05 was considered statistically significant.
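For readers reproducing the bivariate analysis outside SPSS, a Chi-square test of this kind can be run, for example, with SciPy. The contingency counts in this sketch are hypothetical placeholders, not the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: education level vs. observed practice level.
# These counts are illustrative placeholders, not the study's data.
observed = [[48, 14],   # Diploma students: satisfactory / unsatisfactory
            [17, 11]]   # BSc students:     satisfactory / unsatisfactory

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
# p < 0.05 would indicate a significant association between the variables.
```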
Ethical procedures
The study protocol was approved by the Ethical Review Committee of the National Institute of Preventive and Social Medicine (NIPSOM), Dhaka. Before data collection, written permission was obtained from the administrative heads of the selected nursing colleges and hospitals. Participation in this study was voluntary and informed consent was obtained. The participants were briefed about the aims and benefits of the study. They were also assured that their data would be kept confidential and that they had the right to withdraw at any time without giving any reason.
Results
Table 1 shows that most (68.7%) of the respondents' education level was the Diploma in Nursing Program. The majority (83.3%) of the respondents were female. Nearly half (49.7%) of the respondents belonged to the 20-30 years age group. A major portion (75.3%) of the respondents were Muslim. More than half (54.7%) of the respondents were from Sylhet Nursing College. Above one third (37.3%) had an income below BDT 10000, while one fourth (24.7%) and one fifth (21%) of respondents had BDT 10001-20000 and BDT 20001-30000, respectively. Most (80.0%) of the respondents had no family history of HBV infection. Figure 1 shows that more than two thirds (67.30%) of the respondents were not vaccinated, 12% were completely vaccinated, and 21.70% were partially vaccinated against HBV. Table 2 shows that, regarding types of hepatitis, most (92%) of the respondents answered that Hepatitis-B is prevalent in our country. The majority (90.67%) answered that Hepatitis-B is a serious type of hepatitis. Most (93.33%) of the respondents knew that blood-borne transmission and sexually transmitted disease (STD) (89.33%) are sources of HBV infection. Besides, most (96%) of the respondents answered that the modes of transmission of the HB virus are blood transfusion, sexual intercourse (89.33%), use of contaminated syringes and needles (88%), and the transplacental route (83.33%). Nurses were identified as the group at highest risk (89.33%) of HBV infection, followed by doctors (78.67%), injecting drug users (82.67%), and sex workers (85.33%). Regarding symptoms of HBV infection, most of the respondents answered yellow coloration of the sclera (92%), followed by yellow coloration of urine (92.67%), weakness (90%), and anorexia (84.67%). The majority (95.33%) of the respondents knew that complications of HBV infection include cirrhosis of the liver, loss of immunity (88.67%), and liver cancer (70%). A large portion (86.0%) of the students answered that HBsAg is the investigation for HBV infection. Most (96%) of the respondents knew that vaccination is the way to prevent transmission of HBV infection; they also answered use of disposable syringes and needles (92.67%), blood transfusion after screening (88.67%), use of gloves (84.67%), and safe sexual relationships (83.33%). The majority of the students (91.90%) always strictly checked blood before transfusion, and most (96%) of them discarded HBV-infected syringes and needles. Use of sterile instruments can prevent HBV infection in healthcare settings: most of the respondents answered use in operative activities (99.33%), followed by dressing (98%), and catheterization (91.33%). Most of the students answered wearing gloves during patient delivery (98%), dressing (94%), and patient care (91.33%).
Most of the students reported that during duty hours they showed sympathy and cooperation toward HBV-infected patients (93.33%), maintained isolation (91.33%), protected themselves from needlestick injury (94%), and used gloves during nursing care (84.67%). The majority of students reported advising HBV-infected patients to take appropriate treatment (93.33%), use condoms during sexual intercourse (96%), and use separate personal items such as brushes and razors (87.33%). To promote public awareness, most of the respondents suggested regular advertisement in the media (98%), awareness among professional groups (94%) and student groups (91%), and strengthening community health services (87.33%). Table 3 shows an overall composite knowledge score of 81.33% on prevention of HBV infection. According to the study findings, the practice of 90 of the 150 respondents was evaluated in the workplace. All 90 students (100%) performed hand washing and used disposable syringes. Most respondents (93.33% to 73.80%) practiced care during blood transfusion, wore gloves, used sterile instruments, and discarded medical waste in the proper way. However, unexpectedly, fewer than half (42.22%) of the respondents practiced wearing PPE (mask, cap, gown) during delivery and while assisting in the operating room. Table 4 shows that most (81.07%) of the respondents had a good level of knowledge about prevention of HBV infection. The PI observed the daily practice of infection-prevention activities among student nurses in the workplace/training place: the majority (72.22%) of respondents performed at a satisfactory level of practice for prevention of HBV infection. Table 5 shows that practice on prevention of HBV infection had a significant positive relationship with education (p<.005); however, there was no significant relationship between practice of transmission prevention and the gender or age of the respondents (p>.05). Table 6 shows that no statistically significant association was found between knowledge and practice regarding prevention of HBV infection (p>.05).
Discussion
This cross-sectional study was carried out among 150 student nurses to assess the current level of knowledge and practice regarding prevention of HBV infection among students attending selected nursing colleges and hospitals. In this study, we found that nearly half of the participants were within the 20-30 years age group, with a mean age of 24 years (SD ± 6.150), and nearly one third were below 20 years. These findings contrast with other international studies: an Arabian study by Modawi et al. (2020) found that 81.7% of respondents were aged 21-22 years, and a Nepalese study (Paudel et al., 2012) found 60.5% were in the 18-20 years group. The majority (83.3%) of respondents were female, which is similar to an Indian study in which nearly 80% were female (Reang et al., 2015). The majority (75.3%) of the respondents were Muslim, the opposite of a previous study where 87.6% were Hindu (Paudel et al., 2012). Most of the respondents' (68.7%) education level was the Diploma in Nursing Program, similar to previous studies where most (95.8%) were in a Diploma Nursing program (Reang et al., 2015). It was observed that 37.3% had an income below BDT 10000, while 24.7% and 21% had BDT 10001-20000 and BDT 20001-30000, respectively. These findings were broadly similar to an Indian study that found 24.7% had a monthly family income of 10000-15000 Rs (Reang et al., 2015) and a Nepalese study that noted 35.3% had a monthly family income <10000 Rs (Paudel et al., 2012). The majority (80.0%) of the respondents had no family history of HBV infection; no comparable findings exist for this kind of information. In this study, a large portion (67.30%) of the respondents were not vaccinated, 11.90% were completely vaccinated, and 21.70% were partially vaccinated. This differs from a previous Indian study in Meerut, where 40% of nursing students were not vaccinated, 41% were fully vaccinated, and 19% were partially vaccinated (Anand et al., 2020). Regarding types of hepatitis, most (92%) of the respondents answered that Hepatitis-B is prevalent in our country, which is consistent with a study in Agartala City, India, that found 99.7% were aware of the HB virus (Reang et al., 2015). The majority (90.67%) answered that Hepatitis-B is a serious type of hepatitis; hepatitis B is a serious form of hepatitis caused by a virus, and 92.2% to 94.9% of respondents in other studies could name the causative agent of Hepatitis-B (Paudel et al., 2012; Anand et al., 2020). Most of the respondents knew that blood-borne transmission (93.33%) and STD (89.33%) are sources of HBV infection. Besides, the majority answered that the modes of transmission of the HB virus are blood transfusion (96%), sexual intercourse (89.33%), use of contaminated syringes and needles (88%), and the trans-placental route (83.33%). This is similar to an Indian study that found 91% citing vertical transmission, 83.2% needlestick injury, and 63.1% unsafe sex (Reang et al., 2015), and a Nepalese study finding 97.7% citing infected blood transfusion (Paudel et al., 2012). Nurses were identified as the group at highest risk (89.33%) of HBV infection, followed by doctors (78.67%), injecting drug users (82.67%), and sex workers (85.33%); similarly, Reang et al. (2015) found that 80.2% answered that doctors, nurses, and lab technologists are a high-risk group for HBV infection. Regarding symptoms of HBV infection, most of the respondents answered yellow coloration of the sclera (92%), followed by yellow coloration of urine (92.67%), weakness (90%), and anorexia (84.67%).
These findings are comparable with a study in Nepal that found 82.2% citing yellow discoloration of the eye and 71.7% anorexia (Paudel et al., 2012), and with a study in Saudi Arabia that found 58.3% citing jaundice and dark urine (Modawi et al., 2020). The majority of the respondents knew that complications of HBV infection include cirrhosis of the liver (95.33%), loss of immunity (88.67%), and liver cancer (70%). This varies from previous studies in which 60% (Modawi et al., 2020) to 73.0% of respondents (Paudel et al., 2012) stated cirrhosis of the liver and hepatic cancer as complications. Most (86.0%) of the students answered that HBsAg is the investigation for HBV infection; this contrasts with an Arabian study where 58.3% knew that a blood test is done for HBV infection (Modawi et al., 2020) and an Indian study where 53.5% knew this (Anand et al., 2020). Most (96%) of the respondents knew that vaccination is the way to prevent transmission of HBV infection; they also answered use of disposable syringes and needles (92.67%), blood transfusion after screening (88.67%), use of gloves (84.67%), and safe sexual relationships (83.33%). These findings match previous studies in which 96.6% to 100% responded that HBV infection is prevented by vaccine, 89% by avoiding multiple sex partners, 94% to 100% by using sterile syringes and needles, and 94% by using sterile gloves during injections or blood draws (Paudel et al., 2012; Reang et al., 2015; Mahore et al., 2015). Most (91.90%) of the students always strictly checked blood before transfusion, and most (96%) of them discarded HBV-infected syringes and needles. This is similar to a study in Agartala, India, where 83.1% of nursing students discarded needles and syringes after use in a safe puncture-proof container (Reang et al., 2015). Use of sterile instruments can prevent HBV infection in healthcare settings: most of the respondents answered use in operative activities (99.33%), followed by dressing (98%), patient delivery (97.33%), and catheterization (91.33%). This is similar to an Indian study in which most (93.5%) nursing students reported using sterile equipment for prevention of HBV infection. Most of the student nurses answered wearing gloves during patient delivery (98%), dressing (94%), and patient care (91.33%). This varies from previous studies: in India, 94% used sterile gloves during injections or blood draws (Reang et al., 2015), whereas in Bangladesh, 73% used gloves in hospital settings (Mehriban et al., 2014). Most of the students reported that during duty hours they showed sympathy and cooperation toward HBV-infected patients (93.33%), maintained isolation (91.33%), protected themselves from needlestick injury (94%), and used gloves during nursing care (84.67%). Most studies have found that nursing students face accidental injuries from sharp instruments such as needles and blades, and blood exposure, with a prevalence varying from 40.7% to 53.4% (Reang et al., 2015; Anand et al., 2020); our finding matched previous studies (Paudel et al., 2012; Reang et al., 2015; Anand et al., 2020). The majority of the student nurses reported advising HBV-infected patients to take appropriate treatment (93.33%), use condoms during sexual intercourse (96%), and use separate personal items such as brushes and razors (87.33%). A study from an Ethiopian university, including health sciences and nursing students, stated that 76.5% responded that HBV is treatable (Gebremeskel et al., 2020).
A study from Nepal found 69.9% responding that sharing razors and toothbrushes should be avoided (Paudel et al., 2012). On the topic of condom use, it was observed in Bangladesh that only 42% mentioned use of condoms during sexual intercourse as a preventive practice for Hepatitis-B (Mehriban et al., 2015). To promote public awareness, most of the respondents suggested regular advertisement in the media (98%), awareness among professional groups (94%) and student groups (91%), and strengthening community health services (87.33%); this finding did not match other international studies. In our study, most (81.07%) of the respondents had a good level of knowledge about prevention of HBV infection, similar to a Nepalese study where 85.2% of nursing students had a high level of knowledge on prevention of Hepatitis-B (Paudel et al., 2012). We also observed that the majority (72.22%) of respondents performed at a satisfactory level of practice for prevention of HBV infection. This contradicts other global studies that found most students' practices were unsatisfactory (Reang et al., 2015) or at a poor level for prevention of HBV infection (Gebremeskel et al., 2020). Practice on prevention of HBV infection had a significant positive relationship with education (p<.005), similar to Paudel's study where education level was significantly correlated with the level of knowledge on Hepatitis-B at p<.001 (Paudel et al., 2012). However, there was no significant relationship between practice of transmission prevention and the gender or age of the respondents (p>.05), in line with a previous study (Balegha et al., 2017). In this study, no statistically significant association was found between knowledge and practice regarding prevention of HBV infection (p>.05). This is contrary to a study in Dhaka that found a positive significant association between nurses' level of knowledge and level of preventive practice regarding HBV infection at p<.001 (Mehriban et al., 2015).
Conclusions
The overall knowledge of HBV infection was at a good level among study participants. Moreover, the majority of their practices were performed at a satisfactory level for prevention of HBV infection, although no statistically significant association was found between the level of knowledge and the level of practice regarding prevention of HBV infection. Hospital authorities should arrange programs to ensure vaccination coverage of student nurses. In addition, regular training and seminars should be arranged to increase and maintain a continued good level of knowledge and practice for prevention of HBV infection, for the safety of both patients and nurses.
"year": 2021,
"sha1": "c254ed08acffa556a87070cb2b3b272632dcb61c",
"oa_license": "CCBY",
"oa_url": "https://www.banglajol.info/index.php/AJMBR/article/download/57611/40386",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a2c1ec19d0348ae82ca8e7b99f822cc8f9e6c9bc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Enzyme-Linked Aptamer Assay (ELAA) for Detection of Toxoplasma ROP18 Protein in Human Serum
Toxoplasma gondii engenders the common parasitic disease toxoplasmosis in almost all warm-blooded animals. As a critical secretory protein, ROP18 is a major virulence factor of Toxoplasma. There are no reports of ROP18 detection in human serum samples from patients with different clinical manifestations. New aptamers against the ROP18 protein were developed through Systematic Evolution of Ligands by EXponential enrichment (SELEX). An Enzyme-Linked Aptamer Assay (ELAA) platform was developed using the SELEX-derived aptamers AP001 and AP002. The ELAA was used to evaluate total antigen from the T. gondii RH strain (RH Ag) and recombinant ROP18 protein (rROP18). The results showed that the ELAA presented higher affinity and specificity toward RH Ag and rROP18 compared to negative controls. The detection limit of rROP18 protein in serum samples, measured by the standard addition method, reached a concentration as low as 1.56 μg/mL. Moreover, 62 seropositive samples with different clinical manifestations of toxoplasmosis and 20 seronegative samples were tested. A significant association was found between a positive ELAA test for human serum samples and severe congenital toxoplasmosis (p = 0.006). The development and testing of aptamer-based assays opens a window for low-cost and rapid biomarker tests and improves our understanding of the role of the ROP18 protein in the pathogenesis of human toxoplasmosis.
INTRODUCTION
Toxoplasma gondii (T. gondii) is an intracellular parasite with cosmopolitan distribution that infects the majority of warm-blooded animals (Jones and Dubey, 2012). Nearly one third (∼25%) of the world's human population may be chronically infected with T. gondii (Pappas et al., 2009). Infection in humans can cause severe ocular, neurologic, and sometimes systemic disease, especially in immunocompromised and congenitally infected individuals (Cardona et al., 2011;Pfaff et al., 2014). Transmission of the parasite has been demonstrated in humans by the consumption of meat, vegetables and contaminated water (Lora-Suárez et al., 2007;Franco-Hernandez et al., 2016;Triviño-Valencia et al., 2016). For all these reasons, Food and Agriculture Organization (FAO) and World Health Organization (WHO) declared toxoplasmosis as a foodborne parasite infection disease of global concern (Robertson et al., 2013).
Globally, the serological prevalence of toxoplasmosis is highly variable, ranging from 10 to 15% in the United States, to >60% in South and Central America (Gilbert et al., 2008). Additionally, it has been reported that South America is the continent with the highest burden of the disease, with congenital and ocular toxoplasmosis frequently associated with more severe symptoms (de-la-Torre et al., 2007;De-la-Torre et al., 2009;Torgerson and Mastroiacovo, 2013). The high rate of ocular toxoplasmosis in Colombia is likely attributable to exposure to more-virulent strains of T. gondii (Ajzenberg, 2012), even if other factors, such as inoculum exposure or the genetic background of the host, may be involved (de-la-Torre et al., 2013). Therefore, there are some indications that disease outcomes in humans can be influenced by the variability of the infecting T. gondii strain (Grigg et al., 2001;Reese et al., 2011;McLeod et al., 2012;Sánchez et al., 2014).
Experimental crosses between T. gondii strains with different virulence patterns allowed the identification of several polymorphic genes coding for secreted factors of the parasite associated with differences in virulence in mice (Saeij et al., 2006; Taylor et al., 2006; Talevich and Kannan, 2013). These key virulence factors include proteins from the rhoptry family (ROP kinases) that exert kinase or pseudokinase activities (Hunter and Sibley, 2012), contributing to disarming innate immunity and promoting survival of the parasite (Hakimi et al., 2017). ROP18 is one of the major virulence factors of T. gondii, identified as a serine/threonine kinase secreted into the parasitophorous vacuole (PV) and host cytosol (Talevich and Kannan, 2013). A recent study shows that ROP18 is a conserved virulence factor in genetically diverse strains from North and South America (Behnke et al., 2015). Furthermore, there is a report demonstrating the presence of virulent alleles coding for ROP18 in humans with ocular toxoplasmosis in Colombia, who present a more severe inflammatory reaction in the eye (Sánchez et al., 2014). Currently, there is only one study indicating the presence of specific IgM and IgG antibodies against ROP18 in sera from humans with toxoplasmosis. However, there are no reported methods that allow the direct detection of this protein in human serum. Identification of the ROP18 protein in human serum would be of great importance in order to ascertain a possible correlation between the presence of this virulence factor and the severity of the disease.
To perform the identification and quantification of protein biomarkers in serum, DNA and RNA aptamers have been used (Drolet et al., 1996; Gold et al., 2010). Aptamers are short, single-stranded oligonucleotides that bind to targets with high affinity and specificity by folding into tertiary structures (Ellington and Szostak, 1990; Tuerk and Gold, 1990). These molecules have promising roles in clinical diagnostics and as therapeutic agents (Zhang et al., 2019), showing some advantages compared to antibodies, such as shorter generation time, lower manufacturing costs, no batch-to-batch variability, higher modifiability, better thermal stability, and a wider range of potential targets (Zhou and Rossi, 2017). Due to these characteristics, aptamers can be used as molecular recognition agents alternative to antibodies in enzyme-linked immunosorbent assays (ELISA); this application has given rise to the Enzyme-Linked Aptamer Assay (ELAA), in which aptamers are the recognition agents (Toh et al., 2015). The ELAA has been used to recognize Leishmania infantum proteins, such as H2A histones (Ramos et al., 2007; Martin et al., 2013), and for detecting Mycobacterium tuberculosis culture filtrate protein and secreted antigen in sputum samples from tuberculosis patients (Rotherham et al., 2012).
Although aptamer research in the area of parasitology is still in its early stages, promising results have been obtained for the main protozoan parasites, including Trypanosoma spp., Plasmodium spp., Leishmania spp., Entamoeba histolytica, and Cryptosporidium parvum. These aptamers have been used to detect and treat the parasitic infections caused by these parasites in human beings (Ospina-Villa et al., 2018). For T. gondii, only one work with DNA aptamers has been reported, for the detection of anti-Toxoplasma IgG antibodies (Luo et al., 2013).
There are no aptamer-based methods for the detection of T. gondii proteins in serum. Therefore, we developed specific aptamers against the ROP18 protein by SELEX. The newly identified aptamers were utilized in direct and sandwich ELAA tests to detect total antigen from Toxoplasma and recombinant ROP18 protein. Moreover, human serum samples spiked with rROP18 protein were analyzed, and seropositive samples from individuals with toxoplasmosis were evaluated with this novel ROP18-ELAA platform (Figure 1). The newly developed aptamer-based sensing platform for ROP18 will enhance our understanding of the role of virulence factors in the pathogenesis of toxoplasmosis in humans.
Human Clinical Samples and Definition of Clinical Manifestations
Human serum samples for the ELAA test were obtained from 62 individuals with toxoplasmosis, 20 individuals seronegative for the infection, and 5 individuals with a different infection as a specificity control. Most of the samples (n = 67) were collected at the Center for Biomedical Research (CIBM) at the University of Quindío, and some of them, with ocular toxoplasmosis (n = 20), were recruited at the "Clínica Barraquer" in Bogotá, Colombia, after signed informed consent was obtained. We included 18 serum samples from patients with toxoplasmic lymphadenitis (IgM and IgG anti-Toxoplasma positive) with avidity <50%; 13 from individuals with chronic-asymptomatic infection without eye injury (IgM anti-Toxoplasma negative and IgG anti-Toxoplasma positive); 21 from patients with ocular toxoplasmosis diagnosed by indirect ocular fundoscopy, with positive antibody levels in serum/aqueous humor (index <2), with positive PCR for the Toxoplasma B1 sequence, and based on the criteria previously described (De La Torre and López-Castillo, 2009); and 10 serum samples with congenital toxoplasmosis (IgG anti-Toxoplasma positive) confirmed as described by the European Network on congenital toxoplasmosis (Lebech et al., 1996). In the same way, we included 20 serum samples from seronegative individuals (IgM and IgG anti-Toxoplasma negative) as the negative control of the assay. Additionally, five serum samples from IgM Dengue-positive individuals (diagnosed by a capture ELISA for Dengue, Vircell Ref. M1018, carried out at the CIBM) were included to evaluate the cross-reactivity of the previously standardized ELAA.
In vitro SELEX Procedure
The snap-cooled DNA library was brought to room temperature and incubated with 400 pmol of GST-rROP18 protein conjugated with Glutathione Sepharose beads at RT with rotation for 1 h. After incubation, the supernatant containing unbound sequences was removed and the beads were washed three times with 1 mL of washing buffer (DPBS containing 5 mM MgCl2). The ssDNA-protein-bead complexes were suspended in DNase-free water for PCR amplification of ROP18-bound sequences using the forward primer (5′-ATCCAGAGTGACGCAGCA-3′) and a reverse primer with a biotinylated 5′ end (5′-biotin-ACTAAGCCACCGTGTCCA-3′). The PCR product was then passed three times through a DNA synthesis column loaded with streptavidin Sepharose beads. The beads were washed again with 2.5 mL of PBS, and 500 µL of 200 mM NaOH was added to elute the ssDNA. The eluted ssDNA was added to a NAP5 column prewashed with 15 mL of deionized water for desalting, and 1,000 µL of DNase-free water was passed through the column to elute the ssDNA. The concentration of ssDNA was determined by UV absorbance at 260 nm, and the sample was concentrated using a DNA SpeedVac dryer. The precipitated ssDNA was resuspended in binding buffer for the subsequent round of selection. After 15 rounds of selection, the final enriched libraries were PCR-amplified and cloned into the pJET1.2/blunt cloning vector using the CloneJET PCR Cloning Kit (Thermo Fisher Scientific) according to the manufacturer's instructions. One hundred fifty colonies from the 15th round of selection were picked and analyzed by Sanger sequencing.
Aptamers and Anti-ROP18 Antibody
Two top-enriched, biotin-labeled DNA aptamers were used as recognition agents: AP001, with the sequence 5′-TCCTGGCAGCGCTTTTGCTTGTTTGCTCTCGTACCTGTCC-3′, and AP002, with the sequence 5′-CGCACCGATCCGGTGTTAATCTCGACGTCCCTTAAGTTTG-3′. In addition, a rabbit anti-ROP18 polyclonal antibody (a gift from Dr. L. D. Sibley, Washington University in St. Louis, United States of America) was used.
Toxoplasma lysate antigen from the RH strain (RH Ag) was used as a positive control because it expresses the ROP18 protein (Supplementary Figure S1). RH Ag was prepared as previously reported (Torres-Morales et al., 2014) with some modifications. Briefly, T. gondii tachyzoites of the RH strain were maintained in vitro in human foreskin fibroblasts (HFF) at 37 °C and 5% CO2. The antigen was obtained by recovering the tachyzoites from the culture and centrifuging at 3,000 rpm for 5 min in RPMI medium; the tachyzoite pellet was resuspended in saline, subjected to 5 freeze-thaw cycles, and disrupted by sonication 8 times at 20 W for 20 s. Subsequently, lysis of the parasites was verified by microscopy. Finally, 1x protease inhibitor cocktail (dilution 1:100) was added to the antigen (Ref. I3786, Sigma-Aldrich, St. Louis, USA), and aliquots were prepared and stored at −80 °C. Protein quantification was performed by the bicinchoninic acid protein assay (Ref. 23227, Thermo Scientific, Rockford, IL) using an EPOCH spectrophotometer (BioTek Instruments, Winooski, VT, USA) at 280 nm. RH Ag was evaluated at different concentrations (from 200 to 6.25 µg/mL) in order to determine the detection limit of each assay.
In addition to RH Ag, the recombinant ROP18 protein (rROP18) of the T. gondii RH strain, produced in our lab, was also used as a positive control in the last steps of the standardization. In the same way, three negative controls were included: the recombinant protein disulfide isomerase of T. gondii (PDI); Lucifensin-CPD, a recombinant protein from the fly Lucilia sericata (LucGT), both produced in our lab; and bovine serum albumin (BSA) (AMRESCO). These controls were used at a concentration of 6.25 µg/mL. Likewise, a lysate antigen from a ROP18-knockout strain of T. gondii (KOROP18, a gift from Dr. Sibley, St. Louis, USA) was used as another negative control of the assay; this antigen was prepared in the same way as RH Ag.
Enzyme-Linked Aptamer Assay (ELAA) Standardization
For standardization of the ELAA assay, two different configurations were evaluated: direct and sandwich ELAA (Toh et al., 2015), in order to determine which configuration achieved greater sensitivity (a lower detection limit) for RH Ag and rROP18 protein in human serum. Initially, all the conditions for the direct ELAA were standardized, and based on these conditions we performed the sandwich ELAA, in which the only additional step was the anti-ROP18 antibody, added at the beginning of the assay.
Aptamer Concentration and Binding Affinity of AP001 and AP002
In order to study the binding affinity of aptamers AP001 and AP002, 50 µg/mL (5 µg/well) of RH Ag expressing the ROP18 protein was plated in coating buffer and incubated in a 96-well microtiter plate overnight at 4 °C. The wells were then washed 5 times with PBS-T and blocked for 1 h with 1% BSA in PBS. Afterwards, three washes were performed, and biotin-labeled aptamers diluted in binding buffer at concentrations between 50 and 500 nM were incubated at 37 °C for 1 h. Next, 100 µL of streptavidin-HRP (1:10,000 dilution) was added to the individual wells and developed using TMB solution as above. Data were analyzed using non-linear regression with the equation y = (x × Bmax) / (x + Kd), where Bmax is the maximal binding and Kd is the ligand concentration required to reach half-maximal binding.
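As an illustration, a one-site binding fit of this kind can be performed with SciPy's curve_fit. This is a minimal sketch only: the absorbance values below are illustrative placeholders, not the measured data, so the fitted Kd will differ from the values reported later.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site_binding(x, bmax, kd):
    """One-site saturation binding: y = (x * Bmax) / (x + Kd)."""
    return (x * bmax) / (x + kd)

# Aptamer concentrations (nM) spanning the 50-500 nM range used in the assay;
# the absorbance values are illustrative placeholders, not measured data.
conc = np.array([50.0, 100.0, 200.0, 300.0, 400.0, 500.0])
od = np.array([0.20, 0.29, 0.37, 0.41, 0.43, 0.45])

(bmax, kd), _ = curve_fit(one_site_binding, conc, od, p0=[0.5, 100.0])
print(f"Bmax = {bmax:.3f}, Kd = {kd:.1f} nM")
```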
Detection Limit of rROP18 Protein by Direct ELAA
To identify the detection limit of the rROP18 protein in serum, antigen concentrations from 50 to 1.56 µg/mL were evaluated. RH Ag and KOROP18 Ag were included at a concentration of 50 µg/mL (the maximum concentration used for rROP18). All the antigens were diluted in a serum sample from a seronegative individual (IgM and IgG Toxoplasma negative). To select the serum dilution, we analyzed the absorbance results after performing the ELAA protocol with 1:2, 1:5, and 1:10 dilutions of serum from one seronegative individual (IgM and IgG Toxoplasma negative) artificially spiked with 2.5 µg of recombinant ROP18 protein. The 1:10 serum dilution was the only one that allowed differentiation between the absorbance levels of rROP18 and KOROP18 Ag (p = 0.022) and between rROP18 and serum without antigen (p = 0.023). The direct ELAA was performed with the general protocol previously standardized and with only one of the selected aptamers (AP001).
Aptamer-Antibody Assay: Sandwich ELAA
The aptamer-antibody binding assay was performed using the direct ELAA described above with minor modifications. The anti-ROP18 polyclonal antibody, diluted 1:500 in carbonate buffer, was coated onto a 96-well microtiter plate and incubated overnight at 4 °C. After washing five times with PBS-T, unspecific binding sites were saturated with 300 µL of 1% BSA diluted in PBS for 1 h at 37 °C. After 3 washes, the samples were added: rROP18 at concentrations from 50 to 1.56 µg/mL, in order to identify a new detection limit, while RH Ag and KOROP18 Ag were included again at a concentration of 50 µg/mL. All antigens were diluted in the seronegative serum sample indicated previously and then diluted 1:10 in carbonate buffer. The samples were incubated for 2 h at 37 °C with shaking. The biotinylated aptamer was then added at 300 nM in binding buffer, followed by the HRP-conjugated streptavidin (1:10,000). The detection limit obtained from this assay was compared with that obtained in the direct ELAA, with the aim of analyzing whether the detection limit of the protein was affected by the presence of the antibody.
ROP18-ELAA in Human Serum Samples
The standardized direct ELAA was applied for ROP18 detection in all the serum samples described previously (n = 87). The 20 serum samples from seronegative individuals (IgM and IgG anti-Toxoplasma negative) were used to calculate the cut-off point of the test (cut-off: average absorbance plus two standard deviations). In order to normalize the data and establish a Reactivity Index (RI) for each serum, the mean absorbance of each sample was divided by the cut-off point of the test. Serum samples with RI > 1 were considered positive (Caballero-Ortega et al., 2014). The serum samples were processed in duplicate and two tests were performed per sample. The inter- and intra-assay coefficient of variation [CV = (standard deviation of the RI / arithmetic mean of the RI) × 100] was calculated.
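The cut-off, Reactivity Index, and coefficient-of-variation arithmetic described above can be condensed into a short sketch; the optical densities below are hypothetical placeholders, not measured values.

```python
import numpy as np

# Hypothetical optical densities; replace with measured values
# (in the study, 20 seronegative sera defined the cut-off).
neg_od = np.array([0.041, 0.050, 0.038, 0.047, 0.044])   # seronegative sera
sample_od = np.array([0.112, 0.120])                      # one serum, duplicate wells

cutoff = neg_od.mean() + 2 * neg_od.std(ddof=1)           # mean + 2 SD of seronegatives
ri = sample_od.mean() / cutoff                            # Reactivity Index
cv = 100 * sample_od.std(ddof=1) / sample_od.mean()       # coefficient of variation (%)

result = "positive" if ri > 1 else "negative"
print(f"cutoff = {cutoff:.3f}, RI = {ri:.2f} ({result}), CV = {cv:.1f}%")
```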
Bioethical Aspects
This study was conducted according to the tenets of the Declaration of Helsinki, strictly following the Guide for Good Laboratory Procedures. Written informed consent, according to regulation 008430 of 1993 of the Ministry of Health in Colombia, was obtained from all people who agreed to participate in the study. The protocol was approved by the Institutional Ethical Committees (reference numbers: 5-14-1 from Universidad Tecnológica de Pereira and 030314 from Escuela Superior de Oftalmología Instituto Barraquer de América).
Statistical Analysis
Data from the ELAA standardization are expressed as means ± SEM. Differences in means were compared by the Student t-test or by a non-parametric test if values were not normally distributed. The Kruskal-Wallis test and the Dunn test were used for multiple comparisons between the standardization conditions. The Spearman correlation test was performed to evaluate associations between quantitative variables of the population and the Reactivity Index from the ELAA test. These data were analyzed using GraphPad Prism 6.0 software (San Diego, CA, USA).
Differences in proportions between groups of patients were analyzed using the Fisher exact test. In addition, the association between test positivity and different clinical characteristics related to the severity of ocular and congenital toxoplasmosis was evaluated. Epi-Info software 7.0 (Centers for Disease Control and Prevention, Atlanta, Georgia) was used to perform these analyses (available at: http://www.cdc.gov/epiinfo/). A p < 0.05 was considered statistically significant.
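Where the 2 × 2 counts are small, a Fisher exact comparison of this kind can be reproduced as in the following sketch. The table is back-derived from the positivity percentages reported later (60% of 10 congenital cases; 22.6% of 62 cases overall) and is therefore approximate, not the study's published table.

```python
from scipy.stats import fisher_exact

# Approximate 2x2 table: ELAA result by clinical group, back-derived from
# the reported percentages; treat these counts as illustrative only.
table = [[6, 4],     # congenital toxoplasmosis: ELAA-positive / ELAA-negative
         [8, 44]]    # all other clinical forms: ELAA-positive / ELAA-negative

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
# The study reported p = 0.006 for the congenital-toxoplasmosis association;
# these placeholder counts illustrate the computation only.
```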
In vitro Selection of ROP18 Aptamers by SELEX
A random ssDNA library was used to select aptamers binding to rROP18. GST-rROP18 protein conjugated with Glutathione Sepharose beads was used as the target. Following incubation, the bound aptamers were separated from unbound ones, and target-bound ssDNA was eluted and enriched at each round of selection by PCR amplification. A total of 15 rounds of repeated separation-amplification cycles were completed in order to obtain DNA aptamers with high affinity and specificity against the ROP18 protein. Cloning and sequencing of aptamer pools from the 15th round identified several aptamer candidates (Figure 1A). AP001, with the sequence 5′-TCCTGGCAGCGCTTTTGCTTGTTTGCTCTCGTACCTGTCC-3′, and AP002, with the sequence 5′-CGCACCGATCCGGTGTTAATCTCGACGTCCCTTAAGTTTG-3′, were the top enriched sequences, representing 14.42% and 13.46% of the final enriched population, respectively. These two novel ROP18 aptamers were labeled with biotin and utilized as biorecognition elements to construct an ELAA sensing platform. A biotin-streptavidin strategy was used for signal production (Figure 1B).
Direct ELAA Standardization
Direct ELAA has been reported as one of the simplest and fastest formats, in which the antigen is immobilized on the surface of the platform, followed by a blocking step, addition of biotinylated aptamers, then streptavidin conjugated with the HRP enzyme, and finally the TMB substrate (Toh et al., 2015). We first developed a direct ELAA for total antigen from Toxoplasma, using the PDI, LucGT, and BSA proteins as negative controls. The optimal conditions of this direct ELAA test were obtained by evaluating several conditions, including the time and temperature of antigen incubation, the blocking solution, the streptavidin dilution, and the aptamer concentration.
Firstly, we found that incubation of RH Ag overnight at 4 °C allowed a lower (better) detection limit to be reached in the direct ELAA (Figure 2). Antigen incubation for 1 h at 37 °C was evaluated first, showing that the AP001 and AP002 aptamers reached a significant antigen detection limit of 25 µg/mL compared to the negative controls (p < 0.05) (Figures 2A,B). Subsequently, antigen incubation overnight at 4 °C was analyzed (Figure 2). We found that the detection limit improved under the overnight 4 °C condition, reaching 12.5 µg/mL (p < 0.01) with both aptamers (Figures 2C,D). Overnight incubation at 4 °C probably allowed more antigen to adhere to the plate; many other studies have reported the same incubation conditions (Ramos et al., 2007; Rotherham et al., 2012; García-Recio et al., 2016).
FIGURE 2 | Detection limit of RH antigen according to time and temperature. The ELAA assays were performed with AP001 (A,C) and AP002 (B,D) anti-ROP18 aptamers. Incubation for 1 h at 37 °C (A,B) and overnight at 4 °C (C,D) was evaluated. Different concentrations of T. gondii total antigen of the RH strain (RH Ag) were evaluated, and three negative controls (PDI, LucGT, and BSA) were included. The data are represented as the mean of each sample evaluated in triplicate. Welch's t-test. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 vs. controls without antigen (No Ag).
Regarding the blocking solution, we found that 1% BSA was more effective than the other conditions (Figure 3). For the AP001 ELAA, all negative controls showed significantly lower absorbance levels under the 1% BSA condition (p < 0.05) (Figure 3A). In the case of AP002, lower absorbance values were also detected for the negative controls with 1% BSA, although significant differences were only found for the blank condition and the negative control with albumin (Figure 3B) (p = 0.019 and p = 0.028, respectively). Regarding the positive control (RH Ag), the 1% BSA and no-blocking conditions reached significantly higher absorbance levels than the 5% skim milk condition (p = 0.05 and p = 0.03 for AP001 and AP002, respectively), indicating higher sensitivity of the assay. However, the no-blocking condition was not selected due to the non-specificity generated for the negative controls. This could explain why BSA is more effective for biotin-streptavidin systems, as it contains only one purified protein without endogenous biotin (Alegria-Schaffer et al., 2009), thus avoiding background interference and non-specific interactions. This is probably why other studies have also reported the use of BSA as the blocking agent for ELAA tests with biotinylated aptamers (Vivekananda and Kiel, 2006; Balogh et al., 2010; Luo et al., 2013). Therefore, we continued working with 1% BSA as the blocking agent.
Regarding the streptavidin dilution, we found that the 1:10,000 dilution reached higher absorbance levels in the positive controls of the assay (Figures 3C,D). In the AP001 ELAA, only the absorbance levels for rROP18 were significantly higher at the 1:10,000 dilution (p = 0.021) (Figure 3C), whereas in the ELAA test with AP002, the absorbance was significantly higher for both RH Ag and rROP18 at the 1:10,000 dilution compared to the 1:20,000 dilution (p = 0.013 and p = 0.040, respectively). Regarding the negative controls, although differences between the evaluated dilutions were found, mainly for AP002 (Figure 3D), the absorbance levels obtained were very low for all controls at all dilutions, with mean OD values ranging between 0.005 and 0.028. Therefore, considering that the 1:10,000 dilution favored the sensitivity of the experiment, it was selected for the subsequent trials. This result agrees with other studies using biotinylated aptamers (Murphy et al., 2003; Rotherham et al., 2012; Stoltenburg et al., 2016).
FIGURE 3 | Evaluation of different blocking conditions and streptavidin dilutions for the AP001 (A,C) and AP002 (B,D) ELAA tests. Different blocking conditions were evaluated: 1% bovine serum albumin, 5% skim milk, and no blocking (A,B). Three different dilutions of streptavidin (1:10,000, 1:15,000, and 1:20,000) were evaluated (C,D). RH Ag was used as a positive control (50 µg/mL) for both experiments. rROP18 (25 µg/mL) was also used as a positive control for the streptavidin experiment. Three negative controls were included: PDI, LucGT, and BSA. The data are represented as the mean ± SEM. Kruskal-Wallis test. *p < 0.05.
It is worth noting that both aptamers showed a minimal recognition profile toward three negative control proteins (PDI, LucGT, and BSA), compared to the positive control (RH Ag).
The significantly lower absorbance levels in negative controls suggested a higher specificity of the ELAA test.
Aptamer Concentration and Binding Affinity of AP001 and AP002
A direct ELAA including all the previously standardized conditions was performed to analyze the optimal aptamer concentration. Aptamer concentrations from 50 to 500 nM were analyzed. We found that recognition of RH Ag was concentration-dependent; the absorbance levels increased as the aptamer concentration increased (r = 1; p = 0.003; Spearman correlation test) (Figures 4A,B). The same pattern has also been found in other studies (Martin et al., 2013; García-Recio et al., 2016). Based on these results, we concluded that it was possible to continue working with an intermediate aptamer concentration (300 nM) in the subsequent ELAA tests, since it allowed appropriate detection of the antigen with acceptable absorbance levels (OD: 0.3-0.5), thus allowing moderate use of the capture reagent.
FIGURE 4 | Concentration and binding affinity of AP001 (A) and AP002 (B) aptamers. RH Ag was used as a positive control, and different concentrations of each aptamer (from 50 to 500 nM) were used. BSA was used as a negative control. The dissociation constant (Kd) was calculated through non-linear regression to define the affinity of the aptamers toward the RH Ag, obtaining a Kd of 62.7 nM for AP001 and a Kd of 97.7 nM for AP002 (C).
Additionally, to determine the binding affinity, we used the absorbance and concentration data from this experiment to calculate the dissociation constant (Kd). The data were analyzed using non-linear regression, where Kd is the ligand (aptamer) concentration required to reach half of the maximum binding; a lower Kd value therefore indicates higher affinity of the aptamer toward the antigen. We found that aptamer AP001 showed a higher affinity, with a Kd value of 62.7 ± 17.27 nM, whereas aptamer AP002 showed a Kd of 97.7 ± 22.20 nM (Figure 4C). These results suggested that it was feasible to continue working with the AP001 aptamer in subsequent trials with human serum samples.
Detection Limit of rROP18 Protein in Serum Samples by Direct ELAA and Sandwich ELAA
In order to identify the detection limit of the direct ELAA with serum samples, rROP18 protein concentrations from 50 to 1.56 µg/mL were evaluated by the standard addition method. The recombinant ROP18 protein was spiked into the seronegative human serum sample and then diluted 1:10 in coating buffer. Total Ag of the RH strain was included as a positive control, and total antigen of the KOROP18 strain was used as a negative control. The results indicated that recognition of the ROP18 protein was concentration-dependent and that AP001 was able to detect rROP18 protein in serum down to the minimum concentration tested (1.56 µg/mL), showing significant differences compared to the negative control KOROP18 (serum without antigen) (p = 0.028) (Figure 5A).
In comparison, we also performed a sandwich ELAA using an anti-ROP18 polyclonal antibody as the capture agent and the aptamer AP001 as the detection agent. The data showed that the sandwich ELAA allowed detection of rROP18 protein in serum down to a concentration of 3.12 µg/mL (Figure 5B). These results indicated that the sensitivity of the sandwich ELAA was lower than that of the direct ELAA (1.56 µg/mL). We also found that the absorbance levels obtained for the sandwich ELAA were reduced, with OD values between 0.054 ± 0.002 for the minimum and 0.059 ± 0.001 for the maximum concentration of rROP18 protein (Figure 5B), while in the direct ELAA the absorbance values at the same protein concentrations were 0.111 ± 0.001 and 0.207 ± 0.001, respectively (Figure 5A). Therefore, we concluded that the direct ELAA was the more suitable configuration to apply to the serum samples of individuals with toxoplasmosis.
FIGURE 5 | Evaluation of the detection limit of rROP18 protein in serum through direct (A) and sandwich (B) ELAA. Protein concentrations from 50 to 1.56 µg/mL, diluted in serum from a T. gondii-seronegative individual, were included. Total Ag of the RH strain (RH Ag at 50 µg/mL) was included as a positive control, and total antigen of the KOROP18 strain (KOROP18 at 50 µg/mL) was included as a negative control. The data are represented as the average of each sample evaluated in quadruplicate. Welch's t-test. *p < 0.05, **p < 0.01, ***p < 0.001 vs. the negative control KOROP18 (serum without antigen).
FIGURE 6 | Reactivity Index values obtained with the ROP18-ELAA for the human serum samples. Serum samples with different clinical forms were included: toxoplasmic lymphadenitis (n = 18), chronic-asymptomatic toxoplasmosis (n = 13), ocular toxoplasmosis (n = 21), and congenital toxoplasmosis (n = 10). Additionally, samples from individuals with Dengue virus (n = 5) and individuals seronegative for Toxoplasma (n = 20) were included as negative controls. Two tests were performed for each sample. The data are represented with the median and the interquartile range. Serum samples with RI > 1 are considered positive.
ROP18-ELAA Tests in Human Serum Samples From Individuals With Toxoplasmosis
To validate the suitability of the ROP18-ELAA platform on serum samples from individuals with toxoplasmosis, the direct ELAA with the AP001 aptamer was applied. A total of 62 serum samples from individuals with different clinical manifestations of toxoplasmosis and 20 samples from seronegative individuals were included. The Reactivity Index (RI) was calculated for each sample. Because the samples were processed in duplicate and two tests were performed per sample, the respective coefficients of variation (CV) were calculated.
The comparison of RI between the group of individuals with toxoplasmic lymphadenitis and the group with chronic-asymptomatic toxoplasmosis indicated no statistically significant differences (p = 0.412). In the same way, although the percentage of positivity was higher in the group with lymphadenitis, no significant association was found between ELAA positivity and the acute or chronic stage of the infection (p = 0.058). Similarly, when comparing the RI between the groups with different clinical manifestations of toxoplasmosis, no statistically significant differences were found (p = 0.162) (Figure 6).
We found that the group with congenital toxoplasmosis had the highest RI values (Me: 1.285, range: 0.270-2.104) and the highest percentage of positivity in the test. The statistical analysis showed a significant association between this clinical form and positivity in the ELAA test (p = 0.006). Additionally, after a stratified analysis according to the clinical characteristics within this group, we found that positivity of the ELAA test was associated with higher severity of the disease; in other words, the test was significantly more often positive in children with severe clinical manifestations, such as ocular and/or neurological symptoms, than in children with asymptomatic congenital infection (p = 0.033, Table 1).
For the group with ocular toxoplasmosis, no statistical association was found between this clinical manifestation and ELAA positivity (p = 0.342). In the same way, the other variables analyzed within this group did not show significant associations with the RI values obtained in the ELAA test, except for the total number of chorioretinal scars, for which we found a negative correlation (r = −0.74, p = 0.003) with the RI values (Supplementary Table S1).
Finally, other characteristics of the total population, such as age, gender, total IgM and IgG levels, and avidity percentage, were examined in relation to the positivity of the ELAA test and the RI values; however, we did not find any significant associations for these variables (Supplementary Table S2).
DISCUSSION
Previous studies have reported that T. gondii produces virulence factors that can modulate the host immune response and could explain the severe manifestations of toxoplasmosis, especially in South America (Bradley and Sibley, 2007; Etheridge et al., 2014; Petersen et al., 2017). The ROP18 protein has been described as one of the major virulence factors of T. gondii, involved in the regulation of the host innate immune response and promoting the survival and replication of the parasite (Saeij et al., 2006; Taylor et al., 2006). IgM and IgG antibodies have been identified against ROP18 or against peptides derived from it (Sánchez et al., 2014), which indicates that the immune system recognizes the protein. However, until now, the presence of the protein in serum from individuals with toxoplasmosis had not been reported, and it is unknown whether its presence could be related to the clinical manifestation of the disease. Although antibodies to detect the ROP18 protein are available, they are difficult to obtain in developing countries, and there are no other tools to readily and routinely assess T. gondii protein in serum. Aptamers are nucleic acids capable of selective binding to targets of interest. In addition to easier and cheaper production, the use of aptamers as biorecognition tools has several advantages in terms of storage compared to antibodies. Therefore, the development and testing of aptamer-based technology for T. gondii protein opens a window for low-cost and rapid diagnostics that could in part meet the great demand for point-of-care diagnostics in developing countries.
In this study, we developed DNA aptamers against ROP18 of T. gondii by the SELEX method. Utilizing these newly enriched aptamers, we developed a novel aptamer-based biosensing platform for serum samples from people with toxoplasmosis. A direct ELAA was initially evaluated using the recombinant ROP18 protein (rROP18) and total antigen from the T. gondii RH strain. The optimal conditions, including time and temperature of incubation as well as buffer composition and aptamer concentration, were established, allowing the best detection performance. Additionally, we found that AP001 was the aptamer with the highest affinity for the antigen.
The detection limit of the direct ELAA with aptamer AP001 was evaluated with rROP18 diluted in human serum samples. Similarly, we developed a sandwich ELAA configuration in order to compare which configuration allowed greater sensitivity. The results indicated that the direct ELAA was more sensitive, allowing detection of the protein in serum down to a concentration of 1.56 µg/mL, while the sandwich configuration showed a detection limit of 3.12 µg/mL. Considering these results, we used the direct ELAA to analyze the serum samples from individuals with different clinical manifestations of toxoplasmosis. Our results indicated that the ROP18 protein was found in a significantly higher proportion of sera from the congenital toxoplasmosis group, which also had the highest RI values compared with the other groups. These data suggest that congenitally infected individuals may present a higher parasitic load and therefore possibly secrete ROP18 protein at higher levels. It could also be explained by an immune response in these individuals that is less efficient at controlling the infection than in other clinical forms. It has been described that clinical manifestations in congenital infection are related to a host genetic susceptibility that leads to insufficient control of the parasite compared to children who are also congenitally infected but without symptoms (Jamieson et al., 2010).
Additionally, we found an interesting association between the presence of ocular and/or cerebral symptoms in the group with congenital toxoplasmosis and positivity in the ELAA test; therefore, the presence of ROP18 could be suggested as a biomarker of greater severity in this clinical form. On the other hand, although no statistically significant association was found between the acute or chronic stage of toxoplasmosis and positivity in the ELAA test, we observed a tendency toward a higher percentage of positivity and elevated RI values in the group of individuals with toxoplasmic lymphadenitis. This result could suggest that individuals in the acute stage of the infection, with T. gondii tachyzoites present in blood (Halonen and Weiss, 2013), are more likely to be secreting the ROP18 protein. In support of this assumption, we found a negative correlation with the number of chorioretinal scars, which could be explained because a greater number of scars indicates a longer time since acquisition, which in turn is related to the number of recurrences in an individual. Likewise, in chronic asymptomatic individuals (negative in the ELAA assay), the absence of the ROP18 protein could be explained by the chronic stage of the infection, in which the parasite is found in a dormant stage called the bradyzoite, which is slow growing and is controlled by the host's immune system (Blader and Saeij, 2009). Importantly, we did not find positivity in the ELAA test with serum samples from individuals with dengue virus, which indicates that the test was specific and did not detect antigens from another pathogenic agent. However, it is important to evaluate more serum samples from other parasitic diseases such as malaria and leishmaniasis, as well as from other viral and bacterial infections.
A relevant aspect of the present study is the explanation of how the ROP18 protein of T. gondii reaches the serum of individuals with toxoplasmosis. Previous studies indicate that ROP18 is secreted by the rhoptry organelles inside the host cell during the process of parasite invasion and is later located in the membrane of the parasitophorous vacuole (Saeij et al., 2006; Hunter and Sibley, 2012). However, it is possible that the parasite secretes a certain amount of the ROP18 protein before entering the host cell, which could also explain the presence of IgM and IgG antibodies in mouse and human serum that specifically recognize the ROP18 protein (Grzybowski et al., 2015). Additionally, it could be assumed that the ROP18 protein is secreted once the parasite has become established within the host cell. A recent study shows that the secretion of proteins by the microneme organelles is induced by in vitro exposure to serum albumin, a host protein (Brown et al., 2016). A similar event could occur with the rhoptry organelles, which might be stimulated by a host protein to secrete some ROP kinases. Furthermore, we can propose that the ROP18 protein is released after the disruption of the host cell caused by the uncontrolled replication of the parasite. This cellular breakdown has been reported mainly for infection with type I virulent strains in mice, which are not effectively controlled by the immune system of this murine host (Melo et al., 2011).
In conclusion, two ROP18 aptamers were selected by a SELEX method and used to standardize an ELAA test. Results showed that the AP001 aptamer had a higher affinity for rROP18 and RH T. gondii antigen, and it was therefore used to detect ROP18 in serum samples from people with different clinical forms of toxoplasmosis. The ELAA test with AP001 was positive in 60% of people with congenital infection and in 22.6% of the cases with toxoplasmosis. These results suggest that ROP18-ELAA could be used as a potential test to identify the severity of congenital toxoplasmosis. One limitation of this study is that only one sample per patient was analyzed; it would be important to have a longitudinal follow-up in order to identify how ROP18 levels vary with the evolution of symptoms and the effect of treatment. This should be addressed in a future study. The present findings open new research avenues to understand the role of virulence factors of T. gondii in the pathogenesis of toxoplasmosis in humans.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board from Universidad Tecnológica de Pereira. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
FUNDING
The project was financially supported by the Natural Science Foundation of Shenzhen City (Project numbers JCYJ20170307150444573 and JCYJ20180306172131515). It was also financed by a Young Researcher grant awarded by Colciencias, Colombia. | 2019-11-14T14:08:11.887Z | 2019-11-13T00:00:00.000 | {
"year": 2019,
"sha1": "4450e074ec3f1e1dc6201c4b1564169e63ddafcc",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcimb.2019.00386/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4450e074ec3f1e1dc6201c4b1564169e63ddafcc",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
225254681 | pes2o/s2orc | v3-fos-license | Energy and Resource Utilization of Refining Industry Oil Sludge by Microwave Treatment
The oily sludge from crude oil contains hazardous BTEX (benzene, toluene, ethylbenzene, xylene) found in the bottom sediment of the crude oil tank in the petroleum refining plant. This study uses microwave treatment of the oily sludge to remove BTEX by utilizing the heat energy generated by the microwave. The results show that when the oily sludge sample was treated for 60 s under microwave power from 200 to 300 W, the electric field energy absorbed by the sample increased from 0.17 to 0.31 V/m and the temperature at the center of the sludge sample increased from 66.5 °C to 96.5 °C. In addition, when the oily sludge was treated for 900 s under a microwave power of 300 W, the removal rates were 98.5% for benzene, 62.8% for toluene, 51.6% for ethylbenzene, and 29.9% for xylene. Meanwhile, the highest recovery rates of light volatile hydrocarbons in the sludge reached 71.9% for C3, 71.3% for C4, 71.0% for C5, and 78.2% for C6.
Introduction
The refinery industry produces a large amount of sludge during crude oil exploration, production, storage, and the refining process [1][2][3]. Oil sludge usually contains a high content of petroleum hydrocarbons (PHCs) [4], heavy metals, and solid particles. The petroleum hydrocarbons in oil sludge include a variety of aromatic hydrocarbons that are carcinogens, such as BTEX (benzene, toluene, ethylbenzene, xylene) and polycyclic aromatic hydrocarbons (PAHs) [5,6]. Therefore, petroleum sludge is considered hazardous in many countries, and improper treatment may pose a serious threat to the environment [2,7,8]. However, due to its high petroleum hydrocarbon content, the sludge is also considered a potentially recoverable resource.
The extensive development of oily sludge treatment now focuses on environmental impact and renewable resource technologies, including combustion, pyrolysis, chemical treatment, froth flotation, and microwave irradiation [1,6,9]. Pyrolysis adapts well to the properties and fluidity of crude sludge, recycling valuable chemical raw materials and coke, and the process produces almost no secondary pollution [7,9,10]. The disadvantage is that a large amount of external energy is required for the endothermic reaction to occur, resulting in a high cost; the thermochemical conversion process also involves an extremely complex reaction path [7,9]. Combustion produces energy to drive steam boilers to generate electricity, or to provide valuable thermal energy for pyrolysis or refinery endothermic processes. However, the resulting pollutant emissions may pose a great threat to the environment [9].
Chemical treatment processes require a large quantity of organic solvents, resulting in a large amount of secondary waste that requires further treatment [11,12]. The application of microwave heating to oil sludge treatment has been reported in recent years, and it offers many advantages. A dielectric material can absorb microwaves and raise its internal energy [13], resulting in a faster heating rate [14], shorter reaction time, and higher efficiency of electric energy conversion (80-85%) [15]. Akbari et al. (2016) used microwaves to treat crude oil emulsions: when emulsion samples (40-60% W/O) were treated with microwave irradiation at 360 W for 3 min, demulsification reached 100% [16].
The electric field interacts with polar materials, whereas the magnetic field interacts with charged materials. Through the interaction between the medium molecules and the MW electromagnetic field, the electromagnetic energy can be converted directly into heat energy [17][18][19]. Microwave heating is the result of the absorption of microwave energy by a dielectric medium exposed to the electromagnetic field [20]. The rate of medium temperature increase caused by absorption of microwave energy is given by Equation (1) [18]:

P_abs = 2πf ε₀ ε_eff E² = ρ Cp (ΔT/Δt)  (1)

where P_abs is the power converted per unit volume (W/m³), f is the frequency of the radiation in Hz, ε₀ is the permittivity of free space (8.854 × 10⁻¹² F/m), ε_eff is the complex component of the relative permittivity of the dielectric (also known as the effective relative dielectric loss factor), Cp is the specific heat of the material in J/kg·°C, ρ is the density of the material in kg/m³, E is the electric field in V/m, Δt is the time duration in seconds, and ΔT is the temperature rise in the material in °C. The density and heat capacity of crude oil are derived from Equations (2)-(4) [21].
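As a quick numerical illustration of Equation (1), the Python sketch below estimates the temperature rise of a dielectric sample from an assumed field strength and assumed material properties; all numerical values are placeholders chosen for illustration rather than measured properties of the sludge.

import math

def temperature_rise(f_hz, eps_eff, e_field, rho, cp, dt):
    """Temperature rise (deg C) from Eq. (1):
    P_abs = 2*pi*f*eps0*eps_eff*E^2 and dT = P_abs * dt / (rho * Cp)."""
    eps0 = 8.854e-12                                            # permittivity of free space, F/m
    p_abs = 2 * math.pi * f_hz * eps0 * eps_eff * e_field**2    # W/m^3
    return p_abs * dt / (rho * cp)

# Assumed illustrative values: 2.45 GHz, loss factor 10, 1 kV/m field,
# density 950 kg/m^3, specific heat 2000 J/(kg C), 60 s exposure.
print(temperature_rise(2.45e9, 10.0, 1.0e3, 950.0, 2000.0, 60.0))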
In this study, through the coupling of microwaves with the high-dielectric-constant substances in the oil sludge, the medium (moisture) in the sludge converts the absorbed microwave energy into heat energy sufficient for the removal of BTEX substances from the sludge. In this way, the value of oil resource recovery after microwave treatment is promoted. Furthermore, by combining the process with a membrane recovery technique, a high-value tail gas can be produced during sludge treatment.
Experimental Equipment
The microwave oven (SAMPO Co.) was equipped with a proportional-integral-derivative (PID) controller to control the output power. The microwave frequency was 2.45 GHz and the maximum output power was 750 W. A 50 mL quartz reactor with 20 holes at the bottom was used. The sludge samples were taken from the crude oil storage tank in the refinery. A sludge sample (10 g, semi-liquid, water content 18 wt%, heat value 10,968 cal/g) was placed in the reactor, which was then put into the microwave oven. Carrier gas (N₂) was introduced from the bottom of the reactor, and the tail gas evaporated from the top of the reactor before entering a membrane separator (Figure 1).
The hollow fiber tubing inside the membrane separator (Dalian Eurofilm Industrial Ltd.) was made of a polyimide material coated with silicone rubber. The outside diameter of the hollow fiber tubing was between 300 and 450 μm, with an inner diameter of 150 to 200 μm. A 1 μL sample of the tail gas was taken every 10 s, and its composition was analyzed quantitatively and qualitatively by gas chromatography-mass spectrometry (GC-MS). Gas samples were also collected at the inlet and outlet of the membrane separator in a 1 L cylinder and then analyzed by GC-MS.
Experimental Methods
A layer of insulation cotton (about 2 mm thick) was laid on the bottom of the reaction apparatus to avoid blockage of the air venting from the oil sludge. Sludge (10 g) was evenly spread on the insulating cotton, and the reactor was placed in the microwave oven. The experimental parameters were as follows: (1) the microwave output power (in W) was controlled at 200, 250, or 300; (2) the microwave irradiation time was set to 0, 10, 20, 30, or 60 s; (3) the absorbed energy and electric field intensity were evaluated; (4) under a microwave power of 300 W, each cycle consisted of 10 s of irradiation followed by a 10 s intermission interval. Up to 90 cycles, for a total of 900 s of multi-interval irradiation, were tested in the experiment.
The gas generated from the microwave treatment was introduced into a mixer before it entered the membrane device where the gas pressure stabilized. The inlet gas flow rate at the membrane device was adjusted by the gas permeation pressure at the outlet.
Analyses
An HP 6890 gas chromatograph (GC) equipped with a capillary column (HP-5MS) and coupled with an HP 5973 mass selective detector (MSD) was used for identifying and quantifying the intermediates and final products in the tail gas. The carrier gas (He) flow rate was maintained at a constant 10 mL/min. The oven temperature was programmed to increase from 100 °C to 280 °C at a rate of 20 °C/min and was then held at 280 °C for 10 min. A K-type thermocouple, with an error of 0.3% of full scale, was used for temperature measurement.
The analysis methods used were the National Institute of Environmental Analysis (NIEA) W785.54B and the United States Environmental Protection Agency (USEPA) method 8260B, which can detect a total of 60 volatile organic compounds. The heat value of the petro-sludge was determined according to standard methods (e.g., NIEA R214.01C for heat value). The heavy metal content of the petroleum sludge was analyzed using aqua regia (nitric acid:hydrochloric acid = 1:3) as the acid solution for heavy metal leaching, followed by inductively coupled plasma optical emission spectrometry (ICP-OES) to conduct the total heavy metal analysis after microwave digestion, using the NIEA R317.11C method. The analysis was also based on the use of nitric acid as the heavy metal extraction solution, which was placed in a bottle extraction vessel and rotated at 30 ± 2 rpm for 18 ± 2 h of extraction. The extracted acid solution was subjected to heavy metal dissolution analysis with the toxicity characteristic leaching procedure (TCLP), using NIEA R201.15C.
Effect of Energy Absorption and Electric Field Strength at Different Microwave Power
The characteristics of microwave radiation heating rely on dipole rotation and ionic conduction. The dipole rotation changes with the direction of the alternating microwave radiation field [22]. When a solvent is used as a medium in the microwave radiation field, the solvent changes from an unpolarized to a polarized state and electric energy is stored. The degree of polarization depends on the composition and morphology of the solvent and on the frequency of the applied electric field. Figure 2 shows the sludge temperature when the oil sludge is irradiated inside the microwave oven at different microwave powers for varied durations. The oil sludge temperatures of 66.5 °C, 79.6 °C, and 96.5 °C correspond to microwave powers of 200, 250, and 300 W with a treatment time of 60 s.
The dielectric constant and the effective dielectric loss factor of the oil vary with temperature T according to Equations (5) and (6):

ε′ = 2.24 − 0.00072 × T  (5)

ε_eff = (0.527T + 4.82) × 10  (6)

The density (ρ₀) and thermal capacity (Cp) of oil sludge change when exposed to different microwave powers and irradiation times. These values can be calculated by Formulas (2)-(4). Finally, the calculated results from Formulas (2)-(6) are used in Formula (1) to determine the average absorbed energy per unit volume of the sludge. Using different microwave powers (200, 250, and 300 W) with the same microwave irradiation time of 60 s, the electric fields produced in the oil sludge under the microwave electromagnetic field were 12.1, 13.2, and 13.8 V/m, respectively, increasing with higher microwave power and oil sludge temperature. According to the results from Formulas (5) and (6), the dielectric loss increased as the oil sludge temperature rose. The dielectric loss indicates the ability of the material to absorb microwave energy and convert it into heat. Therefore, the oil sludge absorbs more MW energy in the initial stage, making the temperature rise faster. However, with the increase of oil sludge temperature, the impact on the dielectric loss also increases and the intensity of the microwave energy absorption by the oil sludge gradually declines [23].
In addition, the temperature rise causes the volume of the sludge to expand and its density to decrease, which further affects the heat capacity of the sludge material under continued microwave irradiation. Therefore, the sludge mass density (ρ₀) and the heat capacity (Cp) change according to the microwave power and irradiation time. The calculation results of Formulas (2)-(4) are used in Formula (1) to calculate the average microwave electric field energy absorbed by the sludge. The results in Figure 3 show that whether the microwave power was 200, 250, or 300 W, the average microwave electric field energy absorbed by the sludge decreased as the temperature of the sludge increased.
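The same relation can be inverted to back out the electric field from an observed temperature rise, which is essentially how the 12.1-13.8 V/m values above are obtained. The sketch below shows this inversion; the loss factor, density, and heat capacity passed in are assumed placeholders, since the study's temperature-dependent values from Formulas (2)-(6) are not reproduced here.

import math

def field_from_heating(delta_t, dt, f_hz, eps_eff, rho, cp):
    """Invert Eq. (1): given a temperature rise delta_t (deg C) over a
    time dt (s), return the electric field E (V/m) inside the sample."""
    eps0 = 8.854e-12
    p_abs = rho * cp * delta_t / dt                   # volumetric power, W/m^3
    return math.sqrt(p_abs / (2 * math.pi * f_hz * eps0 * eps_eff))

# Assumed illustrative inputs: heating from 25 C to 96.5 C in 60 s at
# 2.45 GHz; loss factor, density, and heat capacity are placeholders.
print(field_from_heating(96.5 - 25.0, 60.0, 2.45e9, 1.0e4, 950.0, 2000.0))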
Recovering of the Tail Gas by Membrane Separator Following the Microwave Process
During the microwave treatment process, the temperature of the oil sludge increased, enabling volatile organic compounds (VOCs) to be released. These VOCs were separated and recovered via a membrane separator device.
Gases with high solubility and small molecules pass through the membrane more quickly than less soluble gases with larger molecules. In addition, different membrane materials have different separation capacities. The driving force needed to separate gases is provided by a partial pressure gradient caused by the pressure difference between the residual side and the permeate side: the greater the difference, the more gas permeates through the membrane. While maintaining the permeation pressure at 2.0 kg/cm² (g), the permeate flow rate was tested at 0.7, 0.9, 1.1, 1.3, and 1.5 L/min. As shown in Figure 4, when the permeate flow rate increased from 0.7 to 1.5 L/min, the VOC recovery rates increased dramatically from 23.3% to 71.9% for C3, 21.9% to 71.3% for C4, 17.6% to 71.0% for C5, and 14.3% to 78.2% for C6.
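The partial-pressure driving force described above is commonly modeled with a solution-diffusion flux relation, J_i = Q_i (p_feed,i − p_perm,i), where Q_i is the permeance of component i. The sketch below evaluates this relation for the four hydrocarbon fractions; the permeance and pressure values are invented placeholders, not parameters of the membrane used in this study.

# Partial-pressure-driven permeation: J_i = Q_i * (p_feed_i - p_perm_i).
# Permeances and partial pressures below are placeholders for illustration.
permeance = {"C3": 4.0e-3, "C4": 3.0e-3, "C5": 2.2e-3, "C6": 1.8e-3}  # mol/(m^2 s bar), assumed
p_feed = {"C3": 0.40, "C4": 0.30, "C5": 0.20, "C6": 0.10}             # feed-side partial pressure, bar
p_perm = 0.02                                                          # permeate-side partial pressure, bar

for gas, q in permeance.items():
    flux = q * max(p_feed[gas] - p_perm, 0.0)   # flux vanishes when the gradient does
    print(f"{gas}: J = {flux:.2e} mol/(m^2 s)")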
Effect of Microwave Radiation on BTEX Removal from Oil Sludge
Under microwave irradiation, the crude oil sludge absorbed MW energy, raising its temperature and decomposing heavy hydrocarbons into light hydrocarbons (such as alkanes and aromatic hydrocarbons). Cracking reactions then occurred as the alkyl side chains of alkanes and aromatic hydrocarbons were broken in the heated zones generated by the microwave energy, and the resulting compounds were physically volatilized. Among them, polar substances (heavy-chain molecules) absorbed more microwave energy, which in turn destroyed the low-energy (C-H) bonds. The degree of cracking depended on the heat generated by the microwave process.
The results show that when the oily sludge sample received a total irradiation of 60 s in 6 cycles (10 s of irradiation with a 10 s interval per cycle) under microwave powers ranging from 200 to 300 W, the BTEX removal rates increased from 15.6% to 38.3% for benzene, 16.8% to 29.8% for toluene, 24.1% to 29.4% for ethylbenzene, and 15.8% to 24.2% for xylene. When the total irradiation was increased to 900 s in 90 cycles (Table 1), the removal rates for benzene, toluene, ethylbenzene, and xylene reached 98.5%, 62.8%, 51.6%, and 29.9%, respectively, and the residual benzene concentration was 0.4 mg/L. In addition, the experimental results show that the viscosity of the oil sludge can be reduced by microwave irradiation. The oil sludge was actually a water-in-oil emulsion (W/O), formed by vigorous mixing under extremely high external pressure. The microwave increased the temperature of the emulsion, resulting in a decrease of the alkane content and a corresponding decrease in viscosity [24].
Thermal Value Analysis for Oil Sludge
The demulsification of the oil sludge was achieved after microwave irradiation, which can separate the water phase and oil phase in the sludge. The longer the microwave irradiation time, the deeper the microwave penetration depth. When the microwave power was set at 300 W and the total irradiation was 300, 600, or 900 s, with 10 s of irradiation and a 10 s interval per cycle, the calorific value (higher heating value, dry) of the dried sludge was determined under each condition. The calorific values of the sludge were 10,012, 10,284, and 10,423 cal/g for total irradiation times of 300, 600, and 900 s, respectively.
In addition, the water content in the sludge affects the power required for subsequent microwave heating. When the water content is higher, using higher power causes the temperature of the sludge to rise too quickly and leads to liquefaction. The boiling of the water phase leads to the separation of the oil phase and the solid phase. Therefore, as the total irradiation time increased, the calorific value increased owing to water loss in the oil sludge. Meanwhile, the lighter hydrocarbons in the oil sludge decreased as the sludge temperature increased.
The utilization of microwave energy could efficiently reduce the viscosity of the oily sludge. The heavy hydrocarbons contained in the oil sludge absorb microwave energy strongly; hence, the high temperature generated by the microwave power could effectively reduce the content of high-carbon-number hydrocarbons [25]. In effect, the heavy hydrocarbon molecules were destroyed, thus reducing the viscosity of the oil sludge.
After high-temperature (800 °C) combustion of the microwave-treated sludge, the residual ash was analyzed for its heavy metal content [5]. In light of the above results, after the microwave treatment process, the benzene content of the oil sludge was reduced to 0.4 mg/L. This is lower than the regulatory standard of 0.5 mg/L set by the Environmental Protection Administration (EPA) in Taiwan. In addition, the concentrations of heavy metals were below the regulated standards, and the heat value reached 10,423 cal/g. In the future, it would be beneficial to mix the microwave-treated oil sludge with rice husks, wood chips, or other crop wastes and convert it into refuse-derived fuel, so as to enhance the value of oil sludge resource recovery.
Conclusions
In this study, the microwave irradiation technique was able to remove BTEX from the sediment of crude oil sludge. The results showed that the benzene removal rate was 98.5% and the residual concentration was below the regulatory standard of 0.5 mg/L when 10 g of oil sludge was treated with 900 s (90 cycles) of interval irradiation under 300 W microwave power. Meanwhile, the removal rates of toluene, ethylbenzene, and xylene were 62.8%, 51.6%, and 29.9%, respectively.
With only 60 s of microwave irradiation at 200, 250, and 300 W, the oil sludge temperature increased from room temperature to 66.5 °C, 79.6 °C, and 96.5 °C, respectively, and the electric fields in the sludge were 12.1, 13.2, and 13.8 V/m, respectively. However, with the increase of oil sludge temperature, the dielectric loss of the oil sludge decreased gradually, and the average microwave electric field energy absorbed by the oil sludge increased from 0.17 to 0.24 and 0.31 W/cm. In addition, the VOCs in the oil sludge evaporated as the temperature increased during the microwave process and were separated and recovered by the membrane unit, with the highest recovery rates reaching 71.9% for C3, 71.3% for C4, 71.0% for C5, and 78.2% for C6. | 2020-08-27T09:05:23.923Z | 2020-08-24T00:00:00.000 | {
"year": 2020,
"sha1": "46874bdfc2f30406011405e296578cd8e3758ab0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/12/17/6862/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ab269b40121dc65eb85b6fbfd6503c109f9c1240",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
16194951 | pes2o/s2orc | v3-fos-license | Methicillin-resistant Staphylococcus aureus risk profiling: who are we missing?
Background Targeted screening of patients at high risk for methicillin-resistant Staphylococcus aureus (MRSA) carriage is an important component of MRSA control programs, which rely on prediction tools to identify those high-risk patients. Most previous risk studies reported a substantial rate of patients who were eligible for screening but failed to be enrolled, and the characteristics of these missed patients are seldom described. We aimed to determine the rate and characteristics of patients who were missed by a MRSA screening programme at our institution, to see how the failure to include these patients might impact the accuracy of clinical prediction tools. Findings From March to June 2010, all patients admitted to 13 internal medicine wards at the University of Geneva Hospital (HUG) were prospectively screened for MRSA carriage. Of the 1968 patients admitted to these wards, 267 (13.6%) failed to undergo appropriate MRSA screening. Forty-one (2.4%) screened patients were MRSA carriers at admission. On multivariate regression, patients who were missed by screening were more likely to be aged <50 years (OR 2.4 [1.4-3.9]), transferred to internal medicine from another ward in the hospital (OR 2.8 [1.1-7.1]), and to have a history of malignancy (OR 3.2 [2.1-5.1]). There was no significant difference in the rate of previous MRSA carriage between screened and unscreened patients. Conclusions Our findings highlight the potential bias that "missed" patients may introduce into MRSA risk scores. Reporting on the proportions and characteristics of missed patients is essential for accurate interpretation of MRSA prediction tools.
Introduction
Prevention and control of MRSA cross-infection is among the most important challenges of infection control. Surveillance of all patients for MRSA carriage on admission to hospital allows those patients colonised with MRSA to be isolated and contact precautions undertaken, with the aim of minimising spread to other patients. As patients with MRSA evident on routine clinical specimens represent a small fraction of the burden of MRSA, surveillance is needed to identify the reservoir of colonised but not infected patients [1,2]. However, universal surveillance utilises significant healthcare resources, and its effectiveness is debatable [3][4][5]. Despite this, screening is increasingly utilised in hospital MRSA control programs, and is still legislated in the United Kingdom and some states of the USA [6,7]. To mitigate costs without sacrificing the effectiveness of surveillance, many MRSA screening programs rely on clinical prediction tools to target patients at high risk of MRSA carriage [5]. Several epidemiological studies form the basis of these tools, in which the major risk factors for MRSA carriage have been identified, including a history of MRSA colonization, admission to intensive care, hospitalization in the previous 12 months, extensive contact with health care, previous receipt of antibiotic therapy, and skin or soft tissue infection at admission [8][9][10][11][12]. However, these studies report that 5-83% of patients who were eligible for study were not screened, and the characteristics of these missed patients are seldom described [11]. We examined the characteristics of patients who were missed during a MRSA surveillance study at our institution to ascertain whether their exclusion might introduce bias and affect the accuracy of clinical prediction tools and risk profiling. Specifically, we hypothesised that an important proportion of patients would be missed by our MRSA screening programme, and that these patients would differ from those who were screened.
Setting and methods
The University of Geneva Hospitals (HUG) are a 2200-bed tertiary hospital network providing in- and outpatient care to the Canton of Geneva. From March to June 2010, a universal MRSA surveillance program was undertaken to prospectively screen all patients consecutively admitted to 13 internal medicine wards. The primary aim of this study was to determine the rate of MRSA carriage amongst patients admitted to internal medicine. Secondary aims were to formulate a clinical prediction tool that would accurately identify patients at high risk of MRSA carriage on admission to internal medicine, and to determine the effectiveness of our programme in capturing all patients for screening. Over the study period, all patient admissions to internal medicine were recorded and basic demographic and clinical data were collected. Further clinical data were obtained by retrospectively accessing electronic medical records. All patients >18 years of age were eligible for screening and were screened for MRSA by pooled nose and groin swabs. Trained ward nurses conducted the screening seven days a week. Pooled samples were streaked onto MRSAid agar (bioMérieux, Lyon, France) and then inoculated into a colistin-salt (CS) broth. When no MRSA was detected on the chromogenic agar at day 1, a second MRSAid plate was inoculated using the overnight enrichment in the CS broth. Suspect colonies were confirmed by a duplex polymerase chain reaction to assess the presence of the mecA gene [13].
The proportion of patients who were eligible for, but did not undergo, MRSA screening was determined. Wilcoxon rank-sum tests and chi-squared tests were used to assess differences between the screened and unscreened groups. Factors potentially associated with failure to screen were first evaluated using univariate logistic regression, and variables with a P value <0.2 were retained. Multivariate models were then developed, and variables were eliminated in a stepwise fashion using likelihood ratio tests to compare each model to the previous one (STATA 11.2; StataCorp, College Station, Texas, USA).
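The stepwise model-building procedure can be illustrated with the Python sketch below, which fits nested logistic regressions with statsmodels and compares them with a likelihood-ratio test. The predictor names and the synthetic data are placeholders (the original analysis was performed in Stata), so the sketch shows only the mechanics of the comparison.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Synthetic cohort: three hypothetical binary predictors of being
# missed by screening, with an arbitrary underlying model.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age_lt_50": rng.integers(0, 2, n),
    "intra_hosp_transfer": rng.integers(0, 2, n),
    "malignancy": rng.integers(0, 2, n),
})
logit = -2.2 + 0.9 * df["age_lt_50"] + 1.0 * df["intra_hosp_transfer"] + 1.2 * df["malignancy"]
df["missed"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

full = sm.Logit(df["missed"], sm.add_constant(df[["age_lt_50", "intra_hosp_transfer", "malignancy"]])).fit(disp=0)
reduced = sm.Logit(df["missed"], sm.add_constant(df[["age_lt_50", "intra_hosp_transfer"]])).fit(disp=0)

# Likelihood-ratio test: does dropping 'malignancy' significantly worsen the fit?
lr = 2 * (full.llf - reduced.llf)
p = stats.chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p:.4f}")
print(np.exp(full.params))   # odds ratios for the retained variables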
Results
Of the 1968 patients admitted to internal medicine, 1740 (88.4%) underwent admission screening within 48 hours of admission; 228 (11.6%) admitted patients were not screened, and 39 (2.0%) patients underwent screening but not within 48 hours of admission. Therefore, 267 patients (13.6%) failed to undergo appropriate MRSA screening. Forty-one (2.4%) screened patients were MRSA carriers at admission. Patients who were missed during MRSA screening were younger (57.1 years vs 61.6 years; P < 0.0001), and a greater percentage had been transferred to internal medicine from another hospital ward (7.0% vs 2.7%; P < 0.0001). The proportions of patients identified as previous MRSA carriers were not significantly different between the screened and unscreened groups (9.6% vs 13.2%, respectively; P = 0.308). There was no significant difference in the proportion of patients missed by screening on weekends as compared to weekdays. The results of uni- and multivariate regression analyses of factors potentially associated with being missed for MRSA screening are shown in Table 1. On multivariate regression, patients who were missed by screening were more likely to be aged <50 years, to have been transferred to internal medicine from another ward within the hospital, and to have a history of malignancy.
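As a worked example of the group comparisons reported above, the sketch below runs a chi-squared test on a 2 × 2 table of previous MRSA carriage by screening status; the counts are hypothetical placeholders rather than reconstructed study data, so the output is illustrative only.

from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = screened / missed patients,
# columns = previous MRSA carriers / non-carriers (placeholder counts).
table = [[160, 1541],
         [30, 237]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")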
Discussion
Screening patients for MRSA carriage on admission to hospital is an increasingly important component of hospital MRSA control programs. Many programs rely on prediction tools so that patients at high risk of MRSA carriage can be targeted for selective screening rather than using universal screening, which is costly and resource intensive. Ideally, prediction tools are formulated using local epidemiological data from (universal) surveillance studies. However, many of these studies report a substantial rate of patients who are eligible for screening but fail to be enrolled in the surveillance programme. The characteristics of these patients are seldom described.
In this study, 13.6% of patients failed to have admission MRSA screening swabs performed. This rate of "missed" screening opportunities is comparable to that found in other MRSA risk profiling studies [8][9][10][11][14]. Patients who were not screened differed from those who were in several ways. Firstly, younger patients (<50 years) were more likely to be missed during MRSA screening. A possible explanation for this is that nurses perceived younger patients to be at low risk for MRSA carriage and were thus less inclined to pursue screening. Although older age is frequently identified as a risk factor for MRSA carriage [8,10,12], it is possible that the tendency to miss younger patients from screening may contribute to this finding and inflate effect estimates. Transfer to internal medicine from another hospital department (intra-hospital transfer) was also a risk factor for being missed during screening. Intra-hospital transfer has been previously identified as a risk factor for MRSA admission carriage [10]; missing this group of patients could result in an underestimation of the true MRSA carriage rate and a failure to recognise intra-hospital transfer as an important risk factor for MRSA carriage. Patients with malignancy were more likely to be missed during screening in our study. This was due to logistic difficulties (e.g. frequent readmissions for chemotherapy; ultra-short hospitalizations) within our hospital oncology ward that impeded their regular participation in screening, and was therefore a problem specific to our institution.
To our knowledge, the study by Furano et al. is the only one to report detailed characteristics of patients missed by MRSA screening [11]. In that study, 83.7% of eligible patients were not enrolled in screening. Unenrolled patients were older, less likely to have had a hospital admission in the previous year, and had a higher in-hospital mortality than those patients who were enrolled [11]. The present study helps to further elucidate the importance and magnitude of misclassification bias in MRSA risk profiling studies. Our study has several limitations. Firstly, the effectiveness of hospital surveillance programmes in enrolling patients on admission is likely to be heavily influenced by institutional and local factors; therefore, the generalizability of our findings may be limited. Secondly, some of our data were collected retrospectively from medical records and are therefore subject to the inaccuracies inherent to data collected in this way.
Nevertheless, we believe that our findings highlight some of the potential misclassification biases that may occur in MRSA risk profiling studies due to patients missed from screening. This could have important implications for the accuracy of MRSA risk scores developed to target MRSA screening. Clear reporting on patient recruitment and the proportions and characteristics of those patients missed is essential for accurate interpretation of clinical prediction tools identifying patients at high risk for carriage of antibiotic-resistant bacteria. | 2017-06-20T09:34:57.013Z | 2013-05-30T00:00:00.000 | {
"year": 2013,
"sha1": "5592f18e179cd995dacce06eb494c3a0bfaa6b5e",
"oa_license": "CCBY",
"oa_url": "https://aricjournal.biomedcentral.com/track/pdf/10.1186/2047-2994-2-17",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "73a8bb5b1d88cf7cde020d0ebe7fd1df3bb5d457",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17398576 | pes2o/s2orc | v3-fos-license | High-Resolution spectroscopy of the low-mass X-ray binary EXO 0748-67
We present initial results from observations of the low-mass X-ray binary EXO 0748-67 with the Reflection Grating Spectrometer on board the XMM-Newton Observatory. The spectra exhibit discrete structure due to absorption and emission from ionized neon, oxygen, and nitrogen. We use the quantitative constraints imposed by the spectral features to develop an empirical model of the circumsource material. This consists of a thickened accretion disk with emission and absorption in the plasma orbiting high above the binary plane. This model presents challenges to current theories of accretion in X-ray binary systems.
Introduction
X-ray spectroscopic observations should provide a useful probe of the accretion processes in low-mass X-ray binaries (LMXB's). The continuum emission from the bright central source is reprocessed as the photons are absorbed and re-emitted in the surrounding material. The details of the resulting X-ray spectra are extremely sensitive to the physical conditions in the plasma and therefore provide an excellent way to constrain the circumstellar environment and the mass flow that fuels the X-ray emission. Previous observatories have not had the spectral resolution necessary to resolve the discrete structure in the X-ray spectrum. The Reflection Grating Spectrometer (RGS) on the recently launched XMM-Newton Observatory provides both a high resolving power and a large effective area, making it uniquely sensitive to the diagnostic spectral features, particularly in the soft X-ray band, which contains transitions from highly ionized charge states of the most abundant elements.
In this letter we present the results of RGS observations of EXO 0748−67, a highly variable LMXB that was first discovered with EXOSAT. Analysis of the EXOSAT light curves, which show deep eclipses with a 3.82 hour orbital period and a complex dipping structure, led to a derived inclination angle of 75°-82°. The detection of type I X-ray bursts confirmed that the compact object is a neutron star. Observations with both ASCA (Thomas et al. 1997) and ROSAT (Schulz 1999) revealed structure in the soft X-ray spectrum, but the limited spectral resolution of these instruments made it impossible to distinguish whether this was due to absorption or emission features. In the RGS observations presented below, we find a spectrum that is rich in discrete structure, including absorption and emission from ionized oxygen, nitrogen, and neon. We use the available spectral diagnostics to construct a quantitative empirical model of the circumsource flow.
Data Reduction
The RGS covers the wavelength range of 5 to 35 Å with a resolution of 0.05 Å (roughly constant across the band) and a peak effective area of ∼140 cm² at 15 Å. A complete description of the instrument can be found in den Herder et al. (2001). EXO 0748−67 was observed repeatedly during both the commissioning and calibration phases of the mission. We have chosen the two longest exposures for this analysis; the first was 49.3 ks on 2000 March 28, and the second was 44.8 ks on 2000 April 21. The raw data were processed with the development version of the XMM-Newton Science Analysis Software (SAS). The spectra were extracted by first applying a 30″ wide spatial filter to the CCD image. The surviving events were then plotted in dispersion channel vs. CCD pulse-height space, and additional filters were applied to collect events from the different spectral orders. Background spectra were extracted by applying the same spectral-order filters to events from a spatial region offset from the source location on the CCD image.
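The selection steps can be pictured with the toy numpy sketch below, which applies a spatial cut and then a dispersion-channel versus pulse-height cut to a synthetic event list; the column names, band shape, and cut boundaries are invented for illustration and do not correspond to the actual SAS selection expressions.

import numpy as np

# Toy event list: cross-dispersion position (arcsec), dispersion channel,
# and CCD pulse height for each photon event (all synthetic).
rng = np.random.default_rng(1)
xdsp = rng.normal(0.0, 20.0, 10_000)
channel = rng.integers(0, 3400, 10_000)
pha = rng.normal(800.0, 150.0, 10_000)

# 1) Spatial filter: keep events inside a 30 arcsec wide strip.
spatial = np.abs(xdsp) < 15.0

# 2) Order filter: first-order events lie in a pulse-height band that
#    varies with dispersion channel (a crude linear proxy is assumed).
expected_pha = 1200.0 - 0.2 * channel         # assumed band centre
order1 = np.abs(pha - expected_pha) < 200.0   # assumed band half-width

kept = spatial & order1
print(f"kept {kept.sum()} of {kept.size} events")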
We assigned nominal wavelengths to each dispersion channel based on the geometry of the instrument and the pointing angles of the spacecraft. The pointings were found to be offset by 46″ for the first observation and 60″ for the second, using images from the European Photon Imaging Camera (EPIC). We expect the wavelengths to be accurate to ∼0.010 Å. The development version of the SAS did not yet have the capability to correct the events for aspect variations. However, after excluding the initial part of each observation (8.6 ks of the first observation and 1.7 ks of the second), we find that the pointings are stable to less than ∼2″. Since this is much smaller than the resolution of the telescope, further aspect correction is unnecessary.
We have determined the effective area for each exposure by applying the same extraction regions that were used with the data to the full response matrix, which includes all information on the efficiency of the instruments. Based on our ground calibration we expect the uncertainty to be less than ∼ 10% for wavelengths longer than 9Å and at most ∼ 20% at the shortest wavelengths. We have calculated the flux for each observation using these effective area curves and the standard SAS exposure maps, which give exposure times for each wavelength bin. We combined the first order spectra from both instruments for the two observations in order to maximize the statistical quality of our measurement.
Light Curve
The observed EXO 0748−67 count rate varies by up to a factor of ten on timescales as short as a few hundred seconds. This is illustrated in Fig. 1, which shows a portion of the RGS light curve. There are periods of stable low-level emission where the count rate is less than ∼0.5 ct s⁻¹, periods where the count rate varies rapidly between ∼0.5 and 3.5 ct s⁻¹, and several type I X-ray bursts with peak intensities from 2.5 to 5.2 ct s⁻¹. We see no correlation between the variability and the orbital phase; similar to the soft X-ray light curve from ASCA (Church et al. 1998), the RGS light curve does not show the quiescent level and dipping structure that is characteristic of the light curves from higher-energy observations. Most importantly, we see no eclipses during the times predicted for the hard X-ray eclipses.
Discrete Spectral Structure
As shown in Fig. 2, the EXO 0748−67 spectra show significant discrete structure both in absorption and emission. We see bright emission lines from O viii Lyα and the O vii He-like complex. The Ne x Lyα, Ne ix He-like complex and the N vii Lyα emission lines are weaker, but clearly visible, particularly during the periods of low emission when the equivalent widths of the lines are highest. We see the photoelectric absorption edges of both O viii and O vii, particularly during the periods of rapid variation when the contiuum intensity is high. We detect the narrow radiative recombination continua of O viii and O vii at their respective absorption edges. This means that the line excitation mechanism in EXO 0748−67 is via radiative cascades following photoionization.
While the intensity of the continuum level varies significantly, the shape of the continuum is constant. In addition, the equivalent widths of the spectral lines vary with the continuum intensity, but the actual emission-line fluxes remain constant throughout the observations. The absorption structure is most prominent when the continuum intensity is high, but the optical depths at the edges do not change. We must therefore always be looking through the same material.
Velocity Broadening
All of the emission lines show significant velocity broadening. This is illustrated in Fig. 3, which shows the O viii Lyα line overlaid with the instrument line spread function (LSF). Fitting the Lyα lines by convolving the LSF with Gaussian distributions, we measure velocity widths of σ = (2600 ± 490) km s⁻¹ for Ne x Lyα, (1390 ± 80) km s⁻¹ for O viii Lyα, and (850 ± 180) km s⁻¹ for N vii Lyα. We find a direct correlation between the magnitude of the velocity width and the degree of ionization. We measure the shifts in the line centroids to be less than 0.020 Å for all of the lines. This means that the systemic velocity of the line-emitting plasma with respect to the line of sight is less than 300 km s⁻¹.
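A minimal version of such a fit is sketched below: a Gaussian intrinsic profile is convolved with a tabulated LSF and optimized against the observed line with scipy. The LSF shape, wavelength grid, and "observed" data are synthetic stand-ins, not the RGS calibration products.

import numpy as np
from scipy.optimize import curve_fit

c_kms = 2.998e5
lam0 = 18.97                                   # O VIII Ly-alpha rest wavelength, Angstrom
grid = np.linspace(18.7, 19.2, 301)

# Stand-in LSF: a narrow Gaussian of FWHM ~0.05 Angstrom.
lsf = np.exp(-0.5 * ((grid - lam0) / (0.05 / 2.355)) ** 2)
lsf /= lsf.sum()

def model(lam, amp, sigma_kms):
    """Gaussian line of velocity width sigma_kms convolved with the LSF."""
    sigma_lam = sigma_kms / c_kms * lam0
    intrinsic = amp * np.exp(-0.5 * ((lam - lam0) / sigma_lam) ** 2)
    return np.convolve(intrinsic, lsf, mode="same")

# Synthetic 'observed' line: sigma = 1400 km/s plus Gaussian noise.
rng = np.random.default_rng(2)
obs = model(grid, 1.0, 1400.0) + rng.normal(0.0, 0.02, grid.size)

popt, pcov = curve_fit(model, grid, obs, p0=[1.0, 1000.0])
print(f"fitted sigma = {popt[1]:.0f} +/- {np.sqrt(pcov[1, 1]):.0f} km/s")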
Density Sensitivity
We can use the line ratios in the He-like complexes to put limits on the density of the plasma in the different ionization regions. At sufficiently high densities the upper energy level of the forbidden line transition is collisionally depopulated, suppressing the forbidden line in favor of the intercombination line (Fig. 4). We have measured upper limits to the flux in the forbidden lines and compared them to the flux in the intercombination lines to determine lower limits to the electron densities. For O vii we find an upper limit of 0.19 for the ratio, which corresponds to a lower limit to the electron density of 2.0 × 10¹² cm⁻³. For Ne ix we measure an upper limit of 0.21, which corresponds to a lower limit of 7 × 10¹² cm⁻³ (Porquet & Dubau 2000).
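The conversion from a measured line-ratio limit to a density limit can be sketched with the standard parameterization R(n_e) = R₀ / (1 + n_e/N_c), where R₀ is the low-density forbidden-to-intercombination ratio and N_c the critical density. The atomic parameters below are rough illustrative numbers, not the values actually used from Porquet & Dubau (2000), so the outputs only approximate the quoted limits.

def density_lower_limit(r_upper, r0, n_crit):
    """Invert R(n_e) = r0 / (1 + n_e/n_crit) for an upper limit on R,
    returning the corresponding lower limit on the electron density."""
    return n_crit * (r0 / r_upper - 1.0)

# Rough illustrative atomic parameters (assumed):
#   O VII : R0 ~ 3.9, critical density ~ 3e10 cm^-3
#   Ne IX : R0 ~ 3.1, critical density ~ 6e11 cm^-3
print(f"O VII : n_e > {density_lower_limit(0.19, 3.9, 3e10):.2e} cm^-3")
print(f"Ne IX : n_e > {density_lower_limit(0.21, 3.1, 6e11):.2e} cm^-3")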
Discussion
From the observed spectral behavior and the derived source parameters we can construct an empirical model of the circumstellar material. The large equivalent widths of the emission lines require that the emission regions subtend a fairly large solid angle in order to be sufficiently illuminated by the central source. These regions must also extend high above the plane of the binary, or else we would have observed eclipses or other modulations correlated with the orbital phase. We can estimate the geometry of this material by comparing the observed absorption and emission in the hydrogen-like and helium-like oxygen ions. For a plasma in photoionization equilibrium the ionization rate is equal to the recombination rate everywhere in the gas. This means that the number of photons that are absorbed by a particular ion can be directly compared to the number of photons emitted by that ion. Given the efficiency of cascade through a particular line transition, η_line, this can be expressed as

N_line = η_line N_abs,

where N is the photon flux (Sako et al. 1999). The ratio of observed absorption to observed emission is then determined by the geometry of the system. For example, for a disk oriented with its axis along the line of sight, photons will be absorbed in the surrounding material but then re-emitted isotropically, so that the observed line emission would be higher than expected from the observed absorption. We see approximately three times more absorption than expected in each of the ions. The absorbing material must therefore be flattened along our line of sight with a solid angle of roughly (1/3)4π. Since we know that the system is highly inclined, this material must be aligned with the plane of the binary and therefore with the accretion disk.
The fact that the emission lines show large velocity broadening with no systemic Doppler-shifts suggests that the observed widths are not associated with net outflow or inflow but rather with orbital motion around the central source. Using the ionization parameters of formation calculated as described in Sako et al. (1999) (see Table 1 for these values) we find a correlation with velocity that is consistent with an orbital velocity structure; Ne x, which has the largest ionization parameter and is preferentially emitted closest to the central source, is observed to be moving with the largest velocity while N vii, which has the lowest ionization parameter and should be emitted farthest from the source, is observed to be moving with the lowest velocity.
The EXO 0748−67 system must therefore contain a thickened accretion disk where the orbiting material extends high above the binary plane. The plasma in the uppermost regions probably produces both the emission and absorption features in the observed spectra. The continuum intensity is independent of orbital phase, so the central source region must be extended; a compact X-ray source would show orbital modulation in such a highly inclined system. Since the flux in the emission lines remains constant, the source luminosity must be constant; variations in luminosity would cause changes in the ionization structure and therefore changes in both the line flux and the line ratios. The optical depths at the absorption edges are also constant, so we must always be looking through the same absorbing and emitting material. The observed variations in continuum intensity are best explained by local obscurations of the central source region. Interestingly, the velocity parameters derived for the optically-emitting material (∼2000 km s⁻¹ line broadening with 210 ± 92 km s⁻¹ amplitude Doppler shifts; Crampton et al. 1986) are very similar to those measured here. Although it must be in a very different region of ionization, it is possible that the optically-emitting material is also orbiting in this thickened disk.
As a simple test of this model we can compare the emission measure for an orbiting plasma to the empirical emission measure. Assuming a velocity profile with v = (GM/r)^(1/2) and a mass of 1.4 M⊙ for the neutron star, we can calculate the radial distances for each ion from the measured velocities. Then, using the ionization parameters of formation (ξ = L/(n_e r²)) and the radial distances, we can calculate the densities in these regions assuming a source luminosity of 1 × 10³⁶ erg s⁻¹. Defining the emission measure as EM = ∫ n_e² dV ≃ ΔΩ r² Δr n_e², we take Δr = r/2 and ΔΩ = (1/2)(1/3)4π to account for the estimated geometry. The resulting emission measures, as well as the components of the calculation, are given in Table 1. We find emission measures that range from ∼22 × 10⁵⁷ cm⁻³ for Ne x Lyα to ∼10 × 10⁵⁷ cm⁻³ for N vii Lyα.
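The chain of estimates in this paragraph is straightforward to reproduce numerically. The sketch below does so for O viii, taking the measured velocity width as the Keplerian speed; the ionization parameter of formation is set to an assumed value of ξ = 100 because the actual Table 1 values are not reproduced here, so the result is only an order-of-magnitude check.

import math

G = 6.674e-8                    # gravitational constant, cgs
M = 1.4 * 1.989e33              # neutron star mass, g
L = 1.0e36                      # source luminosity, erg/s
xi_form = 100.0                 # assumed ionization parameter of formation

v = 1390e5                      # O VIII Ly-alpha velocity width, cm/s
r = G * M / v**2                # radius where the Keplerian speed matches v
n_e = L / (xi_form * r**2)      # from xi = L / (n_e r^2)

d_omega = 0.5 * (1.0 / 3.0) * 4.0 * math.pi     # estimated solid angle
em = d_omega * r**2 * (0.5 * r) * n_e**2        # EM ~ dOmega r^2 (r/2) n_e^2
print(f"r = {r:.2e} cm, n_e = {n_e:.2e} cm^-3, EM = {em:.2e} cm^-3")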
To calculate the empirical emission measure we divided the line luminosity by the line power evaluated at the ionization parameter of formation. The line power for each ion was calculated using the HULLAC atomic codes (Bar-Shalom et al. 1998) following the procedure described in Sako et al. (1999). We adjusted our measured line fluxes to account for interstellar absorption using the cross sections of Morrison & McCammon (1983) with a column density of 1.1 × 10²¹ cm⁻². We assumed a distance to the source of D = 10 kpc. The resulting emission measure for each ion, assuming solar abundances (Anders & Grevesse 1989), is included in Table 1. For Ne x and N vii we find emission measures of 2.1 and 2.6 × 10⁵⁷ cm⁻³. For O viii we find an emission measure of 0.6 × 10⁵⁷ cm⁻³. This suggests an underabundance of oxygen that may have interesting implications for the evolutionary state of the companion star. Adopting the factor of two uncertainty in the empirical emission measure that is estimated by Sako et al. (1999), we find that the predicted emission measures are roughly a factor of five higher than the empirical emission measure except at O viii. Considering that the predicted emission measures are highly dependent on our geometric assumptions and on a velocity profile that is only strictly valid for a thin Keplerian disk, the agreement with the empirical emission measures is remarkably good.
This empirical model of a thickened disk is difficult to understand theoretically. For a disk in hydrostatic equilibrium, the vertical height at a particular radius should be roughly given by the product of the radius and the ratio of the sound speed of the gas to its local orbital velocity. Here, we can directly infer the sound speed from the ionization parameter and the orbital velocity from the widths of the lines, yet we infer a disk height that grossly violates this condition. However, despite the theoretical challenge that this presents, we find no alternative model that satisfies the empirical constraints. The large observed line equivalent widths and the clear evidence for intrinsic absorption essentially guarantee that this material is well out of hydrostatic equilibrium. Hopefully, as high-resolution spectroscopic observations of other LMXBs accumulate, we may gain a clearer understanding of how such accretion flows can be formed and maintained. | 2014-10-01T00:00:00.000Z | 2000-10-31T00:00:00.000 | {
"year": 2000,
"sha1": "2585ee6e89a87ca4088037f0795fd6ac3496d096",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2001/01/aaxmm01.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "6fa952faa1cf36d52abbcddc1c093397ef54721e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3085711 | pes2o/s2orc | v3-fos-license | Mannan Oligosaccharides in Nursery Pig Nutrition and Their Potential Mode of Action.
Simple Summary
The aim of the paper is to provide a review of mannan oligosaccharide products in relation to their growth promoting effect and mode of action. Mannan oligosaccharide products maintain intestinal integrity and the digestive and absorptive function of the gut in the post-weaning period in pigs and enhance disease resistance by promoting antigen presentation. We find that dietary MOS supplementation has growth promoting effects in pigs kept in a poor hygienic environment, while the positive effect of MOS is not observed in healthy pig herds with high hygienic standards.
Abstract
Mannan oligosaccharides (MOSs) are often referred to as one of the potential alternatives for antimicrobial growth promoters. The aim of the paper is to provide a review of mannan oligosaccharide products in relation to their growth promoting effect and mode of action based on the latest publications. We discuss the dietary impact of MOSs on (1) microbial changes, (2) morphological changes of gut tissue and digestibility of nutrients, and (3) immune response of pigs after weaning. Dietary MOSs maintain the intestinal integrity and the digestive and absorptive function of the gut in the post-weaning period. Recent results suggest that MOS enhances the disease resistance in swine by promoting antigen presentation, thereby facilitating the shift from an innate to an adaptive immune response. Accordingly, dietary MOS supplementation has a potential growth promoting effect in pigs kept in a poor hygienic environment, while the positive effect of MOS is not observed in healthy pig herds with high hygienic standards that are able to maintain a high growth rate after weaning.
Introduction
Mannan oligosaccharides (MOSs) are often referred to as a potential alternative for antibiotic growth promoters. Although they are non-digestible oligosaccharides, the mode of action of MOSs differs from other prebiotics. By definition, prebiotics are non-digestible components of feed that stimulate the growth and/or activity of beneficial bacteria in the digestive system and promote the gut and general health of the host [1]. MOSs have recently been assigned to nutricines based on the reasoning that they are not direct nutrients either for intestinal microbiota or for the host, but potentially have a positive effect on the health and performance of farm animals [2].
The first studies of mannan products investigated their potential to adhere to the mannose-specific lectin on the surface of E. coli [3]. Results of in vitro studies suggested that dietary MOS could reduce the colonization of pathogenic bacteria in the gut, which was indeed confirmed later in animal trials, particularly with poultry [4,5]. Dietary MOS supplementation became prevalent in the 1990s as a growth promoter in broiler and turkey feeding [6,7] and to a lesser extent in pig feeding [8]. Relevant data show that mannan products can be efficiently used in two critical periods of swine production, i.e., in piglets during the nursery period and in sows during late gestation and lactation. Numerous publications reported that supplementation of the sow diet with dietary MOS (2 g/kg or 5 g/day/sow) in the last 2-3 weeks of gestation and during lactation improved the growth rate of piglets [9][10][11]. The data on weaned pigs are less consistent in this respect. The mode of action of mannan-containing products has been investigated for approximately 20 years and the following underlying mechanisms have been identified: mannans potentially affect (i) the intestinal microbiota, (ii) the morphology of gut tissue and thus the digestibility of nutrients, (iii) the immune response of farm animals; and (iv) the supposed toxin-binding ability of mannan-containing yeast cell derivatives. The latter, however, is attributed principally to the β-glucan content [12]; therefore, this property is not discussed in the present paper. Based on the effects listed above, it is suggested that dietary MOS is able to support gut recovery of piglets after weaning. Excellent reviews have been published on the dietary effect of MOSs [12][13][14]; however, a few details of the underlying mechanisms remained unexplained. Since some recent results have filled gaps in our knowledge, the aim of the present paper is to provide an overview of MOS products in relation to their growth promoting effect and mode of action based on the latest publications.
Post-Weaning Changes within the Gastrointestinal Tract of Pigs
Weaning is a stressor for piglets that is usually associated with a dramatic feed refusal. The immediate post-weaning anorexia results in the alteration of gut integrity leading to different physiological changes in the gastrointestinal tract (GIT), such as morphological and functional changes, a shift in the microbiota population, and increased production of inflammatory cytokines (e.g., reviewed by Dong and Pluske [15]). Even a short period of starvation or malnutrition (2-3 days) results in villus atrophy [16] that weakens the absorption capacity of the intestine by reducing the surface area of the gut wall. It is also well known that brush border enzyme activity (aminopeptidase N, dipeptidyl-peptidase-4 [16]; sucrase [17]) and pancreatic enzyme activity (trypsin, chymotrypsin, amylase, lipase [18]) decline drastically right after weaning. The temporal changes induced in the gut after weaning can be divided into an acute phase of 5-7 days and an adaptive phase of 9-10 days [19]. The changes in the acute phase definitely result in a poorer digestibility of nutrients. The higher rate of undigested nutrients, and particularly of protein, may lead to undesirable processes in the hindgut. The fermentation of N-containing compounds in the digesta yields ammonia that increases the gut pH and supports pathogenic bacteria. Since the microbiota in young piglets is unstable, the ability of the gut flora to block the colonization of harmful species is inadequate. Moreover, neither the innate nor the acquired immune defense of a 4-week-old pig can adequately respond to a pathogen challenge. Due to the low antibody concentration of sow milk at late lactation, the level of maternal antibodies is also low in the gut and blood of piglets. This is concurrent with insufficient endogenous antibody production at 4 weeks of age, which only begins to increase at 6-7 weeks of age.
These changes often result in post-weaning diarrhea and a drastic depression in growth rate. In the past, antibiotic growth promoters efficiently prevented the complex post-weaning symptoms. Nowadays all feed additives that promote gut health, sustain eubiosis in the intestine or boost the immune defense of the pigs can be effectively used in nursery feeds. MOSs are suggested to be one of the promising alternatives for antibiotic growth promoters.
Source and Chemical Traits of Mannan Products
MOSs are non-digestible carbohydrates that are composed of mannose blocks and can be found in the yeast cell wall in complex formation. The composition of the yeast cell is determined by the species, the growth phase and the environmental factors of fermentation. The cell wall is approximately 25-30% of the dry weight of the cell. Saccharomyces cerevisiae is a well-known yeast in the bakery and brewery industries and its derivative is used exclusively as a MOS product in animal nutrition. The cell wall of S. cerevisiae contains both mannan-proteins and β-glucans. The main constituents of the outer cell wall are mannan polymers with α(1-6) and α(1-2) bonds or, to a lesser extent, α(1-3)-bonded side chains (see more details in [12]). The enzymes of the host or of the intestinal bacteria are unable to cleave these bonds and thus MOS has no direct nutritive value, but it has been shown to be able to maintain gut health. It can be concluded from the relevant literature that although mannan-containing feed additives are almost exclusively derivatives of S. cerevisiae, due to the processing and production technology, their chemical composition and therefore biological efficiency might be different [13].
The Effect of Dietary MOS on the Intestinal Microbiota
The development of the beneficial microbiota and the maintenance of eubiosis play a crucial role in the defense mechanisms and gut health. There is increasing evidence suggesting that, unlike in young pigs, the composition of the GIT microflora in a healthy adult host remains remarkably stable [20]. The steady state can, however, promptly change upon immune suppression or during infection. In general, supporting the growth of Bifidobacteria and Lactobacilli in the hindgut is supposed to have a positive impact on the host; however, in stress situations, such as weaning, the number of beneficial bacteria often declines, while the number of harmful species, like E. coli and Salmonella, increases. In vitro studies show that in the presence of mannan products the enteric pathogens attach to the mannan compounds in the gut lumen instead of the epithelia, which reduces their colonization. Results of relevant studies suggest that dietary MOS supplementation can reduce the number of harmful bacteria in the hindgut if the pathogen exposure is high, such as post-infection [21][22][23][24]. A small number of other trials, however, failed to prove that the number of (facultative) pathogens was indeed affected by dietary MOS [25,26]. Singboottra [27] concluded that, depending on their chemical structure (bonds and proportion of mannose), mannan products might differ in their potential to reduce the numbers of E. coli and Salmonella typhimurium.
The microbiota exists in a dynamic state; increasing the number of any bacterial species may result in the decrease of another. Some studies show that the reduction in the number of pathogenic bacteria in response to dietary MOS supplementation was indeed associated with an increase of beneficial flora, particularly lactobacilli [21,23,28], but this finding is not consistent throughout the literature [24,29,30]. Sims et al. [5] found in turkeys that dietary MOS supported the growth of Enterococcus. These bacteria produce not only short-chain fatty acids but also bacteriocins and enterocins and thus enhance the competition and development of beneficial flora [31]. Other results also confirm the positive effect of MOS, since it is reported to reduce the ammonia concentration in the gut [32].
Literature data show that dietary MOS supplementation can efficiently reduce the number of pathogens post-infection; however, it is unable to modify consistently the quantity of harmful species under adequate hygienic conditions. The shift in population of beneficial bacteria is not consistent in the different studies, therefore it can be concluded that although dietary MOS may support the maintenance of eubiosis, it probably has no real prebiotic effect.
The Effect of Dietary MOS on Gut Morphology
The first studies proving that MOS supplementation has a significant impact on gut morphology were conducted with broilers [33]. Results showed that dietary mannan products increase the villus height:crypt depth ratio in young broilers [33] and turkeys [14] and also in weaned piglets [34,35]. In a recent study with nursery pigs, Poeikhampha and Bunchasak [36] found that 3 g MOS/kg in the diet resulted in increased crypt depth in the jejunum. An increased villus height:crypt depth ratio is generally associated with a larger absorptive surface; this ratio is, however, usually reduced during the initial post-weaning period [37]. There are several hypotheses on the beneficial effect of MOS on intestinal morphology, but not all of them were proven in swine. As discussed earlier, the reduction in the Enterobacteria population [24] and/or increase in beneficial flora [21,23,28] enhance the short-chain fatty acid production in the intestine, which positively affects the recovery of the epithelia. A number of studies reported that dietary MOS supplementation increases the lactic acid and/or volatile fatty acid production in the hindgut [5,28,36]. The microflora as a whole has a trophic effect on the epithelium, which can lead to a faster turnover rate [20]. In particular, butyric acid has anti-inflammatory properties, alleviating the hypersensitive reaction of the gut wall associated with the post-weaning period. In turkeys, MOS promoted increased production of the mucus gel layer [14] and, in pigs, a prompt recovery of the intestinal mucosal cells [38]. Moreover, enhanced gut maturation was reported in broilers [33,39].
The post-weaning morphological changes in the gut might be alleviated if the pig feed contains MOSs. It seems, however, that the structural changes of the epithelial cells are associated with functional changes of the gut tissue. Kim et al. [40] found slightly better apparent ileal amino acid digestibility for valine, isoleucine, leucine, lysine and arginine when the piglet diet was supplemented with 1 g MOS product/kg of feed, but the differences were not statistically significant. Nochta et al. [41] reported that supplementation of a mannan-containing feed additive significantly improved the apparent ileal digestibility of nutrients, particularly that of indispensable amino acids (Lys, Met, M+C, Thr), Ca and P (Table 1). When the diet was supplemented with MOS at the rate of 2 g/kg, the apparent ileal digestibility of nutrients was similar to that in the treatment containing an antibiotic growth promoter. However, further increase of MOS (4 g/kg) did not further improve the ileal digestibility data. In addition to a lessened erosion of the absorptive surface, the better digestibility might be explained at least partly by the lower pH attributed to the more active fermentation by hindgut bacteria. Beneficial microflora produces short-chain fatty acids that reduce the pH of the ileal digesta, which can thus result in higher protein hydrolysis and improved protein and amino acid, as well as Ca and P digestibility [41]. The relatively large improvement in apparent threonine digestibility generally indicates a decreased endogenous threonine excretion.
Table 1. The effect of dietary mannan oligosaccharide (MOS) supplementation on the apparent ileal digestibility of nutrients in weaned pigs (%) [41].
Therefore, the data shown in Table 1 suggest that the endogenous protein loss can be reduced by MOS supplementation, likely due to a faster recovery of the intestinal mucosal cells [38]. Due to the high threonine content of gut cell and mucus protein, the ileal threonine excretion can be twice as high as the lysine or methionine+cystine excretion [42]. The higher threonine digestibility might be associated with a lower turnover of the gut wall layer and less endogenous N losses. However, there are no data available in the literature that would report a reduced endogenous protein and/or threonine loss when dietary MOS is fed.
In conclusion, dietary mannan products can improve the gut morphology in the post-weaning period, which in turn has a positive impact on nutrient supply and on the first line of defense in nursery pigs.
The Effect of Dietary MOS on the Immune Response of Weaned Pigs
Dietary MOS has both an indirect and a direct effect on the immune response of farm animals. Since the microbiota can modulate the local immunity of the host, the MOS-induced shift in gut flora may result in changes of certain immune variables.
Mounting evidence indicates that dietary mannan products directly enhance the immune competence of pigs, particularly that of sows and weaned pigs. Numerous studies with rats, dogs and chickens show that dietary MOS enhances the secretory IgA in different segments of the intestinal mucosa [43][44][45]. The higher mucosal IgA production is likely attributable to activation of the local immune defense through the mannose-binding receptors located on the gut surface. Davies et al. [46] reported that 21 days of feeding 2 g of phosphorylated mannan per kg diet altered the T lymphocyte repertoire of the jejunal lamina propria. The local immune function initiates the systemic immune response of the host, which is frequently reported in response to MOS supplementation.
Recent results of the present authors show that the non-specific cellular immune variables are modulated by dietary MOS, and the level of supplementation is decisive in this respect (Table 2). Nochta et al. [47] found that a lower dose of MOS (1 g/kg) increased the responsiveness in the lymphocyte stimulation test (LST) with non-specific mitogens (pokeweed mitogen: PWM, Concanavalin A: ConA and phytohaemagglutinin: PHA) in weaned pigs; higher doses (2 or 4 g/kg), however, had no influence or even impaired it. Although Davis et al. [48,49] did not prove that MOS increases the responsiveness of LST with PWM or PHA mitogens, their results with regard to the ratio of CD4+/CD8+ lymphocytes could also indicate that MOS supplementation enhances the establishment of a mature T cell repertoire within the gastrointestinal tract of 3-week-old weaned pigs. Moreover, in the study of Davies et al. [46] the percentage of neutrophils tended to increase and the percentage of lymphocytes significantly increased in the peripheral blood when piglets were fed 2 g of mannan product/kg feed. The authors supposed that the alteration in systemic immune function was possibly an indirect response to changes that were occurring in the gastrointestinal immunity [46].
Recent results suggest that dietary mannan supplementation is worthwhile in case of an immune challenge. Dietary MOS supplementation (1 g MOS/kg feed) enhanced the specific immune response, particularly the virus neutralization, 2 weeks after immunization with inactivated Aujeszky virus in a study carried out with weaned pigs [47]. In agreement with those results, Franklin et al. [50] reported that the specific immunity was enhanced by MOS supplementation as evidenced by greater serum rotavirus neutralization titers in cows supplemented with 10 g MOS daily compared with control cows. The humoral immune response is the result of the activation of the B-cells responsible for the production of antigen-specific immunoglobulin. In a serial study, White et al. [21] reported that the serum IgG concentration tended to increase in early weaned piglets fed with mannan-containing yeast. Moreover, Shashidhara and Devegowda [51] found significantly higher specific antibody titers in the serum of broiler breeders after vaccination with bursal disease virus when the diet was supplemented with 0.5 g MOS/kg feed.
Table 2. Effect of dietary MOS supplementation on non-specific and specific cellular immune response of weaned pigs (lymphocyte stimulation index) [47]. RMSE: root mean square error; M0: negative control without antibiotic or MOS; M1: supplementation of 1 g AgriMos/kg feed; M2: supplementation of 2 g AgriMos/kg; M4: supplementation of 4 g AgriMos/kg feed; AB: antibiotic growth promoter containing feed with 0.2 g Maxus-200 (40 ppm avilamycin supplementation)/kg feed; NI: non-immunized group fed no supplementation. a,b,c common letters within rows indicate no differences at P < 0.05.
Notes to Table 2: ** There was no effect of interaction between treatment and replication. † Replication effect was significant (P < 0.05).
The earlier and stronger immune response is essential for the livestock in order to moderate or eliminate the antigen attack. There are two potential modes of action of dietary MOS as discussed by Newman [52] and Franklin et al. [50]. In the present paper those pathways are summarized briefly. The first underlying mechanism involves the presence of a collectin, a mannose-binding protein in the blood serum that may act as opsonin. Opsonins are molecules that make foreign antigens more susceptible to the action of the phagocytes. Mannose-binding proteins may bind to mannose-containing structures of a number of viruses and bacteria and trigger the complement cascade of the host immune system [52]. Nielsen et al. [53] reported increased presence of mannose-binding proteins in chickens during virus infection. It is likely that MOS stimulates the production of mannose-binding proteins resulting in improved phagocytosis, activation of the complement system, and enhancement of the immune response [50]. The other possible mode of action of MOS involves the natural production of antimannan antibodies [50]. The antimannan antibodies are directed against an oligosaccharide-based epitope of the viruses and microbes and these carbohydrate-specific antibodies may be produced during a normal immune response against the intestinal microflora. Dietary MOS probably enhances the production of these antimannan antibodies at the gut level, which in turn may enter the blood stream allowing for an enhanced response to a viral challenge [21,47].
In a recent study with 3-week-old weaned pigs, Che et al. [54] found that MOS supplementation was associated with rapidly increased numbers of leukocytes, lymphocytes, and neutrophils at the early stage of porcine reproductive and respiratory syndrome (PRRS) virus infection (7 days post-infection). Results of the same study and an earlier trial conducted with mice suggest that dietary MOS has the potential to alleviate inflammation and has an anti-allergic effect, caused by the activation of cellular immunity [55]. Ozaki et al. [55] reported a lower number of peritoneal acidophils in MOS-fed mice compared to control diet-fed ones; moreover, MOS treatment reduced interleukin-10 production and tended to suppress ovalbumin-specific IgE in serum. Che et al. [56] provided supporting evidence for the potential of dietary MOS (2 g/kg) to alleviate the hypersensitive reaction post-infection. It has been proved that MOS down-regulates the expression of non-immune and immune genes in pig leukocytes, perhaps providing benefits by enhancing the pig's immune responses to an infection, while preventing over-stimulation of the immune system [56,57]. It also altered the expression of genes regulating pathogen detection in the peripheral blood mononuclear cells [56] and thus MOS may enhance disease resistance in pigs.
Based on the discussed data it can be concluded that in the most critical period, right after weaning, dietary MOS supplementation may boost the immune response of the pigs and save nutrients for growth in case of infection.
The Effect of Dietary MOS on the Growth Performance of Weaned Pigs
Depending on the duration of starvation in the post-weaning period, the growth performance and the immune defense of the piglets suffer to different extents. Some data show that the growth rate right after weaning has a significant impact on the pig performance in the growing and fattening phases [58]. Although the pig has the ability to compensate, the bigger the stress, the smaller the chance of recovery. This is certainly true for intensive genotypes. Therefore, any feeding strategy or feed supplement that alleviates the reduction in growth rate post-weaning may have a positive effect on the efficiency of pork production.
The reported growth-promoting effect of MOS in nursery pigs is inconsistent. Some studies report no benefits [25,26,59], while others found an improved rate of daily gain and/or feed efficiency in weaned pigs [24,48,60]. The results appear to be better in the case of younger pigs, especially in a challenged environment [8,61]. Based on the above-mentioned mode of action, the effect of MOS supplementation on piglet performance is affected by different factors, principally by weaning age, health status, duration of feeding and the amount of MOS addition. If the piglets are weaned at 4 weeks of age (as is common in the European Union) or even earlier (as in the United States), the intestine is less mature and weaning is associated with a higher rate of gut epithelial atrophy than at a later age. Thus in the case of epithelial atrophy the positive effect of dietary mannan on gut wall repair can be demonstrated. Accordingly, pig performance (e.g., growth rate and/or feed conversion) was reported to be significantly enhanced in studies where the gut cell wall atrophy was reduced by dietary MOS supplementation [34][35][36].
In a meta-analysis involving studies with 54 comparisons, Miguel et al. [61] found that dietary MOS supplementation improves the growth rate mainly in the first 2 weeks of nursery, which is in agreement with the results showing that dietary mannans are associated with enhanced gut integrity in the post-weaning period. The same report suggests that pigs that received 1 or 2 g of the used mannan product (Bio-Mos) per kg of feed had a more pronounced growth response than pigs fed diets with supplementation of 3 or 4 g/kg [61]. This fact is supported by our findings as well; i.e., in contrast to 4 g/kg, 1 or 2 g of a MOS product (AgriMos) per kg of feed increased the ileal digestibility of nutrients [41], as discussed earlier in this paper.
Recent studies on growth enhancers in general are in accordance with the earlier antibiotic growth promoter studies: under higher environmental pressure the treatment yields a larger improvement of the zootechnical parameters (such as average daily gain and feed conversion); however, the growth-promoting effect is low if the animal performance is close to the genetic potential [62]. It has to be noted, however, that an activated immune system requires an extra nutrient supply; therefore, fewer amino acids and less energy are available for growth in case of an immune challenge. Considering that dietary MOS supplementation has the potential to boost immune functions and defense mechanisms of the pig, the fact that the higher immune response is usually not associated with a lower growth rate is a benefit per se.
Conclusions
Based on the relevant literature it is likely that mannans help to maintain the intestinal integrity and the digestive and absorptive function of the gut post-weaning. Therefore, the malabsorption syndrome associated with this period can be alleviated with dietary MOS supplementation. Recent results suggest that MOS enhances the disease resistance in swine by promoting antigen presentation, thus enhancing the shift from an innate to an adaptive immune response. Accordingly, dietary MOS supplementation may have a growth-promoting effect in pigs kept in a poor hygienic environment, while the positive effect of MOS is not observed in healthy pig herds with high hygienic standards that are able to maintain their high growth rate post-weaning. In addition to the economic benefits resulting from the maintenance of gut health and support of the defense mechanisms, the use of medication and drugs can be reduced during the post-weaning period with the use of dietary MOS supplementation, which enables healthier, safer, and therefore more desirable pork production. | 2016-05-21T08:49:11.215Z | 2012-05-23T00:00:00.000 | {
"year": 2012,
"sha1": "33b2dcc2401e0a0431af87bb49a5d1fc3b5afd19",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/2076-2615/2/2/261/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "33b2dcc2401e0a0431af87bb49a5d1fc3b5afd19",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
16339471 | pes2o/s2orc | v3-fos-license | On the State Complexity of the Shuffle of Regular Languages
We investigate the shuffle operation on regular languages represented by complete deterministic finite automata. We prove that $f(m,n)=2^{mn-1} + 2^{(m-1)(n-1)}(2^{m-1}-1)(2^{n-1}-1)$ is an upper bound on the state complexity of the shuffle of two regular languages having state complexities $m$ and $n$, respectively. We also state partial results about the tightness of this bound. We show that there exist witness languages meeting the bound if $2\le m\le 5$ and $n\ge2$, and also if $m=n=6$. Moreover, we prove that in the subset automaton of the NFA accepting the shuffle, all $2^{mn}$ states can be pairwise distinguishable, and an alphabet of size three suffices for that. It follows that the bound can be met if all $f(m,n)$ states are reachable. We know that an alphabet of size at least $mn$ is required provided that $m,n \ge 2$. The question of reachability, and hence also of the tightness of the bound $f(m,n)$ in general, remains open.
The shuffle u ⧢ v of words u and v over Σ is the set u ⧢ v = {u₁v₁u₂v₂···uₖvₖ | u = u₁u₂···uₖ, v = v₁v₂···vₖ, where u₁, ..., uₖ, v₁, ..., vₖ ∈ Σ*}. The shuffle of two languages K and L over Σ is defined by K ⧢ L = ⋃ {u ⧢ v | u ∈ K and v ∈ L}. Note that the shuffle operation is commutative on both words and languages.
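The definition translates directly into a small Python sketch that enumerates all interleavings recursively; it is added here only as an executable restatement of the definition above.

```python
from functools import lru_cache

def shuffle_words(u, v):
    """u ⧢ v: all interleavings of u and v that preserve the order of each word."""
    @lru_cache(maxsize=None)
    def go(i, j):
        if i == len(u) and j == len(v):
            return frozenset({""})
        out = set()
        if i < len(u):
            out |= {u[i] + w for w in go(i + 1, j)}
        if j < len(v):
            out |= {v[j] + w for w in go(i, j + 1)}
        return frozenset(out)
    return set(go(0, 0))

def shuffle_languages(K, L):
    """K ⧢ L as the union of the word shuffles u ⧢ v over u in K, v in L."""
    return {w for u in K for v in L for w in shuffle_words(u, v)}

print(sorted(shuffle_words("ab", "c")))   # ['abc', 'acb', 'cab']
# Commutativity on languages:
print(shuffle_languages({"ab"}, {"c"}) == shuffle_languages({"c"}, {"ab"}))  # True
```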
The state complexity of the shuffle operation was first studied by Câmpeanu, Salomaa, and Yu [2], but they considered only bounds for incomplete deterministic automata. In particular, they proved that 2^{mn} − 1 is a tight upper bound for that case. Since we can convert an incomplete deterministic automaton into a complete one by adding the empty state, it follows that 2^{(m−1)(n−1)} − 1 is a lower bound for the case of complete deterministic automata. Here we show that this lower bound can be improved, and we derive an upper bound for two regular languages represented by complete deterministic automata, but the question whether this bound is tight remains open.
A nondeterministic finite automaton (NFA) is a quintuple A = (Q, Σ, δ, s, F), where Q is a finite non-empty set of states, Σ is a finite alphabet of input symbols, δ : Q × Σ → 2^Q is the transition function which is extended to the domain 2^Q × Σ* in the natural way, s ∈ Q is the initial state, and F ⊆ Q is the set of final states. The language accepted by NFA A is the set of words L(A) = {w ∈ Σ* | δ(s, w) ∩ F ≠ ∅}.
An NFA A is deterministic and complete (DFA) if |δ(q, a)| = 1 for each q in Q and each a in Σ. In such a case, we write δ(q, a) = q ′ instead of δ(q, a) = {q ′ }. A DFA is minimal (with respect to the number of states) if all its states are reachable, and no two distinct states are equivalent.
Every NFA A = (Q, Σ, δ, s, F) can be converted to an equivalent DFA A′ = (2^Q, Σ, δ, {s}, F′), where F′ = {R ∈ 2^Q | R ∩ F ≠ ∅}. The DFA A′ is called the subset automaton of NFA A. The subset automaton may not be minimal since some of its states may be unreachable or equivalent to other states.
Let D = (2^{Q_K × Q_L}, Σ, δ′, {(q_K, q_L)}, F′) be the subset automaton of N. If |Q_K| = m and |Q_L| = n, then NFA N has mn states. It follows that DFA D has at most 2^{mn} reachable and pairwise distinguishable states. However, this upper bound cannot be met, as we will show.
In the sequel, we assume that Q_K = {1, 2, . . . , m}, q_K = 1, Q_L = {1, 2, . . . , n}, and q_L = 1. We say that a state (p, q) of NFA N is in row i if p = i, and it is in column j if q = j. Proposition 1. Let a ∈ Σ. Let S be a state of D. Let π_row(S) = {p | (p, q) ∈ S for some q}, and π_col(S) = {q | (p, q) ∈ S for some p}. Then π_x(S) ⊆ π_x(S · a) for x ∈ {col, row}.
□ We claim that in the subset automaton D, every reachable subset S of Q_K × Q_L must contain a state in column 1 and a state in row 1; that is, it must satisfy the following condition.
Condition (C):
There exist states (s, 1) and (1, t) in S for some s ∈ Q_K and t ∈ Q_L. Lemma 2. Every reachable subset S of the subset automaton D satisfies Condition (C).
□
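The construction of N and the invariant of Lemma 2 can be checked computationally. The following Python sketch builds the subset automaton of the shuffle NFA for two given complete DFAs and verifies Condition (C) on every reachable subset; the two toy DFAs below are invented for illustration and are not the witness automata of Fig. 1.

```python
def shuffle_subset_automaton(delta_K, delta_L, sigma, qK=1, qL=1):
    """BFS of the subset automaton D of the shuffle NFA N.

    delta_K, delta_L: complete DFA transitions as dicts {(state, letter): state}.
    Returns the set of reachable subsets of Q_K x Q_L (as frozensets),
    starting from the initial subset {(qK, qL)}.
    """
    start = frozenset({(qK, qL)})
    reachable, stack = {start}, [start]
    while stack:
        S = stack.pop()
        for a in sigma:
            # In N, state (p, q) moves on a to (delta_K(p,a), q) and (p, delta_L(q,a)).
            T = frozenset({(delta_K[p, a], q) for (p, q) in S}
                          | {(p, delta_L[q, a]) for (p, q) in S})
            if T not in reachable:
                reachable.add(T)
                stack.append(T)
    return reachable

# Toy complete DFAs over {a, b} (illustrative only):
dK = {(1, 'a'): 2, (2, 'a'): 2, (1, 'b'): 1, (2, 'b'): 1}
dL = {(1, 'a'): 1, (2, 'a'): 2, (1, 'b'): 2, (2, 'b'): 2}
reach = shuffle_subset_automaton(dK, dL, 'ab')
# Lemma 2: every reachable subset has a state in row 1 and a state in column 1.
assert all(any(p == 1 for (p, _) in S) and any(q == 1 for (_, q) in S) for S in reach)
print(len(reach), "reachable subsets, all satisfying Condition (C)")
```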
Let K and L be two regular languages over Σ. If κ(K) = κ(L) = 1, then each of K, L, and K ⧢ L is either ∅ or Σ*, and κ(K ⧢ L) = 1; hence the bound f(1, 1) = 1 is tight. Now suppose that κ(K) = 1; here we have two possible choices for K, the empty language or Σ*. The first choice leads to κ(K ⧢ L) = 1. Hence only the second choice is of interest, where the language K ⧢ L = Σ* ⧢ L is the all-sided ideal [1] generated by L. If κ(L) = 2, the upper bound f(1, 2) = 2 is met by the unary language L = aa*. Hence assume that κ(K) = 1 and κ(L) ≥ 3. The next observation shows that in such a case, the tight bound is less than f(1, n) = 2^{n−1}.
In what follows we assume that m ≥ 2 and n ≥ 2. First, let us show that the upper bound f(m, n) cannot be met by regular languages defined over a fixed alphabet. Proposition 5. Let m, n ≥ 2. If all the subsets satisfying Condition (C) are reachable in the subset automaton D, then the alphabet Σ contains at least mn − 1 symbols. Proof. For s = 2, 3, . . . , m and t = 2, 3, . . . , n denote A_s = {(1, 1), (s, 1)}, B_t = {(1, 1), (1, t)}, and C_{s,t} = {(1, 1), (s, t)}. If all the subsets satisfying Condition (C) are reachable, then, in particular, all the subsets A_s, B_t, and C_{s,t} must be reachable. Let us show that all these subsets must be reached from some subsets containing state (1, 1) by distinct symbols.
Suppose that a set A_s is reached from a reachable set S with S ≠ A_s by a symbol a, that is, we have A_s = δ(S, a) and S ≠ A_s. The set A_s contains only states in column 1 and rows 1 or s. By Proposition 1, the set S may only contain states in column 1 and in rows 1 or s, that is, we have S ⊆ {(1, 1), (s, 1)}. Since S ≠ A_s, we must have S = {(1, 1)}.
Thus each A_s is reached from {(1, 1)} by a symbol a_s, each B_t is reached from {(1, 1)} by a symbol b_t, each C_{s,t} is reached from a set containing (1, 1) by a symbol c_{s,t}, and we must have δ({(1, 1)}, a_s) = A_s, δ({(1, 1)}, b_t) = B_t, and δ(T, c_{s,t}) = C_{s,t} for some set T containing (1, 1). Since all these target sets are distinct, it follows that all the symbols a_s, b_t, and c_{s,t} must be pairwise distinct. Therefore |Σ| ≥ (m − 1) + (n − 1) + (m − 1)(n − 1) = mn − 1. Unfortunately, this lower bound on the size of the alphabet is not tight, as is demonstrated by the following examples: (1) If m = n = 2, we have f(2, 2) = 10. Let Σ = {a, b, c, d}, let the DFAs K and L be as shown in Fig. 1, and let K and L be their languages. Then κ(K ⧢ L) = 10. We have used GAP [3] to show that the bound cannot be reached with a smaller alphabet, and that the DFAs of Fig. 1 are unique up to isomorphism.
(2) For m = 2 and n = 3, the minimal size of the alphabet of a witness pair is 6. We have verified this by a dedicated algorithm enumerating all pairs of non-isomorphic DFAs with 2 and 3 states. In contrast to the previous case, over a minimal alphabet there are more than 60 non-isomorphic DFAs of L (even if we do not distinguish them by sets of final states) that meet the bound with some K. One of the witness pairs is described below.
The bound mn − 1 on the size of the alphabet is not tight for m = n = 2, where an alphabet of size four is required. For any m, n ≥ 2 the subsets of {1, 2} × {1, 2} satisfying (C) must also be reachable, and to reach them we can use only transformations mapping 1 to either 1 or 2. There are only three such transformations counted in Proposition 5; thus we need one more letter.
Partial Results about Tightness
To prove that the upper bound f(m, n) of Equation (1) is tight, we must exhibit two languages K and L with state complexities m and n, respectively, such that κ(K ⧢ L) = f(m, n). As usual, we use DFAs to represent the languages: Let K and L be minimal complete DFAs for K and L. We first construct the NFA N as defined in Section 1, and we consider the subset automaton D of NFA N. We must then show that D has f(m, n) states reachable from the initial state {(1, 1)}, and that these states are pairwise distinguishable. We were unable to prove this for all m and n, but we have some partial results about reachability in Subsection 2.1, and we deal with distinguishability in Subsection 2.2.
Reachability
We performed computations verifying reachability of the upper bound for small values of m and n. These results are summarized in Table 1.
The computation in the hardest case with m = n = 6 took about 48 days on a computer with AMD Opteron(tm) Processor 6380 (2500 MHz) and 64 GB of RAM. Moreover, we verified that in all these cases, every subset of size at least 3 is directly reachable from some smaller subset. We also verified that for reachability in case of m = n = 3 an alphabet of size 12 is sufficient, and in case of m = n = 4 an alphabet of size 50 is sufficient. Using these results, we are going to prove reachability for all m, n with 2 ≤ m ≤ 5 and n ≥ 2.
Without loss of generality, the set of states of any n-state DFA is denoted by Q n = {1, 2, . . . , n}. Let T n be the monoid of all transformations of the set Q n . Let p, q ∈ Q n and P ⊆ Q n . Let 1 denote the identity transformation. Let (p → q) denote the transformation that maps state p to state q and acts as the identity on all the other states. Let (p, q) denote the transformation that transposes p and q.
Here we deal only with reachability, so final states do not matter. We assume that the sets of final states are empty in this subsection.
Let Σ_{m,n} = {a_{s,t} | s ∈ T_m and t ∈ T_n} be an alphabet consisting of m^m n^n symbols. If an input a induces transformations s in T_m and t in T_n, this will be indicated by a : s; t.
Define DFAs K_{m,n} = (Q_m, Σ_{m,n}, δ_m, 1, ∅) and L_{m,n} = (Q_n, Σ_{m,n}, δ_n, 1, ∅), where δ_m(p, a_{s,t}) = ps if p ∈ Q_m and δ_n(q, a_{s,t}) = qt if q ∈ Q_n. Let N_{m,n} be the NFA for the shuffle of the languages recognized by DFAs K_{m,n} and L_{m,n} as described in Section 1, and let D_{m,n} be the subset automaton of N_{m,n}. The NFA N_{m,n} has alphabet Σ_{m,n}, and so has an input letter for every pair of transformations in T_m × T_n. Therefore the addition of another input letter to the DFAs K_{m,n} and L_{m,n} cannot add any new set of states of N_{m,n} that would be reachable from the initial subset {(1, 1)}. Let m′ ≤ m and n′ ≤ n. Then the DFA K_{m′,n′} = (Q_{m′}, Σ_{m′,n′}, δ_{m′}, 1, ∅) (respectively, the DFA L_{m′,n′} = (Q_{n′}, Σ_{m′,n′}, δ_{n′}, 1, ∅)) is a sub-DFA of K_{m,n} (respectively, of L_{m,n}), in the sense that Q_{m′} ⊆ Q_m, Σ_{m′,n′} ⊆ Σ_{m,n}, and δ_{m′} ⊆ δ_m. As well, the NFA N_{m′,n′} is a sub-NFA of N_{m,n}. Note that D_{m,n} is extremal for the shuffle: every language K ⧢ L, where K and L are languages with state complexities m and n respectively, is recognized by some sub-DFA of D_{m,n} after possibly renaming some letters.
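For the smallest case this extremal construction is easy to explore exhaustively. The sketch below enumerates all m^m n^n = 16 letters of Σ_{2,2}, runs a breadth-first search over the subset automaton D_{2,2}, and compares the number of reachable subsets with f(2, 2) = 10 and with the number of valid subsets; per Table 1, all valid subsets are expected to be reachable here.

```python
from itertools import product

m = n = 2
Q = (1, 2)
T2 = list(product(Q, repeat=2))   # transformations of {1,2}: t maps i to t[i-1]
Sigma = list(product(T2, T2))     # Sigma_{2,2}: one letter per pair (s, t); 16 letters

start = frozenset({(1, 1)})
reachable, stack = {start}, [start]
while stack:
    S = stack.pop()
    for (s, t) in Sigma:
        # NFA move on a_{s,t}: (p, q) -> {(ps, q), (p, qt)}
        img = frozenset({(s[p - 1], q) for (p, q) in S}
                        | {(p, t[q - 1]) for (p, q) in S})
        if img not in reachable:
            reachable.add(img)
            stack.append(img)

valid = {S for S in reachable
         if any(p == 1 for (p, _) in S) and any(q == 1 for (_, q) in S)}
f = 2**(m*n - 1) + 2**((m-1)*(n-1)) * (2**(m-1) - 1) * (2**(n-1) - 1)
print(len(reachable), len(valid), f)   # expect: 10 10 10
```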
For the next lemma it is convenient to consider a subset S of states (p, q) of N m,n as an m × n matrix, where the entry in row p and column q is (p, q) if (p, q) ∈ S, and it is empty otherwise. We first introduce the following notions.
A subset S of Q_m × Q_n is valid if it satisfies Condition (C), that is, if it contains a state in row 1 and a state in column 1.
Lemma 8. Let S be a valid subset of Q_m × Q_n with the property that there are distinct i, i′ or j, j′ such that either row i′ contains row i or column j′ contains column j. Assume that every valid subset S′ of Q_{m′} × Q_{n′} is reachable if m′ < m, or n′ < n, or |S′| < |S|. Then S is reachable in D_{m,n}. Proof. If S contains an empty row or column, then without loss of generality we can renumber the n states of L_{m,n} in such a way that column n is the empty column in S. By the inductive assumption we know that S is reachable in D_{m,n−1} by some word w. Since N_{m,n−1} is a sub-NFA of N_{m,n}, S is reachable in D_{m,n} as well by the same word. Suppose that S has neither an empty row nor an empty column. By symmetry, it is sufficient to consider the case with distinct i and i′ such that row i′ contains row i. Let S′ = S \ {(i′, j) | (i, j) ∈ S for j ∈ {1, . . . , n}}. Since |S′| < |S|, the set S′ is reachable by assumption. To obtain S, we apply the letter that induces the transformation (i → i′); 1.
□ Lemma 9. Let S be a valid subset of Q_m × Q_n such that there is a column or a row with exactly one element. Assume that every valid subset S′ of Q_{m′} × Q_{n′} is reachable if m′ < m, or n′ < n, or |S′| < |S|. Then S is reachable in D_{m,n}. Proof. Recall that we can assume m ≥ 2 and n ≥ 2. We may assume that there is neither an empty row nor an empty column in S; otherwise S is reachable by Lemma 8. It is sufficient to consider the case involving a column, since the case involving a row follows by symmetric arguments. Let (p, q) be the only element in column q. If there are more elements in row p, then column q is contained in another column and by Lemma 8, the set S is reachable. Let S′ be the subset of Q_{m−1} × Q_{n−1} obtained by removing row p and column q, and renumbering the states to Q_{m−1} × Q_{n−1} in such a way that i ∈ Q_m becomes i − 1 if i > p and otherwise remains the same, and j ∈ Q_n becomes j − 1 if j > q and otherwise remains the same. We have that S′ is a valid subset, and by the inductive assumption it is reachable in D_{m−1,n−1} by some word u′; let u be the word corresponding to u′ in the original numbering of the states. We consider four cases.
Case p ≠ 1 and q = 1: This is symmetrical to the previous case. Let S be a valid subset of Q_m × Q_n, where m ≤ h and n > C(h, ⌊h/2⌋), and assume that every valid subset S′ of Q_{m′} × Q_{n′} is reachable if m′ < m, or n′ < n, or |S′| < |S|. By Sperner's theorem [5], the maximal number of subsets of an m-element set such that none of them contains any other subset is C(m, ⌊m/2⌋). This is not larger than C(h, ⌊h/2⌋); hence, there exist some columns j, j′ with j ≠ j′ such that the j-th column is contained in the j′-th column. By Lemma 8, the subset S is reachable. For a subset S of Q_m × Q_n, by col(S, i) we denote the subset of Q_m contained in the i-th column. Then cols(S) = {col(S, i) | 1 ≤ i ≤ n} is the set of the subsets in the columns of S.
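The combinatorial trigger of Lemma 8 (one column contained in another) and the Sperner bound can be made concrete with the following small Python sketch; the example subset at the end is made up for illustration.

```python
from math import comb

def col_sets(S, n):
    """col(S, j) for j = 1..n, each as the set of rows present in column j."""
    return [frozenset(p for (p, q) in S if q == j) for j in range(1, n + 1)]

def lemma8_trigger(S, n):
    """True iff some column j' contains a different column j, so Lemma 8 applies."""
    cols = col_sets(S, n)
    return any(i != j and cols[i] <= cols[j]
               for i in range(n) for j in range(n))

# Sperner's theorem: at most C(m, m//2) pairwise-incomparable columns exist,
# so any subset with more distinct columns must trigger Lemma 8.
for m in range(2, 7):
    print(f"m = {m}: at most {comb(m, m // 2)} pairwise-incomparable columns")

S = {(1, 1), (2, 1), (1, 2), (2, 2), (3, 2)}   # column 1 = {1,2} is inside column 2 = {1,2,3}
print(lemma8_trigger(S, n=2))                  # True
```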
The following lemma assures reachability (under an inductive assumption) of a special kind of subsets whose columns form only full and empty equivalence classes under some permutation ϕ.
Lemma 12. Let ϕ be a permutation of the m rows. Let S be a valid subset of Q_m × Q_n such that [U]_ϕ ⊆ cols(S) for every U ∈ cols(S), and there is a non-empty V ∈ cols(S) with |[V]_ϕ| ≥ 2. Assume that every valid subset S′ of Q_{m′} × Q_{n′} is reachable if m′ < m, or n′ < n, or |S′| < |S|. Then S is reachable in D_{m,n}. Proof. We can assume that no two columns contain the same subset of rows, no column is empty, and the first row contains at least two elements; otherwise S is reachable by Lemma 8 or by Lemma 9.
Let S_j = col(S, j) be the j-th column of a valid subset S. Thus we have S = {(i, j) | 1 ≤ j ≤ n and i ∈ S_j}. Since |[V]_ϕ| ≥ 2, we can always choose V so that ϕ⁻¹(V) is in a k-th column S_k with k ≠ 1. Let S′ be the set obtained from S by omitting the states in the k-th column and by taking the pre-image of S_j under ϕ in any other column, that is, S′ = {(i, j) | j ≠ k and i ∈ ϕ⁻¹(S_j)}. Since k ≠ 1 and the first row of S contains at least two elements, the set S′ is valid. Since V is non-empty, we have |S′| < |S|. Let ψ be a permutation that maps a column j to the column containing ϕ⁻¹(S_j), that is, we have S_{ψ(j)} = ϕ⁻¹(S_j). Let t be the transformation given by a_{ϕ,ψ}. Let us show that S′t = S.
□ Corollary 13. Let 1 ≤ m ≤ 5 and n ≥ 1. Then every valid subset can be reached in D_{m,n}.
Proof. The proof follows by analysis of valid subsets S ⊆ Q 5 × Q n , with the aid of Corollary 11, Lemma 8, Lemma 12, and the results from Table 1.
Suppose that there is a valid subset S ⊆ Q 5 × Q n that is not reachable; let S be chosen so that n is the smallest number and S is a smallest non-reachable subset of Q 5 × Q n .
By Corollary 11 and the choice of n, every valid subset S′ ⊂ Q_{m′} × Q_{n′}, where m′ < 5, or n′ < n, or |S′| < |S|, is reachable. Hence, S has no column containing another column; otherwise, we can apply Lemma 8. Since we have verified the reachability of all valid subsets for m = 5 and n ≤ 7 (Table 1), we must have n ≥ 8 and so S has at least 8 distinct columns. Obviously there is neither an empty nor a full column. If there is a column U with |U| = 1 or |U| = 4, then by Sperner's theorem, since n > C(4, 2) = 6, S has a column containing another column; hence S can have only columns U with |U| = 3 or |U| = 2.
Let C_3 be the number of 3-element columns (|U| = 3), and C_2 be the number of 2-element columns (|U| = 2). We are searching for possible subsets S that do not have a column containing another column, and with C_3 + C_2 ≥ 8. We consider the following six cases.
(2) Let C 3 = 1. The only possible subset, up to permutation of columns and rows, is shown in Table 3. It has all columns with two elements that are not contained in the 3-element column. By Lemma 12 with ϕ = [1,2,3,5,4], it is reachable.
(3) Let C_3 = 2. A simple analysis reveals that if the 3-element columns have only one common element, then C_2 is at most 4. If they have two common elements, then C_2 is at most 5. Thus in this case, we have C_2 + C_3 ≤ 7.
(6) Let C_3 ≥ 5. These cases are symmetrical to those with C_3 ≤ 3; it is sufficient to consider the complement of S.
Since these cover all the possibilities for the set S, this set is reachable. □
Proof of Distinguishability
The aim of this section is to show that there are regular languages defined over a three-letter alphabet such that the subset automaton of the NFA for their shuffle does not have equivalent states. To this aim let A = (Q, Σ, δ, s, F) be an NFA. We say that a state q in Q is uniquely distinguishable if there is a word w in Σ* which is accepted by A from and only from the state q, that is, if there is a word w such that δ(p, w) ∩ F ≠ ∅ if and only if p = q. First, let us prove the following two observations. Proposition 14. If each state of an NFA A is uniquely distinguishable, then the subset automaton of A does not have equivalent states.
Proof. Let S and T be two distinct subsets in 2^Q. Then, without loss of generality, there is a state q in Q with q ∈ S \ T. Since q is uniquely distinguishable, there is a word w which is accepted by A from and only from q. Therefore, the subset automaton of A accepts w from S and it rejects w from T. Hence w distinguishes S and T.
□ Proposition 15. Let a state q of an NFA A = (Q, Σ, δ, s, F) be uniquely distinguishable. Assume that there is a symbol a in Σ and exactly one state p in Q that goes to q on a, that is, (p, a, q) is a unique in-transition on a going to q. Then the state p is uniquely distinguishable as well.
Proof. Let w be a word which is accepted by A from and only from q. The word aw is accepted from p since q ∈ δ(p, a) and w is accepted from q. Let r ≠ p. Then q ∉ δ(r, a) since (p, a, q) is a unique in-transition on a going to q. It follows that the word w is not accepted from any state in δ(r, a). Thus A rejects aw from r, so p is uniquely distinguishable.
□ Now we can prove the following result. Theorem 16. Let m, n ≥ 2. There exist regular languages K and L over the three-letter alphabet {a, b, c}, with κ(K) = m and κ(L) = n, such that the subset automaton of the NFA for K ⧢ L does not have equivalent states. Proof. Construct the NFA N for K ⧢ L as described in Section 1. The transitions on a, b, c in N for m = 4 and n = 5 are shown in Fig. 3. Notice that each state (i, j) with 2 ≤ i ≤ m and 2 ≤ j ≤ n has a unique in-transition on symbol a and this transition goes from state (i − 1, j); see the dashed transitions in Fig. 3 (top-left). Next, each state (m, j) with 2 ≤ j ≤ n has a unique in-transition on b which goes from (m, j − 1), and each state (i, 2) with 2 ≤ i ≤ m has a unique in-transition on b going from (i, 1); see the dashed transitions in Fig. 3 (top-right). Finally, the state (2, 1) has a unique in-transition on c going from (1, 1); see the dashed transition in Fig. 3 (bottom).
The empty word is accepted by N from and only from the state (m, n) since this is the unique accepting state of N. Thus (m, n) is uniquely distinguishable. Next, consider the subgraph of unique in-transitions in N. Fig. 4 shows this subgraph in the case of m = 4 and n = 5. Notice that from each state of N, the state (m, n) is reachable in this subgraph. By Proposition 15, used repeatedly, we get that each state of N is uniquely distinguishable. Hence by Proposition 14, the subset automaton of N does not have equivalent states. □
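The repeated application of Propositions 14-15 is a backward propagation along unique in-transitions, which is easy to mechanize. The following Python sketch marks uniquely distinguishable states in this way; the tiny NFA at the end is a hypothetical example, not the construction of Fig. 3.

```python
def uniquely_distinguishable_states(states, delta, final):
    """Mark states via Propositions 14-15.

    delta: NFA transitions as {(state, letter): set(successors)}.
    final: set of accepting states; assumed here to be a singleton, so its
           element is uniquely distinguishable by the empty word.
    A state p is then marked whenever (p, a, q) is the unique in-transition
    on some letter a into an already-marked state q.
    """
    states, marked = set(states), set(final)
    letters = {a for (_, a) in delta}
    changed = True
    while changed:
        changed = False
        for a in letters:
            for q in list(marked):
                preds = [p for p in states if q in delta.get((p, a), ())]
                if len(preds) == 1 and preds[0] not in marked:
                    marked.add(preds[0])
                    changed = True
    return marked

# Hypothetical 3-state NFA with a cycle of unique a-transitions:
delta = {(1, 'a'): {2}, (2, 'a'): {3}, (3, 'a'): {1},
         (1, 'b'): {1, 3}, (2, 'b'): {2, 3}, (3, 'b'): {3}}
print(uniquely_distinguishable_states({1, 2, 3}, delta, final={3}))  # {1, 2, 3}
```

If the function returns all states, then by Proposition 14 the subset automaton of the NFA has no equivalent states.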
Conclusions
We have examined the state complexity of the shuffle operation on two regular languages of state complexities m and n, respectively, and found an upper bound for it. We know that this bound can be reached for any m with 1 ≤ m ≤ 5 and any n ≥ 1, and also for m = n = 6. For the remaining values of m and n, however, the problem remains open. Since there exist two languages K and L for which all pairs of states in the subset automaton of the NFA accepting the shuffle K ⧢ L are distinguishable, the main difficulty consists of proving that all valid states in the subset automaton can be reached for the witness languages. | 2015-12-04T08:33:08.132Z | 2015-12-03T00:00:00.000 | {
"year": 2015,
"sha1": "f5217f94d8dc7eed8a9780336c121377390fb897",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1512.01187",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4b90ac0a2a15a44f5241530776e6bc7089f60308",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
156447211 | pes2o/s2orc | v3-fos-license | Robustness of mathematical models and technical analysis strategies
The aim of this paper is to compare the performances of the optimal strategy under parameters mis-specification and of a technical analysis trading strategy. The setting we consider is that of a stochastic asset price model where the trend follows an unobservable Ornstein-Uhlenbeck process. For both strategies, we provide the asymptotic expectation of the logarithmic return as a function of the model parameters. Finally, numerical examples show that an investment strategy using the cross moving averages rule is more robust than the optimal strategy under parameters mis-specification.
Introduction
There exist three principal approaches to investment in financial markets (see Blanchet-Scalliet et al. (2007)). The first one is based on fundamental economic principles (see Tideman (1972) for details). The second one is called the technical analysis approach and uses the historical prices and volumes (see Taylor & Allen (1992), Brown & Jennings (1989) and Edwards et al. (2007) for details). The third one is the use of mathematical models and was introduced in Merton (1969). He assumed that the risky asset follows a geometric Brownian motion and derived the optimal investment rules for an investor maximizing his expected utility function. Several generalisations of this problem are possible (see Karatzas & Zhao (2001), Brendle (2006), Lakner (1998), Sass & Haussmann (2004), or Rieder & Bauerle (2005) for example) but all these models are confronted with the calibration problem. In Bel Hadj Ayed et al. (2015a), the authors assess the feasibility of forecasting trends modeled by an unobserved mean-reverting diffusion. They show that, due to a weak signal-to-noise ratio, a bad calibration is very likely. Using the same risky asset model, Zhu & Zhou (2009) analyse the performance of a technical analysis strategy based on a geometric moving average rule. In Blanchet-Scalliet et al. (2007), the authors assume that the drift is an unobservable piecewise-constant process jumping at an unknown time. They provide the performance of the optimal trading strategy under parameters mis-specification and compare this strategy to a technical analysis investment based on a simple moving average rule with Monte Carlo simulations.
In this paper, we consider a stochastic asset price model where the trend is an unobservable Ornstein Uhlenbeck process. The purpose of this work is to characterize and to compare the performances of the optimal strategy under parameters mis-specification and of a cross moving average strategy.
The paper is organized as follows: the first section presents the model, recalls some results from filtering theory and rewrites the Kalman filter estimator as a corrected exponential average.
In the second section, the optimal trading strategy under parameters mis-specification is investigated. For this portfolio, the stochastic differential equation of the logarithmic return is found. Using this result, we provide, in closed form, the asymptotic expectation of the logarithmic return as a function of the signal-to-noise ratio and of the trend mean reversion speed. We close this section by giving conditions on the model and the strategy parameters that guarantee a positive asymptotic expected logarithmic return and the existence of an optimal duration.
In the third section, we consider a cross moving average strategy. For this portfolio, we also provide the stochastic differential equation of the logarithmic return. We close this section by giving, in closed form, the asymptotic expectation of the logarithmic return as a function of the model parameters.
In the fourth section, numerical examples are performed. First, the best durations of the Kalman filter and of the optimal strategy under parameters mis-specification are illustrated over several trend regimes. We then compare the performances of a cross moving average strategy and of a classical optimal strategy used in the industry (with a duration τ = 1 year) over several theoretical regimes. We also compare these performances under Heston's stochastic volatility model using Monte Carlo simulations. These examples show that the technical analysis approach is more robust than the optimal strategy under parameters mis-specification. We close this study by confirming this conclusion with empirical tests based on real data.
Setup
This section begins by presenting the model, which corresponds to an unobserved mean-reverting diffusion. After that, we reformulate this model in a completely observable environment (see Liptser & Shiriaev (1977) for details). This setting introduces the conditional expectation of the trend given the past observations. Then, we recall the asymptotic continuous-time limit of the Kalman filter and we rewrite this estimator as a corrected exponential average. 1.1. The model. Consider a financial market living on a stochastic basis (Ω, F, F, P), where F = {F_t, t ≥ 0} is the natural filtration associated to a two-dimensional (uncorrelated) Wiener process (W^S, W^µ), and P is the objective probability measure. The dynamics of the risky asset S is given by dS_t / S_t = µ_t dt + σ_S dW^S_t (1) and dµ_t = −λ µ_t dt + σ_µ dW^µ_t (2), with µ_0 = 0. We also assume that (λ, σ_µ, σ_S) ∈ R*₊ × R*₊ × R*₊. The parameter λ is called the trend mean reversion speed. Indeed, λ can be seen as the "force" that pulls the trend back to zero. Denote by F^S = {F^S_t} the natural filtration associated to the price process S. An important point is that only F^S-adapted processes are observable, which implies that agents in this market do not observe the trend µ.
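The model (1)-(2) is straightforward to simulate with an Euler scheme, as the following Python sketch shows; the parameter values are hypothetical and chosen only for illustration. A useful sanity check is the stationary standard deviation of the Ornstein-Uhlenbeck trend, σ_µ/√(2λ).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values (illustration only)
lam, sigma_mu, sigma_S = 1.0, 0.3, 0.2
T, N = 20.0, 20_000
dt = T / N

mu = np.zeros(N + 1)      # unobserved Ornstein-Uhlenbeck trend, mu_0 = 0
logS = np.zeros(N + 1)    # log of the risky asset price
dWS = rng.normal(0.0, np.sqrt(dt), N)
dWmu = rng.normal(0.0, np.sqrt(dt), N)
for k in range(N):
    # Equation (1) on log S:  d log S = (mu - sigma_S^2 / 2) dt + sigma_S dW^S
    logS[k + 1] = logS[k] + (mu[k] - 0.5 * sigma_S**2) * dt + sigma_S * dWS[k]
    # Equation (2):           d mu = -lam mu dt + sigma_mu dW^mu
    mu[k + 1] = mu[k] - lam * mu[k] * dt + sigma_mu * dWmu[k]

print("theoretical stationary trend std:", sigma_mu / np.sqrt(2 * lam))
print("simulated trend std             :", mu[N // 2:].std())
```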
1.2. The observable framework. As stated above, the agents can only observe the stock price process S. Since the trend µ is not F^S-measurable, the agents do not observe it directly. Indeed, the model (1)-(2) corresponds to a system with partial information. The following proposition gives a representation of the model (1)-(2) in an observable framework (see Liptser & Shiriaev (1977) for details).
Proposition 1. The dynamics of the risky asset S is also given by dS_t / S_t = E[µ_t | F^S_t] dt + σ_S dN_t, where N is a (P, F^S) Wiener process.
Remark 1.1. In the filtering theory (see Liptser & Shiriaev (1977) for details), the process N is called the innovation process. To understand this name, note that: σ_S dN_t = dS_t / S_t − E[µ_t | F^S_t] dt. Then, dN_t represents the difference between the current observation and what we expect knowing the past observations.
1.3. Optimal trend estimator. The system (1)-(2) corresponds to a linear Gaussian state-space model (see Brockwell & Davis (2002) for details). In this case, the Kalman filter gives the optimal estimator, which corresponds to the conditional expectation E[µ_t | F_t^S]. Since (λ, σ_µ, σ_S) ∈ R*_+ × R*_+ × R*_+, the model (1)-(2) is a controllable and observable time-invariant system. In this case, it is well known that the estimation error variance converges to a unique constant value (see Kalman et al. (1962)), and the asymptotic filter reads

  dµ̂_t = −λβ µ̂_t dt + λ(β − 1) dS_t/S_t,  where  β = √(1 + σ_µ²/(λ²σ_S²)).   (5)

The steady-state Kalman filter can also be re-written as a corrected exponential average:

  µ̂_t = m* µ̄*_t,

where m* = (β − 1)/β and µ̄* is the exponential average given by

  µ̄*_t = (1/τ*) ∫_0^t e^{−(t−s)/τ*} dS_s/S_s,

with an average duration τ* = 1/(λβ).
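The corrected-exponential-average form is straightforward to implement. The sketch below discretizes it on a grid of simple returns; the function name and the discretization are our assumptions, while β, τ*, and m* follow the expressions above.

```python
import numpy as np

def steady_state_kalman_trend(returns, lam, sigma_mu, sigma_s, dt):
    """Steady-state Kalman trend estimate written as a corrected
    exponential average of realized returns (Section 1.3)."""
    beta = np.sqrt(1.0 + sigma_mu**2 / (lam**2 * sigma_s**2))
    tau_star = 1.0 / (lam * beta)       # average duration
    m_star = (beta - 1.0) / beta        # correction factor
    decay = np.exp(-dt / tau_star)
    ema = 0.0
    est = np.empty(len(returns))
    for k, r in enumerate(returns):
        # exponential average of dS/S with duration tau*;
        # r is the simple return over one step of length dt
        ema = decay * ema + (1.0 - decay) * (r / dt)
        est[k] = m_star * ema
    return est
```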
Optimal strategy under parameters mis-specification
In this section, we consider the optimal trading strategy under parameters mis-specification. For this portfolio, we first give the stochastic differential equation of the logarithmic return and we provide, in closed form, the asymptotic expectation of the logarithmic return.
2.1. Context. Consider the financial market defined in the first section, with a risk-free rate equal to zero and without transaction costs. Let P be a self-financing portfolio given by

  dP_t / P_t = ω_t dS_t / S_t,

where ω_t is the fraction of wealth invested in the risky asset (also named the control variable). The agent aims to maximize his expected logarithmic utility over an admissible domain A for the allocation process.
In this section, we assume that the agent is not able to observe the trend µ. Formally, A represents all the F^S-progressive and measurable processes, and the problem is

  sup_{ω ∈ A} E[ ln P_T ].

The solution of this problem is well known and easy to compute (see Lakner (1998) for example). Indeed, it has the following form:

  ω_t = µ̂_t / σ_S²,  with µ̂_t = E[µ_t | F_t^S].

In practice, the parameters are unknown and must be estimated. In Bel Hadj Ayed et al. (2015a), the authors assess the feasibility of forecasting trends modeled by an unobserved mean-reverting diffusion. They show that, due to a weak signal-to-noise ratio, a bad calibration is very likely. By Proposition 3, the steady-state Kalman filter is a corrected exponential moving average of past returns. Therefore, a mis-specification of the parameters (λ, σ_µ) is equivalent to a mis-specification of the factor (β − 1)/β and of the duration τ*. Suppose that an agent thinks that the optimal duration is τ and considers the exponential average

  µ̃_t = (1/τ) ∫_0^t e^{−(t−s)/τ} dS_s/S_s.

Using this estimator, the agent will invest following

  ω_t = m µ̃_t / σ_S²,   (10)

where m > 0. The following lemma gives the law of this filter µ̃.

Lemma 2.1. The filter µ̃ admits the representation

  µ̃_t = (e^{−t/τ}/τ) ∫_0^t e^{s/τ} ( µ_s ds + σ_S dW_s^S ).   (12)

Moreover, this filter is a centered Gaussian process, whose variance is the sum of the variances of the two terms in Equation (12).

Proof. Applying Itô's lemma to the function f(µ̃_t, t) = µ̃_t e^{t/τ} and using Equation (1), it follows that

  d( µ̃_t e^{t/τ} ) = (e^{t/τ}/τ) ( µ_t dt + σ_S dW_t^S ).

The integral of this stochastic differential equation from 0 to t gives Equation (12). Therefore, µ̃ is a Gaussian process. Its mean is null (because µ_0 = 0). Since µ and W^S are supposed to be independent, the variance of the process µ̃ is equal to the sum of

  V( (e^{−t/τ}/τ) ∫_0^t e^{s/τ} µ_s ds )  and  V( (σ_S e^{−t/τ}/τ) ∫_0^t e^{s/τ} dW_s^S ).

The first term is computed using the covariance function of µ: since µ is a centered Ornstein-Uhlenbeck process, for all s, t ≥ 0 we have

  Cov(µ_s, µ_t) = (σ_µ²/(2λ)) ( e^{−λ|t−s|} − e^{−λ(t+s)} ).

Finally, the second term is computed using the Itô isometry

  E[ ( ∫_0^t e^{ks} dW_s )² ] = (e^{2kt} − 1)/(2k),

with k > 0.
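A minimal Monte Carlo check of Lemma 2.1, assuming the simulate_model helper from the earlier sketch: it verifies empirically that the mis-specified filter is approximately centered and estimates its variance at a fixed horizon. Sample sizes and parameters are illustrative.

```python
import numpy as np

lam, sigma_mu, sigma_s, dt, tau = 2.0, 0.9, 0.3, 1.0 / 252, 0.5
decay = np.exp(-dt / tau)

samples = []
for seed in range(500):
    prices, _ = simulate_model(lam, sigma_mu, sigma_s, T=5.0, dt=dt, seed=seed)
    rets = np.diff(prices) / prices[:-1]   # one-step simple returns
    mu_tilde = 0.0
    for r in rets:                          # discretized version of (12)
        mu_tilde = decay * mu_tilde + r / tau
    samples.append(mu_tilde)

print("empirical mean    :", np.mean(samples))   # should be close to 0
print("empirical variance:", np.var(samples))
```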
2.2. Portfolio dynamic.
The following proposition gives the stochastic differential equation of the mis-specified optimal portfolio.

Proposition 4. Equation (10) leads to

  d ln P_t = (mτ/(2σ_S²)) d(µ̃_t²) + (m(2 − m)/(2σ_S²)) µ̃_t² dt − (m/(2τ)) dt.

Proof. Equation (10) is equivalent to (by Itô's lemma)

  d ln P_t = ω_t dS_t/S_t − (ω_t² σ_S²/2) dt.

Using Equation (6), Itô's lemma on Equation (6) gives

  dS_t/S_t = τ dµ̃_t + µ̃_t dt,  with  d(µ̃_t²) = 2 µ̃_t dµ̃_t + (σ_S²/τ²) dt.

Using this equation, the dynamic of the logarithmic return follows.
Remark 2.2. Proposition 4 shows that the returns of the optimal strategy can be broken down into two terms. The first one represents an option on the square of the realized returns (called Option profile). The second term is called the Trading Impact. These terms are introduced and discussed in Bruder & Gaussel (2011) for this strategy without considering a specific diffusion for the risky asset.
2.3. Expected logarithmic return.
The following theorem gives the asymptotic expected logarithmic return of the mis-specified optimal strategy.
Theorem 2.3. Consider the portfolio given by Equation (10). In this case, the asymptotic expected logarithmic return lim_{T→∞} (1/T) E[ln P_T] admits a closed form in which β is given by Equation (5).
Proof. Using Proposition 4, it follows that E[d ln P_t] involves only E[(µ̃_t)²], which is given by Lemma 2.1. Then, integrating the expression from 0 to T and letting T tend to ∞, the result follows.
The following result is a corollary of the previous theorem. It expresses the asymptotic expected logarithmic return as a function of the signal-to-noise ratio and of the trend mean-reversion speed λ.

Corollary 2.4. Consider the portfolio given by Equation (10). In this case, the asymptotic expected logarithmic return depends on the model parameters only through λ and the signal-to-noise ratio

  SNR = σ_µ² / (2λσ_S²).   (16)

Moreover: (1) If m < 2, for a fixed parameter value λ, this asymptotic expected logarithmic return is an increasing function of SNR.
(2) For a fixed parameter value SNR, it is a decreasing function of λ.
Proof. Since β = √(1 + 2 SNR/λ), the use of this expression in Equation (14) gives the result.
The following proposition gives conditions on the trend parameters and on the duration τ that guarantee a positive asymptotic expected logarithmic return and the existence of an optimal duration.

Proposition 5. Consider the portfolio given by Equation (10) and suppose that m < 2. In this case, the asymptotic expected logarithmic return is positive if and only if τ exceeds a minimal duration τ_min, where SNR is defined in Equation (16). Moreover, there exists an optimal duration τ_min < τ_opt < ∞ if and only if SNR/λ > 2m/(2 − m).

Proof. Using Equation (15), the first part of the proposition follows.
Since the asymptotic expected logarithmic return of the mis-specified strategy is positive beyond τ_min and tends to zero as τ tends to infinity, there exists an optimal duration τ_opt. This point is computed by setting to zero the derivative of Equation (15) with respect to the parameter τ.
Cross moving average strategy
In this section, we consider a cross moving average strategy based on geometric moving averages. For this portfolio, we first give the stochastic differential equation of the logarithmic return and we provide, in closed form, the asymptotic expectation of the logarithmic return.
3.1. Context. Consider the financial market defined in the first section, with a risk-free rate equal to zero and without transaction costs. Let G(t, L) be the geometric moving average at time t of the stock prices on a window L:

  G(t, L) = exp( (1/L) ∫_{t−L}^{t} ln S_u du ).   (20)

Let Q be a self-financing portfolio given by

  dQ_t / Q_t = θ_t dS_t / S_t,   (21)

where θ_t is the fraction of wealth invested by the agent in the risky asset:

  θ_t = γ + α 1_{{ G(t, L_1) ≥ G(t, L_2) }},

with γ, α ∈ R and 0 < L_1 < L_2 < t. This trading strategy is a combination of a fixed strategy and a pure cross moving average strategy.
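A discrete-time sketch of the geometric moving average and of the cross moving average allocation; the mapping of (γ, α) to the ±1 rule used later in Section 4 is our reading of the text, not an explicit statement in it.

```python
import numpy as np

def geometric_ma(prices, window):
    """Discrete geometric moving average of the last `window` prices:
    the exponential of the arithmetic mean of log prices, a discrete
    stand-in for G(t, L) in Equation (20)."""
    log_p = np.log(np.asarray(prices[-window:], dtype=float))
    return float(np.exp(log_p.mean()))

def cross_ma_weight(prices, l1, l2, gamma=-1.0, alpha=2.0):
    """Allocation theta = gamma + alpha * 1{G(t,L1) >= G(t,L2)}.
    With gamma = -1, alpha = 2 this reproduces a +/-1 long/short rule
    (our choice; the text leaves gamma and alpha generic)."""
    indicator = 1.0 if geometric_ma(prices, l1) >= geometric_ma(prices, l2) else 0.0
    return gamma + alpha * indicator

# usage: the price list must contain at least l2 observations
prices = [100, 101, 103, 102, 104, 106, 105, 108]
print(cross_ma_weight(prices, l1=2, l2=5))
```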
3.2. Portfolio dynamic. The following proposition (Proposition 6) gives the stochastic differential equation of the cross moving average portfolio.
3.3. Expected logarithmic return. The following theorem gives the asymptotic expected logarithmic return of the cross moving average portfolio.
Theorem 3.1. Consider the portfolio given by Equation (21). In this case, the asymptotic expected logarithmic return admits a closed form in which Φ is the cumulative distribution function of the standard normal variable and s_{(L_1,L_2,λ,σ_µ,σ_S)} and m_{(L_1,L_2,λ,σ_µ,σ_S)} are explicit functions of the model parameters.

Proof. Since the processes µ and W^S are centered, Proposition 6 gives the expectation of the logarithmic return for T > L_2. Let t > L_2 and consider the following process:

  X_t = ln G(t, L_1) − ln G(t, L_2),   (23)

where, ∀i ∈ {1, 2}, ln G(t, L_i) = (1/L_i) ∫_{t−L_i}^{t} ln S_u du. Then X is a Gaussian process. Based on Lemma 2 in Zhu & Zhou (2009), ∀t > L_2, the quantity E[ 1_{{X_t ≥ 0}} µ_t ] can be computed from the joint law of (X_t, µ_t). The following lemma gives the mean and the asymptotic variance of the process X and the covariance function between the processes X and µ.
Lemma 3.2. Consider the process X defined in Equation (23). In this case, ∀t > L_2, the mean of X_t, its asymptotic variance, and the covariance Cov(X_t, µ_t) are explicit functions of (L_1, L_2, λ, σ_µ, σ_S), where s_{(L_1,L_2,λ,σ_µ,σ_S)} is defined in Theorem 3.1.

Proof of Lemma 3.2. Since X_t is a linear functional of the log prices, Equation (27) follows. Moreover, using the covariance function g(u, v) of the log-price process and the fact that the drift µ is an Ornstein-Uhlenbeck process, and letting t tend to ∞, Equation (28) follows. Since the processes W^S and µ are supposed to be independent, Cov(X_t, µ_t) decomposes in terms of the function g defined in Equation (30), and Equation (29) follows.

The use of Lemma 3.2 gives the asymptotic joint law of (X_t, µ_t). Moreover, a direct calculation shows that the remaining terms converge, and the result of Theorem 3.1 follows.
3.4. Strategy with one moving average. Suppose that L_1 = 0 and L_2 = L. In this case, the fraction of wealth invested by the agent in the risky asset becomes

  θ_t = γ + α 1_{{ S_t ≥ G(t, L) }},

where G is the geometric moving average defined in Equation (20), and the self-financing portfolio Q^1 becomes

  dQ_t^1 / Q_t^1 = θ_t dS_t / S_t.   (31)

This particular case corresponds to the allocation introduced in Zhu & Zhou (2009) when we assume that the two Brownian motions W^S and W^µ are uncorrelated and that the trend is mean-reverting around 0. Given this framework, we can provide the asymptotic expected logarithmic return of this trading strategy (which has already been found in Zhu & Zhou (2009)).

Theorem 3.3. Consider the portfolio given by Equation (31). In this case, the asymptotic expected logarithmic return is obtained from Theorem 3.1 in the limit L_1 → 0, where Φ is the cumulative distribution function of the standard normal variable and the functions s and m are those introduced in Theorem 3.1.
Proof. This result is a consequence of Theorem 3.1. Indeed, letting L_1 tend to 0 and using L_2 = L, the result follows.
Simulations
In this section, numerical simulations and empirical tests based on real data are performed. The aim of these tests is to compare the robustness of the optimal strategy under parameters mis-specification and of an investment using cross moving averages. First, the best durations of the Kalman filter and of the optimal strategy under parameters mis-specification are illustrated over several trend regimes. We then consider the asymptotic expected logarithmic returns of the cross moving average strategy (see Section 3) with (L 1 , L 2 ) = (5 days, 252 days) and of the optimal strategy with a duration τ = 252 days. Using this configuration, we study the stability of the performances of these strategies over several theoretical regimes. We also confirm our results under Heston's stochastic volatility model with Monte Carlo simulation. Finally, backtests of these two strategies on real data confirm our theoretical expectations.
4.1.1. Well-specified Kalman filter.
In these simulations, we consider a signal-to-noise ratio inferior to 1. This assumption corresponds to a trend standard deviation inferior to the volatility of the risky asset. Using τ* = 1/(λβ) and β = √(1 + 2 SNR/λ), Figures 1 and 2 represent the optimal Kalman filter duration τ* as a function of the trend mean-reversion speed λ and of the signal-to-noise ratio. This duration is a decreasing function of these parameters. Indeed, if the variation of the trend process is low and if the measurement noise is high compared to the trend standard deviation, the filtering window must be long. Moreover, we observe that for a trend mean-reversion speed inferior to 1 (which corresponds to a slow trend process), the duration τ* is superior to 0.5 years and can reach 10 years. If the trend mean-reversion speed is superior to 1, this duration is inferior to 1 year.

4.1.2. Best filtering window for the optimal strategy under parameters mis-specification. Under parameters mis-specification, we can also define an optimal duration using the strategy introduced in Section 2 and Proposition 5. This duration is the one maximizing the asymptotic expected logarithmic return of the optimal strategy under parameters mis-specification. This optimal window exists if and only if SNR/λ > 2m/(2 − m). We assume that m = 1. Then, the condition becomes SNR/λ > 2. Figures 3 and 4 represent this duration τ_opt (m = 1) as a function of the trend mean-reversion speed λ with, respectively, SNR = 1 and SNR = 0.5. This duration has a behaviour similar to that of the optimal Kalman filter duration, except when the trend mean-reversion speed λ tends to SNR/2. Indeed, if λ = SNR/2, the condition SNR/λ > 2 is not satisfied and the optimal duration becomes infinite.
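The dependence of τ* on λ and SNR discussed above can be tabulated directly; this snippet assumes only the expressions τ* = 1/(λβ) and β = √(1 + 2 SNR/λ) recalled in the text.

```python
import numpy as np

def tau_star(lam, snr):
    """Optimal Kalman filter duration tau* = 1/(lam*beta),
    with beta = sqrt(1 + 2*SNR/lam) as in Section 1.3."""
    beta = np.sqrt(1.0 + 2.0 * snr / lam)
    return 1.0 / (lam * beta)

for lam in (0.25, 0.5, 1.0, 2.0, 4.0):
    for snr in (0.25, 0.5, 1.0):
        print(f"lam={lam:4.2f}  SNR={snr:4.2f}  tau*={tau_star(lam, snr):6.3f} years")
```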
Stability of the performances over several theoretical regimes under constant spot volatility.
In this subsection, we consider the model (1)-(2). Moreover, we assume that a year contains 252 days and that the risky asset volatility is equal to σ_S = 30%. We consider two trading strategies. The first one is the optimal strategy (introduced in Section 2) with a duration τ = 252 days (= 1 year) and a leverage m = 1. The second strategy is the cross moving average strategy (introduced in Section 3) with (L_1, L_2) = (5 days, 252 days) and the following allocation: if the short geometric average is superior (respectively inferior) to the long geometric average, we buy (respectively sell) the risky asset, where G is the geometric moving average defined in Equation (20). In order to compare the performance stability of these two strategies, we use the asymptotic expected logarithmic returns found in Theorems 2.3 and 3.1. Figures 5, 6, 7 and 8 represent the performances of these strategies after 100 years as a function of the trend volatility σ_µ with, respectively, λ = 1, 2, 3 and 4. Even if the optimal strategy can provide a better performance (for example with λ = 1 and σ_µ = 90%), it can also provide higher losses than the cross average strategy (for example with λ = 4 and σ_µ = 10%). We can conclude from these tests that the theoretical performance of this cross average strategy is more robust than the theoretical performance of this optimal strategy.

4.2.1. Model and optimal strategy. The aim of this subsection is to check whether the cross average strategy is more robust than the optimal trading strategy under Heston's stochastic volatility model (see Heston (1993) or Mikhailov & Nögel (2003) for details). To this end, consider a financial market living on a stochastic basis (Ω, G, G, P), where G = {G_t, t ≥ 0} is the natural filtration associated to a three-dimensional Wiener process (W^S, W^µ, W^V), and P is the objective probability measure. The dynamics of the risky asset S is given by the model (1)-(2), with the constant variance σ_S² replaced by a stochastic variance V following a square-root diffusion with mean-reversion speed k, long-run level V_∞, and volatility of variance ξ. We also assume that (λ, σ_µ) ∈ R*_+ × R*_+ and that 2kV_∞ > ξ² (in this case, the variance V cannot reach zero and is always positive, see Cox et al. (1985) for details). Denote by G^S = {G_t^S, t ≥ 0} the natural filtration associated to the price process S. In this case, the process V is G^S-adapted. Now, assume that the agent aims to maximize his expected logarithmic wealth (over an admissible domain A, which represents all the G^S-progressive and measurable processes). In this case, his optimal portfolio is given by (see Bjork et al. (2010)):

  ω_t = E[µ_t | G_t^S] / V_t.

Let δ be a discrete time step, and denote by the subscript k the value of a process at time t_k = kδ. Using the scheme that produces the smallest discretization bias for the variance process (see Lord et al. (2010) for details), the discrete time model applies the exact Ornstein-Uhlenbeck recursion to the trend, with one-step decay e^{−λδ} and innovation variance proportional to 1 − e^{−2λδ}, a full-truncation Euler step to the variance, and z_k ∼ N(0, δ).

4.2.2. Monte Carlo simulations. In this section, Monte Carlo simulations are used to check whether the cross average strategy is more robust than the optimal trading strategy under Heston's stochastic volatility model. To this end, we consider the discrete model (36)-(37)-(38) and we assume that α = 4 (quarterly mean-reversion of the variance process), that ξ = 5%, that V_∞ = V_0 = 0.3² (which means an initial and a long-horizon spot volatility equal to 30%), and that ρ = −60% (when the spot decreases, the volatility increases). Moreover, we consider an investment horizon equal to 50 years and δ = 1/252 (which means that a year contains 252 days and that each allocation is made daily).
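A sketch of the discrete-time model of this subsection: an exact Ornstein-Uhlenbeck recursion for the trend and a full-truncation Euler step for the variance, in the spirit of Lord et al. (2010). The symbol xi is our naming for the volatility-of-variance parameter set to 5% in the text; all other conventions are assumptions of this sketch.

```python
import numpy as np

def simulate_heston_trend(lam=1.0, sigma_mu=0.9, alpha=4.0, xi=0.05,
                          v_inf=0.09, v0=0.09, rho=-0.6,
                          years=50, delta=1/252, seed=0):
    """One path of price, trend, and variance for the discrete model of
    Section 4.2 (full-truncation Euler for V, exact OU step for mu)."""
    rng = np.random.default_rng(seed)
    n = int(years / delta)
    log_s, mu, v = np.zeros(n + 1), np.zeros(n + 1), np.full(n + 1, v0)
    a = np.exp(-lam * delta)
    ou_std = sigma_mu * np.sqrt((1 - a**2) / (2 * lam))
    for k in range(n):
        zs, zp = rng.standard_normal(2)
        zv = rho * zs + np.sqrt(1 - rho**2) * zp   # correlated variance shock
        vp = max(v[k], 0.0)                        # full truncation
        log_s[k + 1] = log_s[k] + (mu[k] - 0.5 * vp) * delta \
                       + np.sqrt(vp * delta) * zs
        v[k + 1] = v[k] + alpha * (v_inf - vp) * delta \
                   + xi * np.sqrt(vp * delta) * zv
        mu[k + 1] = a * mu[k] + ou_std * rng.standard_normal()
    return np.exp(log_s), mu, v

prices, trend, variance = simulate_heston_trend()
print(prices[-1], variance[-1])
```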
With this set-up, we consider several trend regimes, we simulate M paths of the risky asset over 50 years, and we implement two strategies: (1) The discrete-time version of the optimal strategy presented above. Since the process V is G^S-adapted, V_k is observable at time t_k and the conditional expectation of the trend is tractable with the non-stationary discrete-time Kalman filter (see Kalman et al. (1962)). We assume that the agent thinks that the parameters are equal to λ^a = 1 and σ_µ^a = 90% when he uses the Kalman filter.
(2) The cross moving average strategy (introduced in Section 3) with (L_1, L_2) = (5 days, 252 days) and the following allocation:

  θ_k = +1 if G_d(k, L_1) ≥ G_d(k, L_2), and θ_k = −1 otherwise,

where G_d(k, L) is the discrete geometric moving average computed on the last L values of S.
Figures 9 and 10 represent the estimated performances of these strategies after 50 years as a function of the trend volatility σ_µ with M = 10000 and, respectively, λ = 1 and 2. These results confirm that the performance of the cross average strategy is less sensitive to a trend-regime variation than the performance of the optimal trading strategy with parameters mis-specification. Moreover, Figures 11, 12, 13 and 14 represent the empirical distribution of the logarithmic return of these strategies after 50 years over M = 10000 paths for different configurations. These figures show that, even with a good calibration, the logarithmic return of the cross average strategy is less dispersed than the logarithmic return of the optimal strategy. Then the cross average strategy is more robust than the optimal strategy.

Figure 14. Empirical distribution of the expected logarithmic return of the optimal strategy (with λ^a = 1 and σ_µ^a = 90%) and of the cross average strategy (L_1 = 5 days and L_2 = 252 days) with M = 10000, σ_µ = 10%, λ = 2, α = 4, ξ = 5%, V_∞ = V_0 = 0.3², ρ = −60% and T = 50 years.

4.2.3. Tests on real data. Here we test the performances of the two previous strategies on real data. The performance of a strategy is evaluated with the annualised Sharpe ratio indicator (see Sharpe (1966)) on relative daily returns. For the optimal strategy, we assume that τ = 252 business days, that m = 0.1 (this has no impact on the Sharpe ratio indicator), and that the volatility σ_S is computed over all the data and used since the beginning of the backtest. For the cross moving average strategy, we keep the same assumptions as in the previous section (a window of x days is replaced by a window of x business days). The universe of underlyings comprises nine stock indexes (the SP 500 Index, the Dow Jones Industrial Average Index, the Nasdaq Index, the Euro Stoxx 50 Index, the Cac 40 Index, the Dax Index, the Nikkei 225 Index, the Ftse 100 Index and the Asx 200 Index) and nine forex exchange rates (EUR/CNY, EUR/USD, EUR/JPY, EUR/GBP, EUR/CHF, EUR/MYR, EUR/BRL, EUR/AUD and EUR/ZAR). The period considered is from 12/22/1999 to 2/1/2015. In this test, we assume that these indexes are tradable and that the traded price is given by the closing price of the underlying. The backtest is done without transaction costs. For each strategy, the reallocation is made on a daily frequency. Figure 15 gives the measured annualised Sharpe ratio of the 18 underlyings for each strategy. We observe that, even with an over-fitted volatility for the optimal strategy, the cross moving average strategy outperforms the optimal strategy except for the EUR/BRL.
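For completeness, the performance measure used in these backtests, assuming a zero risk-free rate:

```python
import numpy as np

def annualized_sharpe(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio on relative daily returns, the indicator
    used for Figure 15 (zero risk-free rate assumed)."""
    r = np.asarray(daily_returns, dtype=float)
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

rng = np.random.default_rng(0)
print(annualized_sharpe(rng.normal(0.0005, 0.01, 2520)))
```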
Figure 15. Sharpe ratio of the optimal strategy (with τ = 252 bd) and of the cross average strategy (L_1 = 5 bd and L_2 = 252 bd) on real data from 12/22/1999 to 2/1/2015.
Conclusion
The present work quantifies the performances of the optimal strategy under parameters mis-specification and of a cross moving average strategy using geometric moving averages with a model based on an unobserved mean-reverting diffusion.
For the optimal strategy, we show that the asymptotic expectation of the logarithmic returns is an increasing function of the signal-to-noise ratio and a decreasing function of the trend mean-reversion speed.
We find that, under parameters mis-specification, the performance can be positive under some conditions on the model and strategy parameters. Under the same assumptions, we show the existence of an optimal duration which is equal to the Kalman filter duration if the parameters are well-specified.
For the cross moving average strategy, we also provide the asymptotic expected logarithmic return of this strategy as a function of the model parameters.
Moreover, the simulations show that, with a model based on an unobserved mean-reverting diffusion, and even with a stochastic volatility, technical analysis investment is more robust than the optimal trading strategy. The empirical tests on real data confirm this conclusion. | 2016-04-30T15:09:16.000Z | 2016-04-30T00:00:00.000 | {
"year": 2016,
"sha1": "6d881e7d49c704fa7101efc696ba9ef9902f96f6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1605.00173",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6d881e7d49c704fa7101efc696ba9ef9902f96f6",
"s2fieldsofstudy": [
"Mathematics",
"Business"
],
"extfieldsofstudy": [
"Computer Science",
"Economics"
]
} |
119202176 | pes2o/s2orc | v3-fos-license | Exotic Decays of Heavy B quarks
Heavy vector-like quarks of charge $-1/3$, $B$, have been searched for at the LHC through the decays $B\rightarrow bZ,\, bh,\,tW$. In models where the $B$ quark also carries charge under a new gauge group, new decay channels may dominate. We focus on the case where the $B$ is charged under a $U(1)^\prime$ and describe simple models where the dominant decay mode is $B\rightarrow bZ^\prime\rightarrow b (b\bar{b})$. With the inclusion of dark matter such models can explain the excess of gamma rays from the Galactic center. We develop a search strategy for this decay chain and estimate that with integrated luminosity of 300 fb$^{-1}$ the LHC will have the potential to discover both the $B$ and the $Z'$ for $B$ quarks with mass below $\sim 1.6$ TeV, for a broad range of $Z'$ masses. A high-luminosity run can extend this reach to $2$ TeV.
Introduction
Massive vector-like quarks exist in many extensions of the Standard Model (SM), e.g. extradimensional models (both warped and flat), little Higgs theories, and composite Higgs models, and they are being actively searched for at the LHC. Because these massive states are vector-like they need not have the same SM quantum numbers as states in the SM, but in many instances they do. We focus on that case here. In particular, we consider massive quarks, B, that have the same SM charges as the right-handed bottom quark.
These new particles can be produced through their QCD couplings and are presently searched for through the decays B → hb/Zb/W t [1][2][3]; similarly, heavy top partners are searched for in decays T → ht/Zt/W b [3][4][5][6][7]. The present bounds on the B mass vary from ∼ 750 GeV if the decay is purely to Zb, to ∼ 900 GeV if the decay is purely to hb. The bound is ∼ 790 GeV in the Goldstone limit where the branching ratios are B(B → Zb) : B(B → W t) : B(B → hb) = 1 : 2 : 1. The bounds can be weakened if the B quark decays to alternative final states. In this paper, we devise an LHC search strategy appropriate for one such exotic decay and estimate its potential sensitivity.
The B quark can be part of a larger extension of the SM and in particular could be charged under additional gauge groups. Here we consider a simple extension where the B quark, which mixes with the SM b quark, carries an additional U(1)′ charge. Such a scenario has a simple realisation within the context of "effective Z′ models" [8]. These models introduce, in addition to the massive vector-like quark, a new U(1)′ gauge group and a scalar to break it. Although we focus on the case where only the vector Z′ is lighter than the B, our collider analysis will be effective provided that one or both of the Z′ and the scalar φ are lighter than the B. In Section 2 we describe in more detail the particle content, parameter space, and phenomenology of this class of models. We demonstrate that it is natural for the new decay chain B → bZ′ → b(bb̄), shown in Figure 1, to dominate over the modes that are currently being searched for. We also outline other interesting final states, involving SM bosons, leptons or missing energy, that can occur in some regions of parameter space and which are also interesting to search for at the LHC.
There may be other states charged under the U(1)′, and if any are stable and electrically neutral they can be a dark matter (DM) candidate. The annihilation products of such a DM candidate would be rich in b quarks. This presents an intriguing possibility, since it is well known that the excess of high-energy gamma rays seen coming from the proximity of the Galactic center [9,10] can be explained by a 30-50 GeV DM particle annihilating to bb̄, or a heavier DM particle annihilating to a pair of resonances, with mass near 50 GeV, that decay to bb̄. Thus, there is a possible connection between an astrophysical signal in gamma rays and a collider search in multi-b final states. We will discuss the phenomenology of the model once DM is added, and we will include as one of our collider benchmarks a scenario where the Z′ has a mass of 50 GeV.
Having motivated B → (Z′/φ)b, Z′/φ → bb̄ as a search channel for heavy B quarks, we propose a new search strategy at the LHC, described in detail in Section 3. The final state contains six b quarks but, due to the kinematics, may not contain six b-jets. For this reason, and to be conservative, we only require three b-tags in each event. To further suppress background we find it beneficial to place a cut on the total hadronic activity in the event, H_T ≡ Σ_jets p_T, that scales with the B mass being searched for.
To maximize our sensitivity over a broad range of B and Z′ masses we apply three approaches to event reconstruction, which use the hardest four, five, and six jets, respectively. A given event is subjected to all reconstruction methods for which it qualifies, e.g. if the event has six or more hard jets all three methods are applied. Each reconstruction method first tries to form Z′ candidates, keeping only those pairs of candidates whose masses are within 10% of one another. If Z′ candidates are found we then attempt to form B candidates by pairing Z′ candidates with an extra jet, and again keep only those that are within 10% in mass. The six-jet analysis reconstructs Z′ candidates as dijet pairs, the four-jet analysis reconstructs Z′ candidates as single jets with sub-structure, using the N-subjettiness variable [11], and the five-jet analysis reconstructs one Z′ candidate as a dijet system and the other as a single jet with substructure. For signal events the distribution of (M_Z′, M_B) pairs has a clear concentration close to the expected values. The background distribution, coming dominantly from tt̄ and QCD multi-jet production, has a different shape, allowing separation of signal and background over a broad range of masses.
In Section 4 we present our results, which show that discovery at the 5σ level is possible for a broad range of M_Z′, with M_B ≲ 1250 GeV for 30 fb⁻¹, with M_B ≲ 1600 GeV for 300 fb⁻¹, and with M_B ≲ 2000 GeV for 3000 fb⁻¹. Accurately modelling the QCD background is a fraught enterprise. In a full experimental analysis the background needs to be estimated from data, and we describe one approach to doing so in Section 4. By relaxing the number of b-jets required for an event to pass the cuts one can determine the expected shape of the (M_Z′, M_B) distribution for background alone. The normalisation of the distribution can be estimated by comparing the total number of events with and without the b-tags, before the analysis cuts requiring B and Z′ candidates. We show that this approach works well when tested on Monte Carlo data and propose other sidebands that may be available to estimate the QCD background from data.
An Effective Z′ Model
In this section we describe a particular effective Z′ model [8] and identify parameter space that realizes the phenomenology we wish to study. Although we add a relatively modest number of new fields beyond those of the SM, several new interactions are allowed and multiple new phenomena can arise. We introduce a pair of vector-like quarks, (B, B^c), which are charged under a new U(1)′ and also charged under the SM in a similar way to the RH bottom quark, i.e. B has quantum numbers (3, 1, 1/3, −1) under (SU(3), SU(2), U(1)_Y, U(1)′) and B^c has (3, 1, −1/3, 1). Because the new quarks enter as a vector-like pair, there are no issues with gauge anomalies. In addition we introduce a new complex scalar Φ that has charge +1 under the U(1)′, but which is otherwise neutral.

Figure 1: The BB̄ production and decay process that is the primary focus of our analysis.

We assume that Φ gets a vev that breaks the U(1)′, leading to a mass M_Z′ for the U(1)′ gauge field. For the collider phenomenology that interests us, this is the minimal model. If there are also vector-like fermions (χ, χ^c) that are neutral under the SM but charged under the U(1)′, they can provide a viable DM candidate, as we investigate below. An analogous setup with a vector-like top quark, T, in place of B has been considered in Ref. [12]. The QCD cross section for BB̄ pair production depends only on the mass of the vector-like quarks, but the resultant final states for these pair-production events depend upon the sizes of the various possible couplings between the SM and the new sector. In Section 2.1 we consider these couplings and the mixings they induce. In Sections 2.2-2.4, we study the decays of B, Z′, and φ. We find that the decay chain that we use for our collider studies, B → bZ′ → b(bb̄), can easily dominate, although the analysis we develop is equally effective if B → bφ → b(bb̄) dominates. We discuss DM phenomenology in models that incorporate the (χ, χ^c) fields in Section 2.5. If the only interactions of the vector-like quarks were their gauge interactions, there would be an unbroken Z_2 parity under which the new fermions are odd. However, the gauge symmetries of the theory allow a so-called Φ-kawa interaction, λΦBb^c, which breaks the Z_2 and allows B to decay. Including this Lagrangian term, the B and b masses arise from the Yukawa and Φ-kawa couplings once H and Φ acquire vevs. More generally, ΦB can couple to a linear combination of d^c, s^c, and b^c, but to be consistent with flavor constraints we assume that this linear combination is dominated by b^c. Alternatively, we could introduce three copies of the heavy vector-like quarks that couple in a flavor-symmetric fashion to the SM down-type quarks, but with a hierarchy in the masses of the heavy quarks, such that the only sizable effective coupling of the Z′ is to the b quark. Either way, once Φ acquires a vev it induces B − b mixing. This mixing is largest in the RH quark sector. The mass-eigenstate RH quark fields are b̃^c and B̃^c, obtained by a rotation with mixing angle θ_R, where M_B is the physical mass of the heavier eigenstate, and we work in the approximation that the mass of the bottom quark can be neglected.
The mixing in the LH quark sector is related to the RH mixing by tan θ_L = (M_b/M_B) tan θ_R, where as above we denote the physical mass of a field f by M_f. One consequence of b − B mixing is that the coupling y_b differs numerically from the SM bottom Yukawa coupling y_b^SM = √2 M_b/v, where v ≈ 246 GeV.
Gauge kinetic mixing
Another renormalizable interaction allowed by the symmetries of the theory is kinetic mixing between the U(1)′ gauge field (b_µ) and the hypercharge gauge field (B_µ),

  L ⊃ −(κ/2) b_µν B^µν.

This operator allows the Z′ to decay to SM fields. If this operator is absent at some high scale Λ (for example, this could be the scale at which an SU(2)′ breaks to U(1)′), it will be generated by B and b loops. Taking M_B to be somewhat above the U(1)′ breaking scale, we can approximate the value of κ at the scale M_B by ignoring the quark mixing, with the one-loop result proportional to g′g_Y ln(Λ/M_B)/(16π²). Provided Λ is not too far above M_B, we expect κ ∼ 10⁻³ − 10⁻² for g′ ∼ g_Y. Significantly smaller values of κ are possible for smaller g′, or if contributions from additional states partially cancel contributions from b and B loops.
Working to first order in κ, we obtain diagonal kinetic terms and mass terms after suitable field redefinitions, where s_W and c_W are the sine and cosine of the weak mixing angle, and a mixing angle θ_z is introduced to remove the mass mixing induced by the kinetic mixing. This mass mixing is required to be small by precision studies, and for κ ≪ 1 it is guaranteed to be small unless M_Z and M_Z′ are very close. Assuming θ_z ≪ 1 and using the leading-order result for θ_z, the couplings of the Z′ to SM fermions can be determined to first order in κ. In Section 2.3 we consider the competition between quark mixing and kinetic mixing in determining Z′ branching ratios.
Scalar mixing
With the addition of Φ, the scalar potential contains, besides the SM Higgs potential and the potential for Φ, a mixed quartic term coupling |H|² and |Φ|². The mixed quartic term leads to a mass mixing between the Higgs and φ fields, producing mass-eigenstate scalars h̃ and φ̃, with a mixing angle whose sine and cosine we denote by s_h and c_h. Scalar mixing leads to corrections to the partial widths of the SM Higgs boson of the form Γ → c_h² Γ_SM, with the exception of the partial width to b quarks, which is also altered by the b − B mixing. In the absence of scalar mixing, the deviation of the h → bb̄ partial width from the SM result is tiny due to the smallness of M_b. If the Z′ is light enough, scalar mixing also induces a new decay mode, h → Z′Z′. This could lead to many interesting signatures depending on how the Z′ decays, e.g. h → 4b, h → invisible (if Z′ decays to DM), or h → 4ℓ without a Z resonance. Furthermore, the Higgs may be produced in B decays (as discussed in Section 2.2), resulting in a final state from BB̄ production with as many as 10 b's. Exotic Higgs decays, e.g. h → ZZ′, can also be induced by kinetic mixing. The effects of scalar and kinetic mixing on Higgs decays have been widely studied in the literature, see for example Ref. [13].
Beyond its effects on the Higgs particle, scalar mixing also impacts φ decays. In Section 2.4 we consider the competition between the Φ-kawa interaction and scalar mixing in determining φ branching ratios.
Heavy quark decays
As discussed above, the λΦBb^c interaction term breaks the Z_2 parity acting on the new fermions and allows the B to decay. At tree level, the possible two-body final states are Z′b, Zb, W⁻t, φb, and hb.
For decays of B into a vector boson v and a fermion f, the relevant interaction term is a chiral current coupling, from which the tree-level partial width follows. Neglecting corrections induced by kinetic mixing, the relevant couplings for B → Z′b, B → Zb, and B → W⁻t are fixed by the gauge couplings and by the mixing angles, where s_L, c_L, s_R, and c_R describe the mixing in the fermion sector, with the left- and right-handed mixings related through Equation (2.6). For decays of B into a real scalar s and a fermion f, the relevant interaction term is a Yukawa-type coupling, and the tree-level partial width follows analogously. We allow for the possibility of mixing in the scalar sector, with s_h and c_h determined by Equation (2.17).
The comparison between the various B partial widths simplifies if we neglect scalar mixing (s_h → 0) and work to leading non-vanishing order in (M_b/M_B)². In this approximation we find explicit ratios of the partial widths. In the regime where M_B is much larger than all other masses, we have

  Γ(B → Zb) : Γ(B → W⁻t) : Γ(B → hb) = 1 : 2 : 1,

consistent with Goldstone equivalence.
Our collider studies will focus on the decay of B to Z′b. As shown in the left-hand plot of Figure 2, this decay can easily dominate over decays into SM states, due to the smallness of y_b. In fact, using Eqn. (2.5), the quantity appearing on the vertical axis can be rewritten in a form which goes to M_B/(y_b^SM ⟨Φ⟩) in the λ → 0 limit.

Figure 2 (left panel caption): We neglect the masses of all SM particles, which overestimates the partial widths into SM states.

It is therefore not necessary for λ to be large for B → Z′b to dominate. Given ample phase space for the decay, B → Z′b dominates over decays to SM states for small λ, unless ⟨Φ⟩ is much larger than M_B.
The remaining competing decay, B → φb, can be forbidden kinematically by raising M_φ above M_B. A light Z′ is consistent with M_φ > M_B because g′ can be taken to be small. The opposite scenario is also possible: one can have a light φ with M_Z′ > M_B if the quartic coupling λ_φ is small. In this case B → φb can be the dominant decay. The right-hand plot of Figure 2 shows how the ratio Γ(B → Z′b)/Γ(B → φb) depends on M_φ and M_Z′ when both channels are kinematically accessible.
If B → φb dominates, the results of our collider studies apply essentially unchanged, provided that φ decays dominantly to bb̄ (φ decays are studied in Section 2.4). If instead both B → Z′b and B → φb have sizable branching ratios, the analysis we develop below is flexible enough to reconstruct both BB̄ → φφbb̄ events and BB̄ → Z′Z′bb̄ events, even if M_Z′ and M_φ are very different. Two invariant mass peaks at distinct values of M_Z′/M_φ would be found, with reduced strength compared to the case with just one dominant channel. Our analysis is not designed to reconstruct BB̄ → φZ′bb̄ events efficiently, unless the Z′ and φ happen to be close in mass.
Z′ decays
At tree level, and neglecting kinetic mixing, the potential two-body channels for Z′ decay are bb̄, bB̄, b̄B, and BB̄, some of which might be kinematically forbidden. Kinetic mixing allows for decays into other fermions, including leptons, and decays to bosons. If DM is charged under U(1)′ and is sufficiently light, there will also be invisible decays of the Z′, as discussed in Section 2.5.
For decays of the Z′ into fermions f_1 f̄_2, the interaction term leads to a tree-level partial width proportional to N_c, the fermion color multiplicity, with phase-space factors depending on y_{1,2} = M²_{f_{1,2}}/M²_{Z′}. Neglecting corrections induced by kinetic mixing, the relevant couplings for Z′ → bb̄, Z′ → bB̄/b̄B, and Z′ → BB̄ are set by g′ and the quark mixing angles. Our collider studies will focus on scenarios with M_B > M_Z′, in which case Z′ → bb̄ is the only allowed decay among those above. Kinetic mixing modifies the Z′ widths given in (2.44)-(2.46) and opens up new Z′ decay modes. If only kinetic mixing is present, the couplings of Z′ to SM fermions can be summarized, to leading order in κ, in terms of y_Z = M²_Z/M²_{Z′}. These couplings can be used with Equation (2.40) to calculate the Z′ partial widths into SM fermions induced by kinetic mixing; for fermions that can be approximated as massless, the result simplifies considerably. Kinetic mixing also opens up decays of the Z′ to boson pairs, if kinematically allowed. If present, scalar mixing modifies Γ(Z′ → Zh) and, for sufficiently light φ, induces a partial width for Z′ → Zφ. Large values of κ allow for abundant Z′ production through its couplings to light quarks. The Z′ can then decay to leptons, and LHC constraints on dilepton resonances potentially become relevant [14]. For smaller κ the Z′ is mainly produced through its interactions with b and B quarks, but interesting leptonic signatures can still be induced by κ, e.g. one or two dilepton resonances produced in association with b-jets. If we require the ratio R_{Z′} of the kinetic-mixing-induced leptonic width to the total width to be small, we get the relatively weak constraints on κ shown in Figure 3.
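As a rough illustration of the size of such widths, the sketch below evaluates a generic tree-level partial width for Z′ decay to a fermion pair through a single chiral coupling g_eff. The normalization is the textbook result for a massive vector decaying to fermions, not the paper's (omitted) Equation (2.40); treat it as an assumption.

```python
import numpy as np

def gamma_zprime_ff(g_eff, m_zp, m_f=0.0, n_c=3):
    """Tree-level Gamma(Z' -> f fbar) for one chiral coupling g_eff.
    Massless-fermion normalization N_c g^2 M / (24 pi), times a simple
    phase-space factor for massive fermions."""
    y = (m_f / m_zp) ** 2
    if 4 * y >= 1.0:
        return 0.0
    return n_c * g_eff**2 * m_zp / (24 * np.pi) * np.sqrt(1 - 4 * y) * (1 - y)

# e.g. a 50 GeV Z' with an effective b coupling of 0.1 (hypothetical numbers)
print(gamma_zprime_ff(0.1, 50.0, m_f=4.7))
```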
φ decays
In our discussion of φ decays we will consider the effects of scalar mixing, but we will neglect kinetic mixing. If the scalar mixing vanishes, then at tree level, the potential two-body channels for φ decay are Z Z , bb, BB, bB, and Bb. Because we are mainly interested in how φ will decay if it happens to be produced in B and B decays, we will take M φ < M B for this section, kinematically forbidding decays to BB, bB, and Bb. Scalar mixing allows the φ to acquire the decay channels of the SM Higgs. For any decay channel X open to a SM Higgs of mass M φ , excluding channels involving b quarks, we have The decay width to bb depends on the quark mixing. Working to leading order in M b , the tree-level width is The remaining two-body, tree-level partial widths are If the heavy B quarks decay mainly to φb, the results of Section 4 will apply when φ particles decay mainly to bb. Taking M B = 1 TeV and w = M φ , we show in Figure 4 the parameter space where φ → bb dominates. When φ is sufficiently heavy to decay to W , Z, and h pairs, the mixing in the scalar sector must be very small for bb to dominate over these modes. We assume M Z > M φ /2 to make Figure 4, but φ → Z Z can easily be the most important decay mode if it is kinematically accessible.
Dark Matter
In this work we focus mainly on the LHC phenomenology of the B and the Z′. However, our model, over part of the parameter space, also provides a natural explanation for the excess of high-energy gamma rays seen coming from the proximity of the Galactic center, the so-called Galactic Center Excess (GCE), or Gooperon [9,10,15-25]. The spectrum of the excess photons is well fit by a 30-50 GeV DM particle annihilating directly to bb̄, as well as by a 10 GeV DM particle annihilating to τ's. It may also be fit by cascade annihilations of DM to light mediators which in turn decay to pairs of SM particles [26-29]. In particular, the spectrum of the GCE is better fit by annihilations of the form χχ̄ → Z′Z′ → (bb̄)(bb̄) than by direct annihilation to bb̄.

We introduce a pair of vector-like fermions, χ, χ^c, with charges Q_χ and −Q_χ under the U(1)′ but no SM charge. Provided Q_χ ≠ 0, these fermions are stable at the level of renormalizable interactions. Recall that we have normalized the U(1)′ gauge coupling g′ so that Φ, B, and B^c have charges +1, −1, and +1. If Q_χ is not an integer, an unbroken global, abelian symmetry guarantees the stability of χ, χ^c. Even if χ, χ^c are not absolutely stable, they can easily be stable on cosmological time scales if any non-renormalizable operators that induce their decays are generated at the Planck scale or some other very high scale. Provided Q_χ ≠ ±1, ±2, there are no operators at dimension five or six leading to χ decays.
For M_χ > M_Z′ or M_χ > (M_Z′ + M_φ)/2, the annihilation processes χχ̄ → Z′Z′ or χχ̄ → φZ′ are accessible. This allows for a secluded DM scenario [30], in which the couplings that determine the relic abundance are independent of those that determine the DM's coupling to the SM. Focussing on χχ̄ → Z′Z′, the non-relativistic DM annihilation rate is

  σv ≃ (g′Q_χ)⁴/(16π M_χ²) × (1 − M²_{Z′}/M²_χ)^{3/2} / (1 − M²_{Z′}/(2M²_χ))².   (2.57)

For masses that fit the GCE the correct relic abundance is achieved for g′Q_χ ∼ 0.2. We have checked this and other results from this section using micrOMEGAs [31]. If χχ̄ → φZ′ is also a relevant annihilation channel, slightly smaller values of g′Q_χ work. If M_χ is too light to annihilate into final states involving Z′ and φ, the correct relic abundance can still be achieved through χχ̄ → bb̄, mediated by s-channel Z′ exchange.
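A back-of-the-envelope check that g′Q_χ ∼ 0.2 indeed lands near a thermal cross section, using the standard secluded/dark-photon annihilation formula quoted above. The transcription and unit conversion are ours; treat the overall normalization as approximate.

```python
import numpy as np

GEV2_TO_CM3S = 0.389379e-27 * 2.998e10   # GeV^-2 -> cm^3/s

def sigma_v_chichi_zpzp(g_chi, m_chi, m_zp):
    """Leading s-wave rate for chi chibar -> Z' Z' (Dirac DM), with
    g_chi = g' * Q_chi, masses in GeV; returns sigma*v in cm^3/s."""
    r = (m_zp / m_chi) ** 2
    if r >= 1.0:
        return 0.0
    sv = g_chi**4 / (16 * np.pi * m_chi**2) * (1 - r)**1.5 / (1 - r / 2)**2
    return sv * GEV2_TO_CM3S

# compare to the canonical thermal value ~2e-26 cm^3/s
print(sigma_v_chichi_zpzp(0.2, 60.0, 50.0))
```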
Neglecting mixing in the LH quark sector, the non-relativistic DM annihilation rate for this process follows from s-channel Z′ exchange (Eqn. (2.58)). With the help of Eqn. (2.5), it is useful to rewrite this rate in terms of λ, s_R, and M_B. Unlike the case where the relic abundance is set by χχ̄ → Z′Z′/φZ′, achieving the correct relic abundance through χχ̄ → bb̄ requires that M_B not be too large. The annihilation rate is resonantly enhanced for M_Z′ close to 2M_χ, but the correct relic abundance can also be obtained far off resonance. For example, taking M_B = 1 TeV, M_χ = 40 GeV (as preferred for the GCE), and M_Z′ = 250 GeV, we need Q_χλ² ≈ 4. Taking λ = 1 and maximal mixing in the RH quark sector, we get g′ = (λM_Z′)/(√2 s_R M_B) = 1/4, and the coupling of the Z′ to DM is not too large: g′Q_χ ≈ 1.
For M_χ < M_Z′/2, the presence of DM coupled to the Z′ opens up an invisible decay mode with partial width

  Γ(Z′ → χχ̄) = (g′Q_χ)² M_Z′/(12π) (1 + 2y_χ) √(1 − 4y_χ),

where y_χ = M²_χ/M²_{Z′}. The invisible width can easily dominate over the width into bb̄, Eqn. (2.44). In this case BB̄ pair production at the LHC can lead to bb̄ + E̸_T events targeted by standard SUSY searches [32,33].
Because the nucleus has no net b-charge, direct detection rates are highly suppressed in the absence of kinetic mixing. Kinetic mixing leads to a spin-independent coupling of DM to the proton, and to a cross section per nucleon that we normalise to scattering off xenon. For M_χ ∼ 50 GeV, LUX has probed down to σ ≈ 8 × 10⁻⁴⁶ cm² [34]. Parameters chosen to explain the GCE in the secluded DM scenario thus require κ ≲ 3 × 10⁻⁴ to evade direct detection. Values of κ this small are not unreasonable, especially given that g′ can be small. Taking κ to be given by Equation (2.9) with the log set to one, the constraint is satisfied for g′ ∼ 1/20, which requires Q_χ ∼ 4 for the relic abundance. The χχ̄ → bb̄ explanation of the GCE is consistent with values of M_Z′ larger than those preferred by the secluded DM explanation, meaning that LUX constraints can be satisfied with larger values of κ.
We have been assuming that χ and χ^c form a Dirac fermion of mass M_χ, but it is possible that the mass eigenstates are Majorana fermions. For example, if Q_χ = −1/2, the interactions

  L ⊃ λ_χ χχΦ + λ_{χ^c} χ^c χ^c Φ* + h.c.   (2.62)

are allowed, leading to Majorana masses when Φ gets a vev. If these Majorana masses are much smaller than the Dirac mass, the relic density calculation does not change much, but the cross section for direct detection is dramatically reduced.
Searching at the LHC
Traditional searches for heavy vector-like B quarks have focused on decays to SM bosons and quarks [2,35,36]. As we have seen, the presence of Z and φ (and χ if DM is included), can significantly alter the phenomenology. Which of the many possible search channels dominates depends upon the masses of the new particles and upon the relative sizes of the various mixings, namely kinetic mixing, quark mixing, and scalar mixing. We will consider the situation where the dominant decays are B → Z b followed by Z → bb. As discussed in Section 2, B → Z b tends to dominate for M φ > M B > M Z , unless Φ is much larger than M B (see Figure 2), while Z → bb dominates for M B > M Z and sufficiently small kinetic mixing (see Figure 3). It will be possible to infer from our final results the effect of branching ratios smaller than one. If B decays to both Z b and φb our analysis would find both resonances but at reduced significance, as long as both Z and φ decay to bb.
The sizeable QCD production rate of BB, shown in Figure 5, makes our primary channel of interest pp → BB → (bZ )(bZ ) → b(bb)b(bb), which is not presently being searched for. Before describing in detail the search strategy we advocate, we briefly discuss other interesting channels that are worthy of investigation.
Although their couplings are suppressed by the quark mixing angle, the Z and φ can be singly produced in association with b quarks, which may be forward boosted. If these states decay to bb, their existence is probed by LHC searches for bb resonances produced in association with b quarks [37].
With kinetic mixing the Z will have a di-leptonic branching ratio, but unless κ is sufficiently large the usual Z bounds are weakened by the necessity of producing it in association with b quarks. The dilepton resonance can also show up in decays of the B, in which case the final state would be a pair of dileptonic resonances and two b quarks, which can be paired up into two b resonances.
If φ is sufficiently heavy it can decay to Z Z . Or, if φ → Z Z is kinematically forbidden but the scalar mixing is sufficiently large, then φ can decay to hh, W W , and ZZ if it is heavy enough. When B → φb dominates, BB production can therefore lead to events with as many as ten b quarks, with various sub-resonances among the b-jets. Finally, if we incorporate DM into the theory the Z and/or the φ might decay invisibly, leading to events with b-jets and MET. This light Z benchmark is motivated by the secluded DM explanation of the GCE if, as discussed in Section 2.5, the DM mass is around 60 GeV. Larger values of M Z are consistent with the GCE if the dark matter annihilates directly to bb. This benchmark requires jetsubstructure techniques because the large mass difference between B and Z means that the bb from the Z decay will typically form a single massive jet.
It is not difficult to find parameters consistent with M B = 1 TeV, M Z = 50 GeV, and Br(B → Z b) Br(Z → bb) 1. For example, start with g = 1/20, corresponding to Φ = M B / √ 2 and s R = λ/ √ 2. For this value of g , B → φb is forbidden if the Φ quartic coupling satisfies λ Φ > 1/2 (here we neglect scalar mixing), in which case Figure 2 shows the B decays dominantly to Z b (unless s R 1). Figure 3 shows that for κ 10 −2 λ 2 , Z will mainly decay to bb. If we incorporate Dirac fermion dark matter with M χ = 60 GeV, the relic abundance requires g Q χ ≈ 0.2 in the secluded DM scenario, or Q χ ≈ 4. Then we need κ 3 × 10 −4 to satisfy direct detection constraints. This "medium mass" point can be discovered with high significance after 300 fb −1 of data, even with sizable systematic uncertainties, and will have hints after 30 fb −1 (see Figure 10). An example set of model parameters for this point starts with Φ = 1500 GeV (corresponding to g = 0.35 and s R = λ). With this choice of parameters, M φ > M B is realized for λ φ 1/4, in which case B → Z b typically dominates. For Z → bb to dominate only requires κ 0.16λ 2 . Because of its small production cross section, this "high mass" point may require as much as 3000 fb −1 to be discovered. For an example set of parameters we can again start with Φ = 1500 GeV (corresponding to g = 1/ √ 2 and s R = 3λ/4). To have M φ > M B we need λ φ 4/9, and for Z → bb to dominate we need κ 0.19λ 2 .
Simulation
We implement the model in Feynrules [38]. Our signal simulations use MadGraph5_aMC@NLO [39] for parton-level event generation, PYTHIA 8.2 [40] for showering and hadronization, and Delphes3 [41] for detector simulation. The dominant background comes from QCD multijet production, followed by tt̄ production. We simulate these background processes with PYTHIA 8.2 and Delphes3. Jets are clustered with FastJet [42] using the anti-kt algorithm [43] with R = 0.5. For Delphes settings we use the default "CMS" parameter card that comes with the distribution. This card sets the b-tagging efficiencies for the high-p_T jets that will be important for our analysis at approximately 0.5 (|η| ≤ 1.2) and 0.4 (1.2 < |η| ≤ 2.5) for b-jets, 0.2 (|η| ≤ 1.2) and 0.1 (1.2 < |η| ≤ 2.5) for c-jets, and 10⁻³ for light jets.
We use Hathor [44] to calculate vector-quark production cross sections at NNLO [45] with MSTW2008 NNLO parton distribution functions [46]. For the tt̄ production cross section we take σ = 954 pb, based on Ref. [45]. For the QCD background we adopt the LO cross section reported by Pythia. Pythia8 with default settings has been found to give reasonable agreement, at a level better than ∼ 50%, with 7 TeV LHC data on multijet production [47,48]. The difficulty in modeling the QCD background requires that it be estimated from data in an actual analysis. We discuss one approach to this estimation in Section 4.
To reduce the statistical uncertainty associated with our QCD simulations, we bias the event generation to favor high-p_T events and record the event weights. We estimate the statistical uncertainties of our QCD Monte Carlo sample as

  δN = √( Σ_i w_i² ),

where the w_i are the individual event weights. This uncertainty is less than 10% for most of the signal windows we use to obtain the results of Section 4.
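The weighted-sample uncertainty is a one-liner; a minimal sketch for completeness:

```python
import numpy as np

def weighted_yield_and_error(weights):
    """Expected yield and its Monte Carlo statistical uncertainty for a
    weighted event sample: N = sum(w_i), dN = sqrt(sum(w_i^2))."""
    w = np.asarray(weights, dtype=float)
    return w.sum(), np.sqrt((w**2).sum())

print(weighted_yield_and_error([0.5, 1.2, 0.7]))
```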
Analysis
Only jets with p_T > 100 GeV and |η| < 2.5 are considered in our analysis. In the discussion that follows, "jet" refers to an object satisfying these criteria, and we calculate the scalar sum of jet p_T's, H_T, using only these jets. To be selected, an event must have at least four jets (n_j ≥ 4), three or more of which must be b-tagged (n_b ≥ 3). The probabilities to have various n_b, among events with n_j ≥ 4 and H_T > 500 GeV, are shown in Figures 6 and 7 for the backgrounds and for the three benchmark points introduced above.

Figure 6: Probabilities to have 0, 1, 2, and 3 or more b-jets, among background events with at least four jets and H_T > 500 GeV. For QCD events the probability to have at least 3 b-jets is 1.2 × 10⁻³.

Figure 7 shows a lower probability to satisfy the n_b ≥ 3 requirement for Benchmark 1, because B decays produce highly boosted Z′ particles for this parameter point, leading to Z′ decays that typically produce a single jet. A more sophisticated analysis might attempt to keep track of the number of b-tags associated with individual jets. Figures 6 and 7 also suggest that it may be advantageous to require more than three b-jets, especially if one adopts a looser b-tag algorithm with a higher efficiency than we assume. For examples of how requiring a high number of b-tags (≥ 5) may be able to reduce backgrounds and allow discovery of certain signals, see Ref. [49]. We present results for an analysis based on n_b ≥ 3 to be conservative, and we will see that with this analysis there is discovery potential for M_B = 2 TeV at the HL-LHC.
For each selected event we apply three separate reconstruction strategies. These strategies differ in how many of the jets in the event are used in the reconstruction and in how Z′ candidates are identified. Once Z′ candidates are found, the identification of B candidates proceeds identically for all three approaches.
The four-jet reconstruction uses only the four hardest jets in the event. Among these four, two jets are identified as Z′ candidates if their jet masses match to within 10% and both jets have τ_2/τ_1 < 0.5, where τ_N is the N-subjettiness variable defined in Ref. [11]. This approach is effective for M_B ≫ M_Z′, in which case the Z′ particles are produced with a large boost. The six-jet reconstruction uses the six hardest jets in the event. Among these six jets, two dijet pairs (comprising a total of four jets) are identified as Z′ candidates if the dijet masses match to within 10%. The five-jet reconstruction uses the hardest five jets and takes a composite approach. Among the hardest five jets, a single jet and a dijet pair are identified as Z′ candidates if their masses match to within 10% and the single jet has τ_2/τ_1 < 0.5.
Regardless of which reconstruction method is applied to a particular event, there remain two available jets after two Z′ candidates are identified. These jets are paired with the Z′ candidates in both possible ways. For each pairing, if the jet-Z′ invariant masses are within 10% of each other, then the jet-Z′ systems are identified as B candidates, and the two (M_Z′, M_B) pairs are recorded. If Z′ candidates cannot be used to find B candidates, then the Z′ candidates are discarded along with their associated masses.
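The pairing logic just described can be summarized in a short sketch. The helper below implements only the six-jet reconstruction, for jets given as (E, px, py, pz) arrays; recording the average of the two matched masses as the (M_Z′, M_B) pair is our convention, and the four- and five-jet variants (which add the τ_2/τ_1 < 0.5 substructure requirement) would follow the same pattern.

```python
import itertools
import numpy as np

def six_jet_candidates(jets, tol=0.10):
    """Among the six hardest jets, find two disjoint dijet Z' candidates
    whose masses agree to within `tol`, then attach each leftover jet to
    one Z' candidate to form B candidates with a matching mass requirement.
    Returns a list of (M_Z', M_B) pairs."""
    def mass(*ps):
        p = np.sum(ps, axis=0)
        m2 = p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2
        return np.sqrt(max(m2, 0.0))

    out = []
    idx = range(6)
    for quad in itertools.combinations(idx, 4):      # jets used for the Z's
        rest = [i for i in idx if i not in quad]
        a, b, c, d = quad
        for (i, j), (k, l) in [((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))]:
            m1, m2 = mass(jets[i], jets[j]), mass(jets[k], jets[l])
            if abs(m1 - m2) > tol * max(m1, m2):
                continue
            for r1, r2 in (rest, rest[::-1]):        # attach leftover jets
                mb1 = mass(jets[i], jets[j], jets[r1])
                mb2 = mass(jets[k], jets[l], jets[r2])
                if abs(mb1 - mb2) <= tol * max(mb1, mb2):
                    out.append((0.5 * (m1 + m2), 0.5 * (mb1 + mb2)))
    return out

rng = np.random.default_rng(1)
jets = [np.array([200.0, *rng.normal(0, 50, 3)]) for _ in range(6)]
print(six_jet_candidates(jets))
```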
A single event may yield numerous (M_Z′, M_B) pairs, produced by any and all of the three reconstruction methods. Once we establish a range of M_Z′ and M_B values as a useful window for a particular signal parameter point, we count an event as being in the window once and only once if any of its (M_Z′, M_B) pairs falls in that window. This single counting allows for a more straightforward statistical interpretation of results. Figure 8 shows the distribution of signal events in the M_Z′ − M_B plane for our three benchmarks. To make these plots we divide the M_Z′ − M_B plane into 10 GeV × 20 GeV pixels. A given event can count at most once in a given pixel but is allowed to be counted in multiple pixels. Similarly, Figure 9 shows the distribution of QCD and tt̄ events in the M_Z′ − M_B plane. In the tt̄ plot, we see a concentration of events near (M_W, M_t) due to successful reconstruction of the W and top resonances. We get much larger counts in a bulk region whose position is set by the H_T, jet p_T, and jet multiplicity requirements. These are combinatorially favored "mispairings" produced by the six-jet reconstruction.

For each parameter point we select a window in this plane, where S and B are the expected numbers of signal and background events in the window, and where δ represents the systematic uncertainty associated with the background in the window. Once a window is chosen, we quantify the expected significance of the signal using

  S / √( B + (δB)² )

for B ≥ 50. For smaller B, we take the background-model probability of observing n counts as

  P(n) = ∫ dλ f(λ | B, δB) g(n | λ),   (3.4)

where g(n|λ) is the Poisson distribution with mean λ and f(x|µ, σ) is the normal distribution with mean µ and standard deviation σ, and we quantify the significance as the Gaussian equivalent of the corresponding p-value.

Table 1: Benchmark signal windows optimized for L = 300 fb⁻¹, with all units in GeV. The first entry is for 0% systematics (δ = 0), and the second entry is for 10% systematics (δ = 10%).
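A sketch of the significance computation, with the small-B treatment marginalizing the Poisson mean over a normal distribution as in Equation (3.4). The exact conventions of the omitted Equations (3.3) and (3.5), and the choice of the median-expected observed count, are assumptions of this sketch.

```python
import numpy as np
from scipy import stats

def significance(s, b, delta=0.10):
    """S/sqrt(B + (delta*B)^2) for B >= 50; otherwise a one-sided p-value
    for observing >= round(S+B) counts under the smeared Poisson model,
    converted to Gaussian sigmas."""
    if b >= 50:
        return s / np.sqrt(b + (delta * b) ** 2)
    n_obs = int(round(s + b))
    if delta == 0:
        p = stats.poisson.sf(n_obs - 1, b)            # P(n >= n_obs | B)
    else:
        lam = np.linspace(1e-6, b + 8 * delta * b + 8 * np.sqrt(b + 1), 4001)
        prior = stats.norm.pdf(lam, loc=b, scale=delta * b)
        prior /= np.trapz(prior, lam)                  # normalize f(lam|B, dB)
        p = np.trapz(prior * stats.poisson.sf(n_obs - 1, lam), lam)
    return stats.norm.isf(p)

print(significance(20, 10, delta=0.10))
```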
We will present results for δ = 0 and δ = 10%. In the following section, we argue that estimating background from data at a 10% level or better is a realistic goal for this analysis.
Results
We have studied the discovery prospects for 45 signal parameter points in all. Tables 1 and 2 provide detailed results for our three benchmark points. Table 1 gives the M_Z′ − M_B selection windows used for each benchmark, optimized for an integrated luminosity of L = 300 fb⁻¹, and for either δ = 0 or δ = 10%. The windows for δ = 10% are shown in Figures 8 and 9. Table 2 shows the numbers of events that pass the various cuts in our analysis, for background and for the three signal benchmarks.
In an actual experimental analysis it will be important to estimate the QCD background from data. The background in a given window can be estimated using events with fewer b-tagged jets. For the δ = 10% selection windows of Table 1, Table 3 compares the numbers of background events that pass the full analysis with the estimate

  Σ_{n_j ≥ 4} [ (# with n_j jets and n_b ≥ 3) / (# with n_j jets and n_b = 0) ] × (# in window, with n_j jets and n_b = 0).   (4.1)

In the first factor, the events must pass the H_T cut (which differs for the different benchmarks, as the H_T cut is set to be H_T > 1.5 M_B), but the events are not required to yield Z′ or B candidates. In the second factor, the events must pass the full analysis, with at least one pair of Z′ and B candidates with masses in the window, except that the usual requirement n_b ≥ 3 is replaced with n_b = 0. Instead of using n_b = 0 events for the estimate, one could instead use n_b < 3, n_b = 1, or n_b = 2 events, which might be more accurate. However, Table 3 shows that using n_b = 0 events works rather well for the benchmark windows, and the signal contamination of the background in the n_b = 0 samples is less than 1% for all three windows.
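Equation (4.1) translates directly into code once the event counts are binned by jet multiplicity; the data layout below is our own illustration.

```python
def sideband_estimate(counts):
    """Data-driven background estimate of Equation (4.1). `counts` maps a
    jet multiplicity n_j to a dict with keys:
      'pass_ht_3b' : events passing the H_T cut with >= 3 b-tags,
      'pass_ht_0b' : events passing the H_T cut with 0 b-tags,
      'window_0b'  : events in the (M_Z', M_B) window with 0 b-tags.
    The 0-b-tag window shape is transferred to the >=3 b-tag region,
    multiplicity bin by multiplicity bin."""
    total = 0.0
    for nj, c in counts.items():
        if nj < 4 or c['pass_ht_0b'] == 0:
            continue
        total += c['pass_ht_3b'] / c['pass_ht_0b'] * c['window_0b']
    return total

# hypothetical counts for a single jet-multiplicity bin
counts = {6: {'pass_ht_3b': 120.0, 'pass_ht_0b': 90000.0, 'window_0b': 40.0}}
print(sideband_estimate(counts))
```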
For most of the signal points we investigated, the accuracy of the estimate using n_b = 0 events is comparable to the level of agreement shown in Table 3; the exceptions are windows that are heavily signal-dominated, i.e. they have a large S/√B. If the background estimation is not quite as good as we assume, the discovery potential changes very little. Furthermore, other handles for estimating the background will be at experimentalists' disposal, including events with reconstructed M_Z′ and/or M_B values outside the window, or perhaps events for which the mass-matching that identifies Z′ and/or B candidates fails at 10% but satisfies some less stringent requirement.

Table 3: For L = 300 fb⁻¹, actual background counts (top row) and the associated estimates using events with zero, one, or two b-tagged jets. The actual counts come with Monte Carlo uncertainties, and the estimates come with Monte Carlo uncertainties followed by statistical uncertainties associated with the estimation method. Also shown are signal-to-background ratios for each window and n_b requirement.

Figure 10 shows the projected discovery potential in the M_Z′ − M_B plane for L = 30 fb⁻¹, 300 fb⁻¹, and 3000 fb⁻¹. Discovery at the 5σ level is possible for a broad range of M_Z′, with M_B ≲ 1250 GeV for 30 fb⁻¹, M_B ≲ 1600 GeV for 300 fb⁻¹, and M_B ≲ 2000 GeV for 3000 fb⁻¹.
Conclusions
The hunt for new colored fermions is an integral part of the broad search strategy employed at the LHC. To date almost all searches for new vector-like partners of the top or bottom quarks have been in final states containing SM bosons (W, Z, or h). We have pointed out that, by virtue of being vector-like, it is straightforward for the heavy quarks to be charged under additional gauge groups, and that these couplings may dominate their decays. We have focussed on the simple case of a new U(1)′ group under which a vector-like B quark is charged. We have described a simple realisation of this scenario, based around the concept of the "Effective Z′". We have outlined the wide range of new phenomena and interesting search channels that exist in this class of simple models, which contain only three new particles. If the kinetic mixing between U(1)′ and hypercharge is small, the new channels all involve multiple b quarks. We demonstrated that there is a broad region of parameter space in these models where the new decay B → bZ′/φ → b(bb) dominates.
We have presented a search method that can simultaneously observe the new quark and the new gauge boson in final states containing up to six b quarks, by carrying out a two-dimensional mass reconstruction of events. The large QCD and smaller tt̄ backgrounds can be effectively reduced by requiring pairs of resonances whose masses are close, and which in turn contain sub-resonances whose masses reconstruct to be the same. Although there are many b quarks in the final state, we take a conservative approach and require only three b-tags. A better understanding of b-tagging efficiencies may allow this requirement to be strengthened, leading to a further suppression of background. The kinematics of the process are sensitive to the mass splitting between B and Z′, and we account for this by varying our reconstruction technique with the number of final-state jets and employing the techniques of N-subjettiness to uncover merged jets from the Z′ decay. We find that discovery at the 5σ level is possible for a broad range of M_Z′, with M_B ≲ 1250 GeV for 30 fb⁻¹, with M_B ≲ 1600 GeV for 300 fb⁻¹, and with M_B ≲ 2000 GeV for 3000 fb⁻¹.
It is intriguing that the recently observed Galactic center excess can be explained by weak scale DM annihilating into b quarks. If this takes place through a new mediator one might expect new b-quark partners which may themselves decay into the mediator. We have provided one such example of this and have shown that the LHC has the capability to test this DM scenario over much of its parameter space.
Finally, the technique we describe is not unique to the model we analyse and will be widely applicable to many models where a new particle is pair-produced and decays to a lighter new state, which finally decays to SM particles. For instance, the approach we advocate has an obvious extension to vector-like top quarks, T → tZ′ → t(bb)/t(tt). It would also enhance RPV gluino searches [50,51] in the case where the squarks are lighter than the gluinos.
Note Added
While this work was in the final stages of completion, CMS released details of a search for T in the exotic mode T → bW with the W decaying leptonically [52]. The CMS analysis also searches simultaneously for two new particles and carries out a two-dimensional mass reconstruction of events, but the final state and particle content are different from what we consider.
"year": 2016,
"sha1": "ec0d69caabe63aebc1da0986ece82c58426c7b3c",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP01(2016)038.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f7baf4a56b0eaa5c572d461816daecd9e72b32a6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Prevention of Stress-Induced Depressive-like Behavior by Saffron Extract Is Associated with Modulation of Kynurenine Pathway and Monoamine Neurotransmission
Depressive disorders are a major public health concern. Despite currently available treatment options, their prevalence steadily increases, and a high rate of therapeutic failure is often reported, together with important antidepressant-related side effects. This highlights the need to improve existing therapeutic strategies, including by using nutritional interventions. In that context, saffron recently received particular attention for its beneficial effects on mood, although the underlying mechanisms are poorly understood. This study investigated in mice the impact of a saffron extract (Safr’Inside™; 6.25 mg/kg, per os) on acute restraint stress (ARS)-induced depressive-like behavior and related neurobiological alterations, by focusing on hypothalamic–pituitary–adrenal axis, inflammation-related metabolic pathways, and monoaminergic systems, all known to be altered by stress and involved in depressive disorder pathophysiology. When given before stress onset, Safr’Inside administration attenuated ARS-induced depressive-like behavior in the forced swim test. Importantly, it concomitantly reversed several stress-induced monoamine dysregulations and modulated the expression of key enzymes of the kynurenine pathway, likely reducing kynurenine-related neurotoxicity. These results show that saffron pretreatment prevents the development of stress-induced depressive symptoms and improves our understanding about the underlying mechanisms, which is a central issue to validate the therapeutic relevance of nutritional interventions with saffron in depressed patients.
Introduction
Depressive disorders are among the most common and debilitating psychiatric illnesses, affecting over 322 million people worldwide [1]. To make matters worse, their prevalence is constantly rising despite currently available treatment options, thus complicating patient management and care. Highly prevalent in people afflicted with chronic inflammatory conditions [2][3][4] or exposed to stressful life events [5][6][7], depression is often characterized by chronic relapse. Moreover, a significant proportion of patients does not respond to conventional antidepressants (ADs), while still developing aversive side effects [8]. These major health concerns emphasize the need to expand treatment options by identifying new therapeutic strategies able to effectively target the complex pathophysiological mechanisms of depression.
As the majority of studies carried out to decipher the neurobiological underpinnings of depression started by highlighting the role of brain monoamine deficiency, most conventional ADs primarily aim to increase the synaptic availability of neurotransmitters, mainly serotonin (5-HT), but also dopamine (DA) and noradrenaline (NA), by specifically acting on their receptors, transporters, and/or catabolic enzymes [9][10][11]. However, it is now known that many of these medications, particularly those inhibiting monoamine reuptake, can also act by targeting other pathophysiological mechanisms of depression [12]. These include dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis, classically characterized by increased cortisol levels and the desensitization of glucocorticoid receptors (GR) resulting in impaired glucocorticoid negative feedback [5,[13][14][15], as well as alterations of hippocampal neuroplasticity, reflected in decreased brain-derived neurotrophic factor (BDNF) levels [16][17][18].
More recently, other theories about the etiology of depression have emerged, notably related to the involvement of inflammatory processes [19]. In this context, growing attention has been paid to the critical role of two metabolic pathways, the kynurenine (KYN) and tetrahydrobiopterin (BH4) pathways, whose alteration in inflammatory conditions ultimately impairs monoaminergic neurotransmission, while inducing depressive symptoms [20,21]. Upon inflammatory activation, the indoleamine 2,3-dioxygenase (IDO) degrades the 5-HT precursor tryptophan into KYN, at the expense of 5-HT. Concurrently, the inflammation-driven activation of downstream enzymes of the KYN pathway promotes glutamate-related neurotoxicity through the synthesis of several KYN neurotoxic derivatives [18,22]. Inflammatory cytokines also dysregulate the BH4 pathway, particularly by acting on GTP-cyclohydrolase-1 (GTPCH1) and in turn favoring the production of toxic derivatives at the cost of BH4. Since BH4 is an essential cofactor for monoamine synthesis, including DA, its disruption ultimately impairs DA neurotransmission, which likely contributes to depressive symptoms [22]. Accordingly, the KYN and BH4 pathways have been proposed as potential therapeutic targets for the treatment of depressive symptoms occurring notably in contexts of inflammation [22,23].
Based on this knowledge, research aiming to identify new treatment options for depression has been mainly directed towards the possibility of improving if not all, at least several of the neurobiological alterations just mentioned. For this purpose, and keeping in mind the need to concomitantly reduce side effects accompanying pharmacological treatments, special interest has been recently paid to alternative therapeutic strategies. They particularly include nutritional interventions using essential nutrients or bioactive plant extracts with potential neuromodulatory and/or immunomodulatory properties [24,25]. In that context, saffron, a spice extracted from Crocus sativus L and used for centuries for its positive impact on health, appears as a promising candidate [26,27]. Over the last decades, saffron bioactive compounds have received more and more attention for their multiple valuable therapeutic properties, including antioxidant, anti-inflammatory, anxiolytic, or antidepressant properties [28][29][30][31]. Interestingly, compelling clinical studies have already shown that saffron administration improves mood in patients suffering from mild to major depression [26,27,32,33]. Several preclinical studies support these findings by reporting a reduction in depressive-like behaviors following saffron extract administration [34][35][36][37]. Nicely extending these findings, we recently demonstrated in naive mice that this behavioral improvement is associated with modulation of monoaminergic neurotransmission [38]. Beyond this neuromodulatory impact [38,39], saffron was also found to modulate the redox and inflammatory status [29,40], as well as HPA axis activity [41,42], although it has been suggested that it may preferentially interfere with the HPA axis under stressful rather than basal conditions [43,44]. However, the contribution of these different mechanisms to the behavioral effects of saffron remains to be confirmed, particularly in stressful conditions that play an essential role in the etiology of depression [5][6][7].
In order to address this issue, the present study aimed to assess the effects of an oral administration of Safr'Inside, a standardized saffron extract, on stress-induced depressive-like behavior and related neurobiological alterations in mice. Saffron was provided either before or just after stress exposure, in order to dissociate the potential preventing effect from a treatment effect. The acute restraint stress (ARS) paradigm was chosen because it elicits depressive-like behaviors, together with the dysregulation of most of the main neurobiological systems underlying their induction and suspected to be directly or indirectly targeted by conventional ADs [45][46][47]. This study shows that only Safr'Inside pretreatment prevents stress-induced depressive-like behavior and highlights the improvement of inflammatory processes and monoamine neurotransmission as potential underlying mechanisms.
Animals and Treatment
Eight-week old male C57BL/6J mice were obtained from Janvier labs (Le Genest-Saint-Isle, France). Upon arrival, they were randomly allocated to the different experimental groups and housed collectively (7-8 mice/cage) in an enriched (cardboard rodent homes and cotton nestlets) and controlled environment (22 ± 2 °C, 40% humidity), with a 12 h/12 h light/dark cycle (light on at 7:30 a.m.) and free access to water and food (Standard Rodent Diet A04, SAFE, Augy, France). All animal procedures were conducted in strict compliance with the European Union recommendations (2010/63/EU) and were approved by the local ethical committee (approval ID A16873). Maximal efforts were made to reduce the suffering and number of animals used.
On the day of the test, a freshly prepared solution of saffron extract and its vehicle (water) were orally administered using a mouse-adapted feeding probe (ECIMED 1.33 × 30 mm). The saffron extract (Safr'Inside™; Activ'Inside, Beychac-et-Caillau, France) was a standardized extract obtained according to the patent FR 3054443 and containing more than 25 active compounds, including crocins (>3%), safranal (>0.2%), picrocrocin derivatives (>1%), and kaempferol derivatives (>0.1%), as measured by the U-HPLC method. The dose of Safr'Inside™ used (6.25 mg/kg per os), as well as its route (gavage) and volume of injection (10 mL/kg), were chosen based on previous studies [38,48]. In order to minimize stress reaction, mice were handled and habituated to the gavage procedure for several days before the test.
Experimental Design
The experimental design is summarized in Figure 1. Control mice (n = 14 mice) were unstressed and received only water. Stressed mice were submitted to a 4-h acute restraint stress (ARS) and randomly distributed into 3 groups (n = 15/group) administered with Safr'Inside™ (30 min before the stress onset or 10 min after its end), or with water (at the two timepoints) for mice non-treated with saffron. The ARS procedure was essentially conducted as previously described [45]. Briefly, stressed mice were immobilized using polypropylene conical tubes (29.1 mm in diameter × 114.4 mm long) pierced with multiple holes to allow breathing and to limit the rise in body temperature. Four hours later, they were removed from the restraint tubes and put back to their respective home cage. Control mice stayed in their home cage during the entire stress procedure. All mice were tested in the forced swim test (FST) 30 min after the last administration of saffron extract or water and euthanized immediately after the behavioral test.
Behavioral Measures
Behavioral characterization was carried out during the light phase in a devoted soundproof room equipped with a recording device that allows the behaviors to be analysed later by a trained observer blind to experimental conditions, using an ethological software ("The Observer XT 15", Noldus, The Netherlands). The FST was used here to measure depressive-like behaviors. This well-validated test is routinely employed in pharmacological studies to screen drugs based on their possible ability to reduce these behaviors [49].
As previously described [38,50], mice were placed individually in a cylindrical glass tank (diameter: 16 cm; height: 31 cm) containing warm water (25 °C ± 1 °C) for 6 min during which the duration of swimming, climbing, and immobility was measured. Water was changed between each session. Increased immobility time is believed to reflect a state of helplessness that is reduced by conventional ADs. To further evaluate the impact of saffron extract on depressive-like behavior, we also determined within each experimental group the proportion of mice that displayed longer immobility than an immobility threshold, which was defined as the average percentage of time spent immobile by the control group [38].
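As a concrete illustration, this immobility index reduces to counting, per group, the mice whose immobility exceeds the control-group mean. A minimal sketch with made-up percentages (not the study's data), including the Fisher's exact comparison used later for this index:

```python
# Sketch of the immobility index: fraction of mice in each group whose
# immobility exceeds the control-group mean, compared with Fisher's exact
# test on the resulting contingency table. Values are placeholders.
import numpy as np
from scipy.stats import fisher_exact

control = np.array([30, 41, 28, 44, 36, 33, 38])   # % time immobile (example)
stressed = np.array([52, 61, 33, 58, 47, 64, 55])

threshold = control.mean()                          # 35.7% in the study
above_ctrl = int((control > threshold).sum())
above_ars = int((stressed > threshold).sum())

table = [[above_ars, len(stressed) - above_ars],
         [above_ctrl, len(control) - above_ctrl]]
odds, p = fisher_exact(table)
print(f"index: ARS {above_ars/len(stressed):.0%} vs control "
      f"{above_ctrl/len(control):.0%}, Fisher p = {p:.3f}")
```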
Tissue Sampling
At the end of the FST, mice were euthanized with terminal pentobarbital/lidocaine anesthesia (300/30 mg/kg, intraperitoneally). Blood samples were immediately collected from the heart into tubes coated with an anticoagulant (EDTA 10%) and centrifuged (2000× g) for 20 min at 4 °C. Supernatants containing plasma fraction were next aliquoted and stored at −80 °C until corticosterone content was assayed. A transcardiac perfusion with chilled PBS 1X (2 min, 10 mL/min) was then rapidly performed in order to clean tissues from all traces of blood. Brains were extracted from the skulls and carefully dissected to hemilaterally collect structures of interest, i.e., the frontal cortex (FCx), striatum (STR), and hippocampus (HPC), which were immediately placed in sterile tubes, dry ice frozen, and stored at −80 °C for further analysis.
Enzyme Immunoassays (EIA)
The Corticosterone-HS kit (ImmunoDiagnostic System, Pouilly, France) was used to measure plasma corticosterone levels following the manufacturer's instructions. All samples were diluted 10× and run in duplicate. The absorbance at 450 nm was measured by spectrophotometry (Victor3V, PerkinElmer, Villebon-sur-Yvette, France). Corticosterone concentrations (expressed in ng/mL) were calculated according to the standard range provided by the supplier.
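Back-calculating concentrations from an EIA standard curve is commonly done with a four-parameter logistic (4PL) fit; the exact routine used by the kit supplier is not stated here, so the sketch below, with made-up standards, is only indicative.

```python
# Sketch of back-calculating corticosterone from an EIA standard curve,
# assuming a 4PL model for absorbance vs concentration. All numbers are
# illustrative placeholders, not the kit's actual standards.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):            # absorbance as a function of conc.
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.5, 1.5, 5.0, 15.0, 50.0, 150.0])    # ng/mL (example)
std_abs = np.array([1.85, 1.60, 1.10, 0.65, 0.30, 0.12])   # OD at 450 nm

(a, b, c, d), _ = curve_fit(four_pl, std_conc, std_abs, p0=[2.0, 1.0, 10.0, 0.05])

def concentration(absorbance, dilution=10):
    """Invert the fitted 4PL curve and correct for the 10x sample dilution."""
    return c * ((a - d) / (absorbance - d) - 1.0) ** (1.0 / b) * dilution

print(f"{concentration(0.80):.1f} ng/mL")   # for a sample OD of 0.80
```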
High Performance Liquid Chromatography Coupled to Electrochemical Detection (HPLC-EC)
Concentrations of monoamines (DA, 5-HT) and their metabolites (dihydroxyphenylacetic acid (DOPAC), homovanillic acid (HVA), and 5-hydroxyindoleacetic acid (5-HIAA)) were measured in the FCx, STR, and HPC by HPLC-EC, essentially as previously described [38]. Briefly, 600 µL of extraction buffer were added to the different brain structures, which were then homogenized in a TissuLyser system (3 × 1 min at 30 Hz, 4 °C; Qiagen, Courtaboeuf, France). After 20 min of centrifugation (16,000× g, 4 °C), the supernatant containing the analytes to be measured was collected and divided into two aliquots. The first was immediately frozen at −80 °C for protein analysis by Western blotting (WB), while the second was centrifuged further for 2 min in filter tubes (1600× g, 4 °C) before being stored at −80 °C until use for HPLC-EC. For this purpose, 20 µL of each sample were injected into a high-performance liquid chromatograph equipped with an electrochemical detector coupled to a Chromeleon integration 6.8 software (Dionex, Sunnyvale, CA, USA), which allows the detection of the different analytes based on their respective retention time. Final concentrations were calculated against external standards, which were injected twice daily, and expressed per g of fresh tissue.
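Quantification against external standards is linear in peak area. A minimal sketch of the back-calculation, using the 600 µL extraction volume mentioned above; the peak areas and tissue mass are assumed example values.

```python
# Sketch of HPLC-EC quantification against external standards: analyte
# concentration scales with the ratio of sample to standard peak areas,
# then is normalized to fresh tissue weight. Numbers are illustrative.
def quantify(peak_area, std_area, std_conc_ng_ml,
             extract_volume_ml=0.6, tissue_mass_g=0.025):
    ng_per_ml = peak_area / std_area * std_conc_ng_ml
    return ng_per_ml * extract_volume_ml / tissue_mass_g   # ng per g tissue

# e.g. a 5-HT peak of 4.2e5 counts vs a 100 ng/mL standard at 6.0e5 counts
print(f"{quantify(4.2e5, 6.0e5, 100.0):.0f} ng/g of fresh tissue")
```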
RT-qPCR

RT-qPCR was performed as previously described [38]. Briefly, total RNAs were extracted from half brain structures using Trizol (Invitrogen, Life Technologies, Villebon-sur-Yvette, France) and reverse-transcribed into complementary DNA using Superscript III (Invitrogen, Life Technologies, Villebon-sur-Yvette, France). For the amplification, 2 µL of cDNA at 20 µg/µL were run in duplicate with Taqman LightCycler ® 480 Probes Master mix (Roche Diagnostics, Meylan, France) and appropriate FAM-labeled Taqman primers (ThermoFisher Scientific, Waltham, MA, USA). Fluorescence was measured by a Light cycler 480 II system (Roche Diagnostics, Meylan, France). Results were normalized using Beta-2-Microglobulin (B2M) as a house-keeping gene and expressed as relative expression compared to the control group. All primer references are given in Supplementary Table S1.
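Normalizing to B2M and referencing to the control group is consistent with the standard 2^-ΔΔCt calculation, although the text does not name the method explicitly. A minimal sketch with placeholder Ct values:

```python
# Relative gene expression normalized to B2M and referenced to the control
# group, assuming the conventional 2^-ddCt method. Ct values are made up.
import numpy as np

def relative_expression(ct_target, ct_b2m, ct_target_ctrl, ct_b2m_ctrl):
    d_ct = np.asarray(ct_target) - np.asarray(ct_b2m)               # per sample
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_b2m_ctrl))
    return 2.0 ** -(d_ct - d_ct_ctrl)    # fold change vs control mean

print(relative_expression([24.1, 23.8], [18.0, 17.9],
                          [23.0, 23.2], [18.1, 18.0]))
```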
Western Blotting (WB)
Protein levels of DA and 5-HT receptors and transporters were assessed by WB in the same brain areas as gene expression. In order to optimize these measures while avoiding the management of many samples at the same time, which can unspecifically increase the interindividual variability, they were performed in 2 steps. The first aimed to compare Safr'Inside-treated stressed mice with their untreated counterparts. The second step, only carried out for proteins differentially expressed between these two groups, was then dedicated to compare the control and ARS groups, to determine if saffron selectively acts on stress-induced protein level alterations or independently from stress.
Statistical Analyses
Statistical analyses were performed using Statistica 6 software (StatSoft, Tulsa, OK, USA), and possible outliers were identified with the Graphpad Outlier Calculator [54] and removed from the data. First, normality was assessed using the Shapiro-Wilk test. When the distribution was normal, parametric statistics with group as between-subject factor were used (one-way ANOVA, followed by Fisher's LSD post hoc test when necessary). For non-normal distributions, statistical validity was assessed with a non-parametric test (Kruskal-Wallis H test followed by multiple comparison of ranks when appropriate). WB data were analyzed using an unpaired t-test or Mann-Whitney U test depending on normality. The immobility index was analysed with Fisher's exact test on contingency tables. The statistical level of significance was set at p ≤ 0.05. All data are presented as means ± SEM.
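The decision flow just described (Shapiro-Wilk, then ANOVA or Kruskal-Wallis) can be summarized in a few lines. The sketch below uses unprotected pairwise t-tests as a Fisher-LSD-style follow-up, which approximates the LSD procedure without its pooled error term; all values are illustrative.

```python
# Normality-gated group comparison: Shapiro-Wilk per group, then one-way
# ANOVA (with pairwise t-tests if significant) or Kruskal-Wallis.
from itertools import combinations
from scipy import stats

def compare_groups(groups, alpha=0.05):
    normal = all(stats.shapiro(g)[1] > alpha for g in groups.values())
    if normal:
        stat, p = stats.f_oneway(*groups.values())
        test = "one-way ANOVA"
    else:
        stat, p = stats.kruskal(*groups.values())
        test = "Kruskal-Wallis"
    print(f"{test}: stat = {stat:.2f}, p = {p:.4f}")
    if normal and p <= alpha:            # LSD-style unprotected comparisons
        for (n1, g1), (n2, g2) in combinations(groups.items(), 2):
            print(f"  {n1} vs {n2}: p = {stats.ttest_ind(g1, g2)[1]:.4f}")

compare_groups({
    "Control": [35, 30, 41, 28, 44, 36, 33],
    "ARS": [52, 61, 33, 58, 47, 64, 55],
    "Safr pre-ARS": [38, 29, 45, 31, 40, 37, 42],
})
```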
Safr'Inside Administration Does Not Modify ARS-Induced Weight Loss
We first measured body weight changes as a classical physiological index of stress impact. Although all groups displayed similar body weight before stress exposure, we observed differences in weight loss at the end of the procedure (F (1,55) = 348; p ≤ 0.001). Indeed, the ARS procedure significantly induces weight loss in all stressed mice compared to controls, regardless of pre/post-ARS Safr'Inside treatment, as revealed by post hoc analyses ( Figure 2).
Safr'Inside Administration Reduces ARS-Induced Depressive-like Behavior Only When Given before Stress Exposure
The impact of pre-stress and post-stress Safr'Inside administration on ARS-induced depressive-like behavior was assessed in a classical rodent test of depression, the FST [49]. A one-way ANOVA revealed a significant overall difference between groups in the immobility time (F (1,55) = 620; p ≤ 0.05; Figure 3A). Additional post hoc analyses showed that this parameter was differentially increased by ARS depending on treatment conditions. Specifically, mice treated after stress exposure are significantly more immobile than controls (Safr'Inside post-ARS vs. Control: p ≤ 0.05), while it is not the case for those receiving a pre-administration of saffron. Although the difference between untreated stressed mice and controls does not reach significance when analysed with a global post-hoc test (p = 0.07), a direct group-by-group comparison revealed a significant effect of ARS (ARS vs. Control: t (1,14) = 2.49; p ≤ 0.05). Importantly, this was confirmed by the immobility index that is similar in the pretreated and control groups ( Figure 3B), but different from that of untreated stressed group (ARS vs. Control p ≤ 0.05; Safr'Inside pre-ARS vs. ARS: p = 0.06). This index reflects, for each experimental group, the proportion of mice spending more time immobile than the average percentage of time spent immobile by control mice (35.7%). Interestingly, this proportion is drastically increased in untreated-stressed mice (73.3%) and mice treated after stress (66.7%), while it remains very close to that of controls in the saffron pretreated group only (40.0%). Lastly, ARS also tends to decrease swimming time (one-way ANOVA F (1,55) = 716; p = 0.06; Figure 3C), while climbing time is unchanged and very short regardless of the group ( Figure 3D).
Safr'Inside Administration Only Slightly Changes HPA Axis Function and Related Neurobiological Targets
In order to identify the neurobiological mechanisms potentially underlying the behavioral improvement induced by saffron extract when administered before ARS exposure, we assessed the impact of this treatment condition on stress-related neurobiochemical changes, starting with one of the main mediators of stress, the HPA axis. As shown in Figure 4A, circulating corticosterone levels measured just after the FST were similar in the Control, ARS, and Safr'Inside pre-ARS groups. Nevertheless, these groups differ regarding GR gene expression in the FCx (F (1,41) = 4165; p ≤ 0.001) and the HPC (F (1,36) = 864; p ≤ 0.05), but not the STR (Figure 4B). Since the behavioral effects of stress have been previously related to its negative impact on HPC neurogenesis, notably through corticosterone-induced impairment of BDNF expression [16], this was measured in the different experimental conditions. The one-way ANOVA analysis showed a difference in hippocampal BDNF gene expression among groups (F (1,37) = 855; p ≤ 0.001; Figure 4C). As expected, exposure to ARS downregulates BDNF transcript levels (ARS vs. Control: p ≤ 0.001). However, this downregulation was not changed by the pre-administration of Safr'Inside (Safr'Inside pre-ARS vs. Control: p ≤ 0.001), suggesting that its behavioral impact is unlikely related to a reduction of the deleterious effect of stress on hippocampal neurogenesis, at least as assessed through the local gene expression of BDNF.
Safr'Inside Administration Positively Regulates the Kynurenine Pathway
Several studies report that activation of the KYN pathway, which is known to contribute to inflammation-related depressive-like behavior [20,22,55], is also found in different stress models of depression [45,46,56,57]. In line with these findings, we assessed KYN pathway activation by measuring brain expression levels of its key enzymes. The statistical analysis revealed differential effects of ARS and treatment depending on the enzyme and brain area considered. In the FCx, we observed significant differences between groups regarding the expression of two important enzymes of the neurotoxic side of the KYN pathway, namely KMO (F (1,20) = 181; p ≤ 0.05) and HAAO (Kruskal-Wallis analysis: p ≤ 0.05; Figure 5A). Interestingly, they are both decreased by Safr'Inside administration as compared to controls (Safr'Inside pre-ARS vs. Control; KMO: p ≤ 0.05 and HAAO: p ≤ 0.01). Additionally, the expression of KAT, the enzyme conversely promoting neuroprotection, is also changed in this brain area (F (1,15) = 385; p ≤ 0.01). Specifically, ARS drastically downregulates KAT expression (ARS vs. Control: p ≤ 0.001), but this effect is partially blunted by Safr'Inside administration (Safr'Inside pre-ARS vs. Control: p ≤ 0.05). Akin to these findings, the neurotoxicity ratio, as reflected by the KMO/KAT ratio, is also different depending on the group considered (F (1,13) = 96.3; p ≤ 0.05; Figure 5B). The post hoc analysis showed that this ratio is significantly lower in mice pretreated with Safr'Inside than in untreated stressed mice (Safr'Inside pre-ARS vs. ARS: p ≤ 0.01). In the STR, KAT was also differentially expressed among groups (Kruskal-Wallis analysis: p ≤ 0.05; Figure 5C), this expression being significantly higher in saffron-treated mice than in the untreated stressed group (Safr'Inside pre-ARS vs. ARS: p ≤ 0.05). Accordingly, the neurotoxicity ratio tends to be reduced by Safr'Inside pretreatment (Kruskal-Wallis analysis: p = 0.06; Figure 5D). In the HPC, this ratio was similar in the different groups. However, the one-way ANOVA showed a significant effect on the hippocampal expression of HAAO (F (1,36) = 960; p ≤ 0.01; Figure 5E). Indeed, ARS decreases HAAO expression (ARS vs. Control: p ≤ 0.05), with this reduction being even stronger in Safr'Inside-treated mice (Safr'Inside pre-ARS vs. Control: p ≤ 0.001). Overall, these data suggest that saffron extract administration reduces KYN-related neurotoxicity in a brain-area-dependent manner. Together with the KYN pathway, the BH4 pathway, whose activity is changed by immobilization stress [58], also participates in the induction of depressive symptoms [22]. Therefore, it may similarly play a role in Safr'Inside-induced behavioral improvement. However, this does not seem to be the case, as revealed by assessment of gene expression of several key elements of the BH4 pathway, including GTPCH1, the first and limiting enzyme of the pathway that, together with PTS and SPR, leads to BH4 synthesis [22,23]. Indeed, although GTPCH1 expression is significantly altered by the experimental conditions in the three brain areas of interest (FCx: F (1,40) = 438; p ≤ 0.001; STR: F (1,25) = 282; p ≤ 0.01; HPC: Kruskal-Wallis analysis p ≤ 0.001; Figure 5A,C,E respectively), the post hoc analyses revealed that ARS increases GTPCH1 expression regardless of saffron administration (FCx: ARS vs. Control: p ≤ 0.001; Safr'Inside pre-ARS vs. Control: p ≤ 0.001; STR: ARS vs. Control: p ≤ 0.05; Safr'Inside pre-ARS vs. Control: p ≤ 0.001; and HPC: ARS vs.
Control: p ≤ 0.05; Safr'Inside pre-ARS vs. Control: p ≤ 0.001). In addition, gene expression of PTS and SPR is unchanged regardless of brain area and experimental condition.
Safr'Inside Administration Partially Prevents ARS-Induced Alterations of Neurotransmission
Based on the current knowledge of the mechanisms of action of conventional ADs and our recent data showing that Safr'Inside administration improves monoaminergic neurotransmission in basal conditions [38], we next measured its potential impact on ARS-induced monoamine alterations. For this purpose, we first assessed whole tissue contents of 5-HT, DA and their metabolites in the three structures of interest (Table 1). Neither ARS exposure nor administration of Safr'Inside significantly changed 5-HT and DA levels. However, they differentially alter their metabolite concentrations depending on the brain area, except for DOPAC whose levels, when detectable, were similar in all mice. Regarding 5-HIAA levels, statistical analyses revealed differences between groups in the FCx (Kruskal-Wallis analysis: p ≤ 0.01), STR (F (1,24) = 253; p ≤ 0.01) and HPC (Kruskal-Wallis analysis: p ≤ 0.001). Indeed, ARS increases 5-HIAA concentrations in the three brain areas of untreated stressed mice (ARS vs. Control: FCx and HPC: p ≤ 0.001; STR: p ≤ 0.01), while the pre-administration of Safr'Inside only prevents this increase in the FCx (Safr'Inside pre-ARS vs. Control: STR and HPC: p ≤ 0.01). Consistent with this, 5-HT turnover ratio (5-HIAA/5-HT) was different between groups in the FCx (F (1,40) = 349; p ≤ 0.01; Figure 6A) and HPC (Kruskal-Wallis analysis: p ≤ 0.001; Figure 6C). Indeed, ARS significantly augments this ratio in the FCx of untreated (ARS vs. Control: p ≤ 0.001; Figure 6A), but not treated, stressed mice (Safr'Inside pre-ARS vs. ARS: p ≤ 0.05). On the other hand, it was enhanced in the HPC of all stressed mice, regardless of saffron administration (ARS vs. Control: p ≤ 0.001; Safr'Inside pre-ARS vs. Control: p ≤ 0.001; Figure 6C). The one-way ANOVA also showed increased levels of HVA, the final DA metabolite, in the FCx (F (1,38) = 188; p ≤ 0.05) and the STR (Kruskal-Wallis analysis: p ≤ 0.05; Table 1). In the FCx, this is related to an ARS effect independent from treatment (ARS vs. Control: p ≤ 0.01), although the local DA turnover ratio (HVA/DA) is enhanced in saffron-treated mice (Kruskal-Wallis analysis: p ≤ 0.01; Safr'Inside pre-ARS vs. Control: p ≤ 0.01; Figure 6A). On the contrary, the ARS-induced enhancement of HVA levels reported in the STR of untreated stressed mice (ARS vs. Control: p ≤ 0.05, Table 1) is abolished by Safr'Inside pretreatment, as revealed by the multiple group analysis. Consequently, the striatal DA turnover ratio is significantly increased in stressed mice (Kruskal-Wallis analysis: p ≤ 0.05; ARS vs. Control: p ≤ 0.01; Figure 6B), unless they were pretreated with Safr'Inside.
Regarding the 5-HT pathway, statistical analyses revealed differences between groups in MAO-A expression in the FCx (F (1,40) = 1226; p ≤ 0.05; Figure 7A), STR (F (1,24) = 962; p ≤ 0.01; Figure 7B) and HPC (Kruskal-Wallis analysis: p ≤ 0.01; Figure 7C). As compared to controls, this expression is indeed decreased in stressed mice, whether they are treated (Safr'Inside pre-ARS vs. Control: p ≤ 0.05 in the FCx and HPC and p ≤ 0.01 in the STR) or not with saffron (ARS vs. Control: p ≤ 0.05 in the FCx and HPC and p ≤ 0.01 in the STR). Concerning 5-HT receptors and transporter, no differences were observed at the protein level in the FCx and HPC ( Figure 8E,F), but their gene expression does change among groups in the HPC (F (1,37) = 1635; p ≤ 0.01 and Kruskal-Wallis analysis: p ≤ 0.05 for 5-HTR1a and SERT respectively; Figure 7C), a particularly important brain area for the therapeutic effect of serotoninergic ADs [65,66]. Specifically, ARS increases the expression of 5-HTR1a in all stressed mice (ARS vs. Control: p ≤ 0.001; Safr'Inside pre-ARS vs. Control: p ≤ 0.05; Figure 7C), while that of SERT is only upregulated by ARS in the absence of saffron pretreatment (ARS vs. Control; p ≤ 0.05; Figure 7C), which prevents this effect.
Concerning the DA pathway, no difference between groups was reported in any brain area for the gene expression of the enzymes more specifically involved in DA catabolism (COMT and MAO-B), or of the dopaminergic receptors and transporter (Figure 7), meaning that neither ARS nor saffron administration changes the expression of these factors. These results were confirmed at the protein level for DRD2 and DAT (Figure 8), but differ regarding DRD1. Indeed, Safr'Inside-treated stressed mice display decreased DRD1 protein levels in the FCx (t (1,22) = 2.14; p ≤ 0.05; Figure 8A) and STR (Mann-Whitney U test: p ≤ 0.05; Figure 8C) compared to untreated stressed mice, although ARS does not significantly change these levels (Figure 8B,C). Taken together, these results show that Safr'Inside administration modulates KYN pathway activation, as well as dopaminergic and serotonergic neurotransmission, which may contribute to its preventive effect on ARS-induced depressive-like behavior (Figure 9).
Discussion
Due to the high failure rate and associated side effects of classical ADs, more and more studies search for natural alternatives to improve the management of mood disorders. Although promising results are increasingly reported regarding saffron supplementation, the underlying mechanisms remain poorly understood. Here, we show for the first time that saffron extract interferes with ARS-induced depressive-like behavior when administered before, but not after, stress exposure. Importantly, we also report that Safr'Inside pretreatment concomitantly reduces KYN-related neurotoxicity and improves stress-induced monoamine system dysregulation in a brain-area-dependent manner. Hence, this study highlights the ability of saffron extracts to improve depressive-like behavior under stress conditions, which are recognized predictors of depression, while finely regulating the function of key systems in the pathophysiology of the disease.
The ARS is a well-validated paradigm to study stress-induced depressive-like behavior, since it causes several emotional and neurobiological alterations modeling those reported in depressive disorders [45][46][47][67]. Accordingly, untreated mice submitted to ARS in the current study displayed increased depressive-like behavior, as particularly shown by the high proportion of mice from this group spending significantly more time immobile than controls in the FST, which is widely used to preclinically test candidate compounds for their antidepressant activity [68]. As expected, this is associated with HPA axis dysregulation and alterations of neurogenesis, KYN pathway activation, and monoamine neurotransmission. Exposing saffron-treated mice to the ARS procedure allowed testing whether the antidepressant-like properties of saffron previously reported in unstimulated conditions (i.e., unstressed mice) [34,35,38] extend to stress conditions, as recently reported for some of its bioactive compounds [68,69]. Interestingly, several lines of compelling evidence strongly suggest that this is the case. Indeed, we showed here that administering Safr'Inside before ARS onset normalized the proportion of mice being highly immobile, which was doubled in the untreated stressed group compared to the control group. Consistent with this, the stress-induced increase of immobility was not detected in mice pretreated with saffron, which behaved as control mice in the FST, therefore supporting the fact that saffron is effective in reducing stress-induced depressive-like behavior. It could be argued that saffron-pretreated mice were not significantly different from untreated stressed mice either, which might suggest that the lack of increased immobility might instead simply reflect a non-specific response. However, this is unlikely, since saffron pretreatment also concomitantly targeted the neurobiological processes known to underlie the reported behavioral alterations. Moreover, the behavioral effect of stress remained significant in mice treated after stress, whereas the two saffron-treated groups differed only in the timing of saffron administration. In addition, the current results fit with compelling clinical and preclinical studies reporting its ability to improve mood and depressive symptoms [26,31,34,35,70,71], including in stressful conditions, although the number of studies is much smaller in this case [68,69,72]. Taken together, these findings highlight the need to further investigate the behavioral impact of saffron under stress conditions. Meanwhile, the current study already provides new and valuable information on the antidepressant-like properties of saffron in that context. Importantly, it shows that a behavioral effect was detected despite the very low dose used here (6.25 mg/kg per os), as compared to those reported in the literature [34,35,71,73]. It is worth mentioning that this dose was initially calculated based on that classically administered to humans (30 mg/day) by using the guidelines for dose-equivalence calculation provided by the FDA [74]. Altogether, these findings support the translational relevance of the present study.
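For reference, the FDA body-surface-area conversion can be reproduced with a short calculation; the 60 kg reference weight and the standard Km factors (37 for humans, 3 for mice) are our assumptions taken from the FDA guidance, not values reported in the paper.

```python
# Worked sketch of the FDA human-to-mouse dose-equivalence calculation,
# assuming a 60 kg adult and the standard human/mouse Km factors.
human_dose_mg_day = 30.0
human_weight_kg = 60.0                 # assumed reference weight
km_human, km_mouse = 37.0, 3.0         # FDA guidance Km factors

human_dose_mg_kg = human_dose_mg_day / human_weight_kg          # 0.5 mg/kg
mouse_dose_mg_kg = human_dose_mg_kg * (km_human / km_mouse)     # ~6.2 mg/kg
print(f"{mouse_dose_mg_kg:.2f} mg/kg")  # close to the 6.25 mg/kg dose used
```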
Since the HPA axis is one of the main mediators of stress and the first to be activated upon stress exposure, it may appear as a likely target of saffron to drive its behavioral impact. In line with this assumption, a few studies previously reported that saffron reverses the stress-induced increase in corticosterone levels [42,44], but this is not always the case [38]. Here, we cannot definitively conclude about the potential effect of Safr'Inside administration on the stress-induced increase in corticosterone levels, since this increase, although expected based on other studies [45][46][47], was not detected in the current experiment. It is noteworthy, however, that corticosterone levels were measured almost 5 h after ARS onset and in blood samples collected right after the FST, which may have stressed control mice, as suggested by their corticosterone levels. Measuring corticosterone at different time points during ARS exposure and/or right at its end, rather than after the FST, should help address this issue. However, it was not possible to carry out this time-course in the present study. Meanwhile, the fact that saffron extract pretreatment does not reverse the ARS-induced decrease of GR expression reported in the FCx, in agreement with previously published data [45,46], argues against a main role of HPA axis modulation by Safr'Inside in its protective behavioral effect. Similarly, it does not seem to act by reducing the impact of stress on hippocampal neurogenesis, at least as assessed through the expression of BDNF, which is one of the main intermediates between stress-induced impairment of hippocampal neurogenesis and the development of related depressive symptoms [16,18,75]. In addition, the gene expression of this important neurotrophic factor is well-known to be under GR-mediated regulation in stressed conditions [76]. Consistent with the literature [45,46], we show here that ARS decreases hippocampal BDNF expression. Importantly, this down-regulation is not prevented by saffron extract, despite its protective effect against ARS-induced depressive-like behavior. Of note, however, while these data argue against a main role of BDNF, they do not rule out the involvement of other neurotrophic factors. Supporting this, different saffron extracts have been recently shown to upregulate protein and transcript levels of several neurotrophic factors, including BDNF, although this was reported in other experimental conditions and after chronic administration [73,77,78]. On the other hand, BDNF hippocampal alterations related to depression seem to be preferentially associated with cognitive rather than emotional symptom dimensions [79]. Together, these findings highlight the need to study in greater depth the impact of saffron extracts on neurogenesis, particularly by considering other neurotrophic factors and behavioral endpoints, but this is beyond the scope of the present study.
Mounting evidence points to inflammation-driven alterations of the KYN and BH4 pathways as key players in the induction of depressive symptoms reported in inflammatory and/or stress conditions, due to their overall impact on 5-HT and DA metabolism, as well as increased oxidative damage and glutamate-related neurotoxicity [20,22,55,58]. Accordingly, they are increasingly considered as potential targets for the development of new therapeutic strategies in those conditions [22,80]. In agreement with previously published data [45,46,[56][57][58], the two pathways are altered by ARS in untreated mice. However, we show for the first time that these alterations are differentially impacted by saffron. Increased BH4 synthesis resulting from upregulation of GTPCH1 activity has been previously shown to play a key role in ARS-induced oxidative damage [58]. We did not assess BH4 levels or indices of oxidative stress, but our data on the impact of ARS on the BH4 pathway suggest that the same could likely happen here. However, this assumption, as well as the potential link between these alterations and increased depressive-like behavior, have yet to be demonstrated. Meanwhile, the fact that saffron pretreatment did not reduce ARS-induced changes of the BH4 pathway suggests that this pathway is unlikely to mediate the behavioral improvement. On the contrary, we report that saffron targets different KYN enzymes depending on the brain area, which could in turn contribute to reducing the imbalance between the neuroprotective and neurotoxic sides of the KYN pathway, as suggested by calculation of the neurotoxicity ratio. Indeed, saffron decreases the expression of enzymes promoting oxidative stress and glutamate-related neurotoxicity in the FCx, and rather increases KAT expression in the STR, therefore favoring the local synthesis of the neuroprotective KYN metabolite, kynurenic acid (KYNA). It could be argued that changes of gene expression do not necessarily imply concomitant changes of enzymatic activity. However, several studies previously reported that it is actually the case for KYN pathway enzymes [55,81]. Taken together, our results suggest that Safr'Inside-induced modulation of KYN-related glutamate neurotoxicity may contribute to reduce associated depressive-like behavior. This assumption fits with mounting studies highlighting the link between generation of neurotoxic KYN metabolites, particularly quinolinic acid (QUIN) that promotes excitotoxicity by binding to NMDA glutamatergic receptors, and the severity of depressive symptoms [22,80,82]. It is also supported by preclinical studies reporting that different phytochemical compounds contained in Safr'Inside, particularly safranal and crocins, protect against brain oxidative damage induced by QUIN administration [83] and behavioral alterations associated with direct manipulations of NMDA receptor activation [84]. Interestingly, a recent study reports that counteracting QUIN effects by pharmacologically blocking these receptors with ketamine prevents the induction of depressive-like behaviors in a murine model of inflammation [85]. In line with these preclinical data, ketamine infusion in depressed patients resistant to ADs has been shown to improve their depressive symptomatology [85]. Moreover, their KYNA/QUIN ratio predicts their response to ketamine.
These findings point to the modulation of KYN pathway-driven neurotoxicity as a promising new treatment strategy and, together with the present study, strengthen the case for testing saffron-based therapeutic approaches in this context.
The 5-HT system is a well-known target of stress and an important player in the etiology and treatment of depressive disorders [9][10][11][86]. Accordingly, most conventional ADs aim to restore serotonergic neurotransmission, mainly by acting on 5-HT reuptake or catabolism [12,87]. Here, ARS increases SERT expression in the HPC and 5-HIAA concentrations in all brain areas assessed. It also increases 5-HT turnover, as assessed through the 5-HIAA/5-HT ratio, in the FCx and HPC. These results agree with previous studies using other stress paradigms [88][89][90][91][92][93]. ARS also downregulates MAO-A expression, the enzyme responsible for 5-HT degradation. Although this result may appear counterintuitive, it is consistent with previously published data [76]. In addition, it is worth mentioning that the impact of acute stress on MAO-A expression has been shown to change over time, as for most GR-responsive genes [76]. Here, MAO-A expression and 5-HIAA levels were assessed only once and at the same time point, which was not necessarily suitable to capture the causal link between the two measures. Similarly, the fact that we simultaneously assessed the impact of stress on the hippocampal gene expression of 5-HTR1a, which is increased in stressed mice as previously shown [94], and its local protein levels, which were still unchanged at that time, also likely explains this apparent discrepancy. Further studies would be required to obtain a dynamic overview of the effect of stress on each of these 5-HT factors and their potential interdependence, but this was not the question addressed in the present study. Importantly, we show that the pre-administration of saffron prevents ARS-induced impairment of 5-HT neurotransmission, including by acting on the same targets as conventional ADs [72]. Indeed, it blocks the increase of 5-HIAA levels and of the 5-HIAA/5-HT ratio in the FCx, as well as SERT upregulation in the HPC. This last result is consistent with our previously published data [38] and particularly interesting in light of the key role of the HPC in the therapeutic properties of serotonergic ADs [65,66]. Also in agreement with our earlier study [38], saffron extract does not change hippocampal 5-HTR1a expression, whose increase by ARS would be more related to an adaptive response to stress than to the behavioral alterations it elicits [95]. Altogether, these results point to saffron-induced modulation of 5-HT neurotransmission as an important player in the associated improvement of depressive-like behavior.
Along with the serotonergic system, the dopaminergic mesolimbic and mesocortical pathways are altered upon stress exposure, as well as in depression [11,12,96,97]. Specifically, DA oxidation products that result from its increased turnover and catabolism have been suggested to contribute to the pathophysiology of neuropsychiatric disorders and stress-induced depression [93]. In accordance with findings reporting high HVA concentrations in rodents exposed to acute stress [98], here we show that ARS significantly increases the levels of this metabolite in the FCx and STR, with values much higher in the latter brain area, suggesting local enhancement of DA turnover. Supporting this assumption, the HVA/DA ratio is augmented by ARS in the STR, this increase not reaching significance in the FCx. Importantly, saffron extract pretreatment prevents the effect of ARS on both HVA levels and the HVA/DA ratio in the STR, but not the FCx. Of note, this ratio is even slightly enhanced in the FCx of saffron-treated mice, although with an important interindividual variability. The underlying reasons for this are not clear at this time, emphasizing the need to further study saffron-induced modulation of DA metabolism. Meanwhile, it is important to note that differential, or even opposite, modulations of the dopaminergic mesolimbic and mesocortical pathways have been already reported, including in the context of depression [99,100]. Indeed, confirming the differential impact of saffron on DA neurotransmission according to the pathway should be particularly interesting considering their preferential involvement in different depressive symptoms, the mesolimbic pathway being, for example, particularly critical to those related to reward processing and motivation [12,100,101]. Interestingly, we also reported that DRD1 protein levels measured in the STR and FCx, two brain areas where this receptor is highly expressed [98], are lower in stressed mice receiving saffron than in their untreated counterparts. In line with these data, a nutritional supplementation with n-3 polyunsaturated fatty acids, which are well-known for their beneficial effects on mood, has been recently shown to improve DA-related behavioral alterations by downregulating DRD1 levels [102]. It may be tempting to similarly propose a link between saffron-induced DRD1 downregulation and improvement of stress-induced depressive-like behavior. However, the lack of detectable impact of stress on DRD1 levels argues so far against this hypothesis. The significance of the impact of saffron on this receptor has yet to be elucidated, as well as its potential effect on other key markers of dopaminergic activity.
Overall, the present results already point to saffron-induced modulation of dopaminergic neurotransmission, together with serotonergic neurotransmission and activity of the kynurenine pathway, as likely mediators to improve stress-induced depressive-like behaviors. A limitation to this work is that the potential links between these different neurobiological systems cannot be deduced from the present findings. This study represents an essential first step, and upcoming experiments should overcome this limitation. The broad screening of the neurobiological effects of saffron carried out here will serve to refine the avenues to be explored regarding the mechanisms of action of saffron in the context of depressive disorders, as well as the symptoms to preferentially study in humans and/or to experimentally model in rodents.
Conclusions
In conclusion, this study provides valuable information on the protective impact of saffron against the development of depressive symptoms related to stress exposure. It also provides important cues on how to use saffron-based alternative nutritional strategies, pointing to preventive treatment as a first-line solution to tackle the induction of depressive symptoms in that context. In addition, by highlighting the ability of saffron to differentially target several pathophysiological bases of depression, this work may provide insights concerning the symptom dimensions for which nutritional interventions with saffron might be effective, and by extension, the clinical profile of patients who may benefit from these alternative therapeutic approaches. Altogether, these findings represent useful information for the better management and treatment of depressive disorders with saffron supplementation.
"year": 2021,
"sha1": "4a3dc2db00fba39e05f5601098bef719d8d4e980",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/13/12/2155/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5ada5218e677fef53ce32bd186167cf2d59beda",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Consumption attachments of Brazilian fans of the National Football League: A netnography on Twitter interactions
Purpose – The National Football League (NFL), the most lucrative sports league in the world, has its second largest foreign audience in Brazil. Its Brazilian broadcasts encourage the audience to go beyond television reception and interact through a social media platform, taking part in collective consumption. Thus, attachments are established between consumers and the league. Based on this, this study aims to analyze how the interaction in social media of the Brazilian NFL audience, during the broadcasts of its games, results in consumption attachments. Design/methodology/approach – The method undertaken was netnography, commonly used to investigate cultural practices occurring in online environments. The research corpus consisted of messages posted on Twitter hashtags created by the ESPN Brazil channels to amplify its broadcasts of the league during the 2016-2017 and 2017-2018 seasons. Findings – The findings of this study indicate that Brazilian audience interaction in social media establishes consumer attachment with the NFL by means of the brand elements and aspects of social life mediated by the league. Research limitations/implications – The research observed only the part of the Brazilian audience of the NFL that engages with the broadcasts of the games through social media. Practical implications – This study demonstrates how brands can use social media to enable social interactions that create or improve consumer attachments with them. Originality/value – The study presents how a media brand embedded in American culture has become the target of attachment by Brazilian fans through social media interactions.
Introduction
Sports are one of the most important forms of modern entertainment: they are massively mediated and no longer limited by physical and geographical constraints, and they also disseminate characteristics of the culture they are embedded in (Jackson, 2014; Toffoletti, 2015). According to Whannel (2014), sports can be divided into two categories: practice sports, which have popular appeal and a significant number of practitioners in a given place or culture, and media sports, which are popularized by worldwide broadcasts and resonate among viewers. However, these categories should not be seen as disjointed, because a sport with a large number of practitioners can gain space in the media, whereas a widely broadcast sport can attract new practitioners. Thus, media sports can turn into practice sports and vice versa.
Some media sports are intrinsically associated with the brand of the league or confederation that popularized them; thus, consumers become attached to them due to their notoriety among the worldwide audience. However, although some of these media sports are broadcast in several countries, their practice and wide popular appeal are restricted to the cultural context they belong to (Wenner, 2012; Whannel, 2014). This is the case for American football and its main league, the National Football League (NFL). Despite its growing audience in several countries (ESPN Brasil, 2017; TSN, 2016), the popular and cultural appeal of this sport remains fundamentally restricted to its country of origin (Oates, Furness, & Oriard, 2014; Wenner, 2012).
The NFL has changed its rules over the years, and this may have happened because American football became established as a media sport. The league often improves and innovates its broadcasting to favor fans who primarily enjoy the games on TV and on other platforms, rather than those who go to stadiums. Thus, it is crafted as a package of brand elements (e.g. teams, players, and experiences) that enable viewers' attachment, regardless of where they watch the games (Spinda, Wann, & Hardin, 2015).
NFL brand elements accumulate great value in the financial market; moreover, they are relevant in three of the four characteristics indicated by Forbes Magazine as determinants of sports brand value: events, business, and teams (Ozanian, 2017). The NFL organizes the most lucrative sports event in the global media industry, the Super Bowl, whose revenue is estimated to reach US$663m. In addition, its 32 teams are among the top 50 most profitable teams across all sports (Badenhausen, 2017). Furthermore, the engagement of its audience influences the recall of advertisements broadcast during the games (Pavelchak, Antil, & Munch, 1988) and shapes public perception of the brands (Jenkins, 2013). However, NFL brand value is not limited to the economic sphere. The league is also influential in the cultural sphere (Wenner, 2012, 2014; Whannel, 2014), as American football represents the principle of effort and meritocracy, one of the national maxims in the USA (Schimmel, 2013); it also works as a way to present US culture to foreign audiences and countries (Ha et al., 2014).
Although the American football tradition is not established in Brazil, the NFL has managed to beat audience records in the country (ESPN Brasil, 2017). Nowadays, the country is the second largest foreign consumer market of the league; it only stays behind Mexico (ESPN Brasil, 2015; Francischini, 2018). It was no surprise when, in 2018, the championship game – Super Bowl LII – hit a record audience in the country on pay TV for the third year in a row (Firmino, 2018; UOL Esportes, 2018). As Brazil has one of the largest consumer markets in social media interaction (Yokoyama & Sekiguchi, 2014), ESPN Brazil mobilizes viewers through exclusive hashtags launched on the Twitter platform, generating a community of fans interacting on this social media (ESPN Brasil, 2017). Such interactions are encouraged by the Brazilian ESPN channels: during game breaks, narrators and commentators show and comment on messages, answer sport-related questions posted by the audience on Twitter and highlight the community's participation in the broadcast (Firmino, 2018; Mesquita, 2017).
Consumer communities are forged based on a collective consumption principle, due to their members' attachment to brands and to each other (McAlexander, Schouten, & Koenig, 2002; Muñiz & O'Guinn, 2001). Consumer attachment is the value attributed by consumers to connections established with goods, services or brands (Kaiser, Schreier, & Janiszewski, 2017; Wallendorf & Arnould, 1988). Such attachment is underpinned by the symbolic value and sense of belonging fostered by the league (Cova & Cova, 2002; Moraes & Abreu, 2017).
As brand consumption is influenced by the links established between consumers (Aiken, Campbell, & Koch, 2013; Muñiz & Schau, 2005), it gives them the opportunity to express themselves by sharing their reasons for attaching to a given brand (Scaraboto, Vargas, & Costa, 2012). Therefore, when consumers interact for this purpose, they no longer limit themselves to passive consumption (Ritzer, 2014). Many consumers in today's technological society search for tips and suggestions from other consumers in virtual environments (Franco & Leão, 2016; Ritzer & Jurgenson, 2010). Social networks have increasingly become a meeting point for consumers who exchange information about their consumption experiences (Kozinets, 2010; Nunes & ArrudaFilho, 2018). Thus, monitoring and encouraging interaction in virtual brand communities (VBC) is an innovative strategy that brands have been adopting to establish strong relationships with consumers (Guschwan, 2012).
Consumers in the social media context are productive toward the brands they are attached to, as they give these brands meaning and enhance their reach and visibility. Therefore, they escape the traditional dualistic consumption and production model to become prosumers (Ritzer, 2014; Zajc, 2015). Prosumers attached to media products can be seen as fans (de Souza-Leão & Costa, 2018); they intensively collaborate to resonate their practices in a specific consumer culture, in a coordinated and organized way (Costa & Leão, 2017; Jenkins, 2006; Rodrigues, Chimenti, & Nogueira, 2015). Thus, fans' attachment to a given brand enables emotional bonds, social responsibility and community interaction (Aiken et al., 2013; Kozinets, 2006).
Based on this line of reasoning, it is possible to infer that the NFL's collective consumption through social media in Brazil produces consumer attachments. Thus, the current study asks the following research question: how does the interaction of Brazilian fans in social media during NFL broadcasts result in consumption attachments?
The primary aim of the present research was to investigate consumer attachment in the entertainment industry context, based on bonds established between fans and brands in the social network, by taking into consideration the prosumerist practices. The study takes part in the consumer culture theory agenda, which focuses on consumer practices, articulations and social organizations in cultural contexts (Arnould & Thompson, 2005).
Fan's productive consumption
The Web 2.0 context allows understanding of how media ubiquity affects consumer practices (Franco & Leão, 2016). Brands have been using digital technologies to interact with actual and potential customers (Hackley & Hackley, 2018). The media-consumer interface has popularized innovative marketing formats and models to promote relationships between brands and consumers, mainly in the present decade (Castro, 2012).
Social networks are consolidated as a cultural space for spontaneous interaction and, therefore, as a means to promote brands and maintain their relationships with consumers (Guschwan, 2012). These spaces illustrate how cultures are symbolic, but they can also result from market actions. Cultures encompass social groups who use consumption as a way to elaborate symbols or to attribute collective meanings to their environments (Arnould & Thompson, 2005; Askegaard, 2014). Consumers interact with each other in a productive way to reconfigure the consumer cultures they belong to and to integrate a participatory culture (Jenkins, 2006).
Participatory cultures are all about how collective engagement renews the cultural participation of social individuals. They are potentiated by media convergence, which is possible due to technological changes. Thus, individuals move from sociocultural isolation to communal participation (Jenkins, 2006;Rodrigues et al., 2015). The ones who spontaneously engage in a particular participatory culture escape the passive content receptor position and start archiving, signifying and reproducing the cultural content appropriated by them (Jenkins, 2014;Tombleson & Wolf, 2017).
Media text contents are intensely appropriated by fans, who re-create and re-signify them (Guschwan, 2012; Sandvoss, 2005); this practice differentiates fans from regular consumers (Sandvoss, 2005). Most consumer practices adopted by fans can result in consumer attachment (Tumbat & Belk, 2011). According to Jenkins (2008), fans are the most active members of the audience that receives media texts. They recreate media texts because they feel as responsible for them as their producers do (Guschwan, 2012; Jenkins, 2006). Therefore, they constantly express their attachment to media products by spreading such content to consolidate the culture they take part in; this is the practice through which they assume their responsibility (Hills, 2013; Jenkins, Ford, & Green, 2013).
The performance of fans can be considered, in several aspects, as prosumption acts (de Souza-Leão & Costa, 2018). Prosumers are productive consumers who do not fit the dual model that separates production from consumption; thus, they assume functions attributed to both sides (Ritzer, 2014; Zajc, 2015). This practice has become even more evident in the Web 2.0 context, as the interaction between prosumers has been intensified by the ease of information exchange (Ritzer & Jurgenson, 2010) and content sharing (Collins, 2010; Ritzer, Dean, & Jurgenson, 2012). When prosumers share inferences and feelings about media products and try to spread and popularize them, they assume a task that is often up to producers (Ritzer, 2014; Zajc, 2015).
Sports fans can be seen as prosumers, as they play the role of additional players in their teams and set up the atmosphere that defines the sports consumer experience (Andrews & Ritzer, 2017; Price & Palmero, 2014). According to Andrews and Ritzer (2018), sports prosumers in the Web 2.0 context are not only productive fans for their teams, but also consumers who generate content that can enhance and enrich the consumption experience.
The performance of fans often takes place in the fandoms (Jenkins, 2008), which are social spaces where they establish relationships and feel comfortable and safe to express their feelings and opinions about media products of common interest (Hills, 2013). Fans legitimize new forms of collective consumption in these social spaces; they configure fandoms as consumer communities responsible for perpetuating and expanding collectivity concepts (Kozinets, 2006).
Attachments in collective consumption
Consumption practices are one of the most visible cultural expressions of modern society, as they can establish consumer attachment (Arnould & Thompson, 2005;Belk & Casotti, 2014). Individuals living in contemporary society start materializing feelings and valuing their connection with others (Bauman, 2007;Schroll, Schnurr, & Grewal, 2018). Thus, understanding the linking value set by consumption is one of the main tasks of marketing studies nowadays (Schau, Gilly, & Wolfinbarger, 2009;Kaiser et al., 2017).
Consumer attachment is forged in collective consumption and it can change the cultural behavior of a given society (Cova & White, 2010). Collective consumption connects consumers belonging to the same micro-social level (e.g. family and friends) to the ones belonging to the macro-social level (e.g. market relations and the society as a whole) (Carú & Cova, 2003); this phenomenon can be observed in encounters happening in consumer communities (Cova & Cova, 2014).
Consumer communities are micro-social groups configuring modern society; individuals can simultaneously belong to several consumer communities (Cova & Cova, 2002;Moraes & Abreu, 2017). These groups are formed and maintained based on attachments established among their members, who identify themselves with brands and share consumption activities (Henriques & Pereira, 2018;McAlexander et al., 2002;Muñiz & O'Guinn, 2001). Therefore, consumer attachment is a social interaction experience between members of a consumer community, which is based on internal codes of a given culture and could also be presented to individuals who do not belong to such culture (Pihl, 2014).
Because the consumption of a given brand is influenced by the relationship between consumers and the brand itself (Aiken et al., 2013;Muñiz & Schau, 2005), consumer attachment makes consumers consider themselves loyal to brands. Consequently, they incorporate brands' goods and services into their routine (Humphreys & Wang, 2018;Sharma, Kumar, & Borah, 2017) and humanize their characteristics to enable identification processes (González & Francisco García, 2013). Discrepant feelings such as excitement and abstinence are often associated with consumer attachment to brands (Ahuvia, 2005;Albert, Merunka, & Valette-Florence, 2013;Masset & Decrop, 2016).
Consumers connect to others who share their consumption preferences (Schroll et al., 2018;Wallendorf & Arnould, 1988). The interaction between them influences their perception about the consumed product, as well as their relationship with it (Parmentier & Fischer, 2015;Muñiz & Schau, 2005). On the other hand, the consumer-brand relationship often influences other consumers engaged in social relationships (Epp & Price, 2010). Consumer attachment to brands is so strong that opinions and feelings about brands can get new consumers to attach to them (Batra, Ahuvia, & Bagozzi, 2012;Scaraboto et al., 2012). Wallendorf and Arnould (1988) indicate three types of consumer attachment in an approach widely adopted in marketing studies. Possessive attachment refers to the material connection between consumers and brands. Social attachment, in its turn, results from relationships established between consumers on social media during, or due to, consumption practices. Finally, favorite attachment is based on personal memories about the importance of consumer practices.
Virtual communities have emerged in the digital culture context. According to Castells (2003), these communities have emerged from individuals' desire for freedom. Most of these groups do not meet each other in the physical world and many of their participants choose to remain anonymous (Kozinets, 1999). This relationship is based on computer-mediated communication; thus, its members behave differently from members of offline communities. In addition, each virtual community presents a different cultural composition, a fact that generates a unique feeling of collectivity in its members (Muñiz & O'Guinn, 2001).
This context has opened room for VBCs, that is, consumer communities of shared involvement with a certain brand (Freitas & Leão, 2012). According to Muñiz and O'Guinn (2001), VBCs are based on three central features, namely: rituals and traditions, shared consciousness and a sense of moral responsibility. Rituals and traditions indicate how social interactions are reproduced to constitute and establish a given brand, mainly if one takes into consideration the historical perspective on the relationship between brand and society. Shared consciousness concerns the social and affective identity established between the community and its participants, including the emotional involvement resulting from the relationship between them and the culture connecting each other. Finally, the sense of moral responsibility is determined by VBC members' perception that there is a purpose in such belonging, which is evidenced in the way they are integrated, retained and assisted in such communities, as well as in their suitable behavior (Henriques & Pereira, 2018). Kozinets (2006) advocates that the connection between fans driven by their common interest in a certain media product can create a brand fandom; it may happen through their interactions and through the gathering and production of content associated with such product. Their spontaneous and productive actions help in spreading brand content, mainly on social media, which stands out for its lack of physical (i.e. geographical) barriers.
Methodology
Netnography was the research methodology adopted in the current study. It is commonly used in studies focused on investigating online-mediated interactions (Kozinets, 2010) and on better understanding virtual cultures (Guesalag, Pierce, & Scaraboto, 2016;Hamilton & Alexander, 2017;Izogo & Jayawardhena, 2018). The aforementioned method can reveal the richness of online social interactions and relationships (Henriques & Pereira, 2018), as well as the way social and cultural perceptions about the world are maintained and modified depending on the observed community (Nunes & ArrudaFilho, 2018). Furthermore, the method reveals how structures and social relationships in online cultures can support relationships in the physical world (Underberg & Zorn, 2013).
The approach adopted in the current study was the one suggested by Kozinets (2001), which is strongly applied in marketing research. It was adapted from the classical ethnography used to investigate online consumer cultures. The method enables analyzing consumer interactions taking place in virtual environments (Bartl, Kannan, & Stockinger, 2016;Henriques & Pereira, 2018). It is also often used in studies about media sports and online relationships established among sports fans (Filo, Lock, & Karg, 2015;Naess, 2017;Stavros, Meng, Westberg, & Farrelly, 2014).
Netnographic studies differ from regular ethnography because they focus on better understanding online consumer communities, by taking into consideration the relationships between, and practices adopted by, members of such communities (Gammarano & ArrudaFilho, 2014; Kozinets, 2010). However, as an ethnographic study, netnography is also based on observation; thus, it requires researchers to have the ability to perceive details and particularities of the investigated culture and to analyze its modus operandi and meaning construction process (Barboza & ArrudaFilho, 2014; Underberg & Zorn, 2013). Table I explains the netnography stages, their definitions and the way they were implemented in the present research, based on guidelines by Kozinets (2001, 2006, 2010, 2015). The way people interact in the social network and the variety of topics addressed in this virtual environment guided the codification process involved in the data analysis. The spontaneous use of Twitter hashtags integrates its users into a cultural community representative of the investigated phenomenon (Kozinets, 2010; Nunes & ArrudaFilho, 2018).
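Although the coding itself was done manually, corpus assembly of this kind is often supported by simple scripting. Purely as an illustration (the file name, column names and hashtags below are hypothetical placeholders, not the study's actual data), a short Python sketch shows how an exported set of tweets could be filtered to broadcast hashtags and grouped by date before qualitative coding:

```python
import pandas as pd

# Hypothetical export of collected tweets; file and columns are assumptions.
tweets = pd.read_csv("tweets_export.csv", parse_dates=["created_at"])

# Illustrative placeholders for the broadcaster's exclusive hashtags.
hashtags = ["#NFLnaESPN", "#SuperBowlNaESPN"]

# Keep only messages containing at least one tracked hashtag.
mask = tweets["text"].str.contains("|".join(hashtags), case=False, na=False)
corpus = tweets[mask]

# Group by broadcast date to ease a season-by-season reading of the corpus.
per_day = corpus.groupby(corpus["created_at"].dt.date).size()
print(per_day.sort_index())
```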
The current study also took into consideration research quality criteria based on what Kozinets (2015) highlights as appropriate for a netnography: rigorous method application, axiologically demarcated cultural entrée, commitment to achieving data saturation, data interpretation in compliance with the available literature or theorization, researchers' reflexivity and respect for the observed praxis. In addition, analysis triangulation was implemented among the researchers; research findings are presented in a clear, rich and detailed description (Paiva, Leão, & Mello, 2011). This process was implemented in the inductive and deductive phases of the analysis. Provisional codes were suggested and discussed by the authors of the current study on a weekly basis during the first NFL season to minimize the likelihood of unnoticed aspects. Marketing literature was explored during the offseason period between seasons to match the empirical results. Finally, the categories and their relationships with the adjusted codes were discussed during the second NFL season to validate the results.
Results and discussion
The analysis performed in the current study identified seven codes, which were organized into two categories, as shown in Table II. In addition, Figure 1 (relationship between codes and categories) explains how the codes relate to the categories, as well as how they converge to the main result of the research.

Table II. Categories and codes identified in the analysis (source: prepared by the authors):

Category: attachment to the NFL based on brand elements. [1] Preference for NFL teams: Brazilian fans establish links with the league teams (a.k.a. franchises). [2] Identification with league players: Brazilian NFL fans consider players vital to league enjoyment, since they understand that players make sports spectacles complete; fans admire players for their contribution to developing the sport, the league and the teams, and they emphasize players' outstanding athleticism. [3] Euphoria and melancholy toward the Super Bowl: the final match of NFL seasons triggers different emotions in fans; on the one hand, Brazilian fans feel ecstatic to witness the most important game of the year; on the other hand, they feel sorrowful due to the offseason (the period without NFL games).

Category: attachment to the NFL through social life aspects. [4] Fans' daily life is changed by the NFL: the routine of some Brazilian NFL fans changes during the league season; they avoid scheduling appointments at match times and stay awake at dawn to watch the games, even on the eve of business days. [5] Family life fitted to the NFL: Brazilian NFL fans are so involved with the league that they compromise moments often devoted to family activities (e.g. weekends) to watch the games during the season; therefore, they strive to introduce their families to the sport and turn the game broadcasts into family events. [6] Friendships forged and maintained through the NFL: Brazilian NFL fans enjoy the league games in the company of friends; they turn the game broadcasts into social events to be shared with friends, a fact that deepens their relationships; enthusiasts even invite friends who are not familiar with the modality as an attempt to introduce them to the NFL, and some of these friends invite other friends and so forth, which expands their friendship network. [7] Spreading the league through social media: Brazilian NFL fans use other social media platforms, such as WhatsApp, Facebook and Instagram, to spread league content to others; they feel responsible for promoting the league in their social networks to win new fans.

Attachment to the National Football League based on brand elements

Brazilian NFL fans become attached to the league due to its unique brand features. Code 1 evidences the link established between Brazilian NFL fans and league teams (a.k.a. franchises). When fans use tweets to announce their support for NFL teams, they consolidate their attachment to the brand, as they experience a deep emotional bond with such teams. An example of this lies in @CheeseheadsBr; this Twitter profile is managed by a local supporter of the Green Bay Packers, one of the most popular NFL teams in the country. On October 16, 2016, the fan posted a message with a photo of himself at Lambeau Field, the Packers' stadium (one of the most iconic stadiums in American football history), in which he declared to have accomplished one of his biggest dreams in life.
The preference for one of the NFL franchises acts in a complementary way to Code 2, which reveals the identification of Brazilian fans with league players, who constitute another brand element. The most admired players are the main stars of the teams. In the opposite, although complementary, direction, the admiration for certain players can make fans support the teams they play for. This bond can be seen in a message posted by a viewer who reported his girlfriend's involvement with the New England Patriots, another franchise that is quite popular among Brazilian fans. According to his tweet from September 28, 2016, his girlfriend had named her newborn kittens after her favorite Patriots players: "Danny Amendola" and "Jimmy Garopolo".
Fans' admiration for players also shapes the way they experience NFL games, as they admire players' athletic performance. Tweets released on October 2, 2016, during the game between the Atlanta Falcons and the Carolina Panthers, demonstrated how impressed Brazilian NFL fans were with the performance of Julio Jones (the Falcons' main receiver). These messages highlighted how spectacular it was to watch the player in action and how it improved their game-watching experience.
With respect to the Super Bowl, Code 3 reveals a mix of enthusiasm and frustration experienced by Brazilian viewers who watched the final NFL game. They were simultaneously excited about the most anticipated game of the season and distressed at realizing that it was the last game for the following seven months. This mixture of feelings was evident in tweets published on the first Sunday of February in the two NFL seasons. A flood of messages highlighted the excitement of watching the Super Bowl, while expressing "saudade", a Portuguese term of unique meaning referring to a blend of nostalgia and missing something or someone.

Attachment to the National Football League based on social life aspects

Brazilian fans' attachment to the NFL is reinforced by the sociability involving the enjoyment of the league. Code 4 illustrates this bond and reveals how fans prioritize watching the games. A tweet published on September 8, 2016, expressed this bond well: a viewer asked the Brazilian ESPN broadcast team to help him explain to his girlfriend how important it was to stay home and watch the league games live.
Another facet of this code indicates how watching the games justifies sleep deprivation and even low productivity in the morning following match days, as the broadcasting of the games extends into the dawn. During the first game of the 2016-2017 season, a viewer happily wrote that the season of sleepless nights had begun.
Code 5 concerns brand attachment in a more intimate form of social interaction, i.e. the one involving family members. Part of the Brazilian NFL audience influences family members to follow the broadcasts of the games; they act as league ambassadors who present and explain the rules and peculiarities of American football. In doing so, they try to reconcile the moments that would often be devoted to family activities with the enjoyment of American sports games, incorporating the social and affective relationships they have with their loved ones into NFL consumption. Tweets posted at several moments reveal how Brazilian fans of the league tried to introduce their parents, boyfriends and girlfriends, siblings and other relatives to the rules and particularities of American football by relying on the charisma of the Brazilian ESPN broadcasts.
Friendship is another type of affective relationship associated with NFL consumption in Brazil. Code 6 shows how the broadcasts of league games become moments of friendship among fans. More than that, NFL broadcasts in Brazil bring together sports fans who do not know each other and who become friends due to their consumption of league games. Similar to what happens with family members (Code 5), some fans introduce the sport to their friends. Messages revealing collective league consumption among friends, and how league consumption was responsible for establishing new friendships, were posted at different times of the two observed seasons.
Finally, Code 7 reveals how Brazilian NFL fans play the role of spreading NFL information, characteristics and rules to a broader audience on different social media platforms to expand the community of fans. In a tweet published during the 2018 Super Bowl, a fan shared how he used WhatsApp to, in his own words, "catechize" his girlfriend into watching the event with him.
Final considerations
Findings in the current study indicate that Brazilian audience interactions in social media establish consumer attachment to the NFL based on brand elements and social life aspects mediated by the league. These findings illustrate how the observed interactions enable relationships that are, simultaneously, social- and market-based. It is possible to say that Brazilian fans' attachment to the NFL is established at a brand level due to relationships established in social, affective and even moral dimensions.
The direct relationship between Brazilian fans and brand elements depicts the attachment type defined by Wallendorf and Arnould (1988) as a relationship between, and nurtured by, consumers and their favorite entities or objects. On the other hand, sports fans often extend their involvement with the game to the sport's brand elements (Harris & Ogbonna, 2008; Hoegele, Schmidt, & Torgler, 2016). Choosing the team to cheer for is essential to sports fans, and it may drive them into attachment to sporting brands (Harris & Ogbonna, 2008; Hoegele et al., 2016). The admiration for players' athleticism, in its turn, generates fan identification with athletes (González & Francisco García, 2013; Healy & McDonagh, 2013), who are given a celebrity or even "human brand" status (Thomson, 2006). Finally, the involvement with the Super Bowl shows the emotional bond established between Brazilian fans and the NFL. The consumption of one's favorite brand feeds the feeling of possession, which can equally lead to the excitement of enjoying it (Ahuvia, 2005; Albert et al., 2013), as well as to abstinence for losing it (Masset & Decrop, 2016).
The league also establishes attachment with Brazilian fans based on the sociability among its viewers. People interacting about certain consumption objects often nourish the feeling of experiencing a special event together (Schroll et al., 2018; Wallendorf & Arnould, 1988). Besides, consumers become loyal to a certain brand when they incorporate it into their everyday lives (Humphreys & Wang, 2018). The enjoyment of the games among family members and friends shows how this experience happens in the midst of social integration. The existence of a consumption object shared by social groups intensifies, materializes and transforms the relationships between its members and between them and products and/or brands (Epp & Price, 2010). In addition, the routine changes implemented to allow fans to watch the games indicate the establishment of a ritual linked to one's feelings toward the brand or consumption momentum (Batra et al., 2012; Sharma et al., 2017). Finally, spreading the league evidences the moral commitment of fans toward the NFL, as they act as brand ambassadors. This happens because some consumers consider themselves responsible for propagating the frontiers of brand communities (Jenkins et al., 2009; Muñiz & O'Guinn, 2001). All these aspects corroborate the characteristics pointed out by Muñiz and O'Guinn (2001) as fundamental for VBC establishment and maintenance. Furthermore, brands consider VBCs and social media as environments fit to promote products or services in the Web 2.0 context and, mainly, to implement strategies that can generate identification and attachment to them. According to Jenkins (2006), cultural spaces are capable of turning consumers into brand fans.
The current study has some limitations, such as the observation of only part of the Brazilian NFL audience. Only fans who engaged with the broadcasts of the games through social media were observed, leaving aside the part of the audience that did not use this resource. However, this choice corresponded to the definition of the research scope. Another limitation lies in the coverage of only two NFL broadcast seasons. This delimitation was justified by time constraints on performing the investigation. However, the research conduction and results indicated that the study achieved its purpose.
The current study addressed how the NFL – a media brand imbricated in American culture – has been the target of attachment by Brazilian fans through social media interactions. Thus, the research contributed to a better understanding of how these platforms can reveal consumer attachments in contemporary society. The research evidenced that social media is an environment where brands can encourage prosumerist acts and social interactions to create or improve consumer attachments to them.
As a potential unfolding of this research, investigating Brazilian fans' attachment to other American sports leagues, such as Major League Baseball and the National Basketball Association, would enable extending and comparing results with the findings of the present study. These leagues are broadcasted in Brazil in the same way as the NFL, a fact that would make this new research possible. | 2020-04-02T09:09:33.736Z | 2020-04-30T00:00:00.000 | {
"year": 2020,
"sha1": "5a9629b6daf57aee8739598beba5df016220b272",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1108/inmr-02-2019-0015",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fa064dce49977ee46993379956d1e7fff82f2b81",
"s2fieldsofstudy": [
"Sociology",
"Business"
],
"extfieldsofstudy": [
"Sociology"
]
} |
245576317 | pes2o/s2orc | v3-fos-license | Effect of interaction from the reaction of carboxyl/epoxy hyperbranched polyesters on properties of feedstocks for metal injection molding
The purpose of this study is to improve the properties of the feedstocks and the shape retention of debinded parts through the reaction between functional groups grafted onto 17-4PH stainless steel powders. Carboxyl-terminated hyperbranched polyester (CTHP) and epoxy-terminated hyperbranched polyester (ETHP) were used to treat the powders, which were termed CTHP-m and ETHP-m, carrying carboxyl and epoxy groups, respectively. Compared with the pristine, CTHP-m and ETHP-m feedstocks, the feedstock prepared from equal amounts of CTHP-m and ETHP-m (CTHP-m/ETHP-m) possessed more excellent properties. The experimental results showed that the critical solids loading, flexural modulus, density and melt flow index of the CTHP-m/ETHP-m feedstock were 63.8 vol%, 2800 MPa, 5.06 g cm−3 and 62 g/10 min, respectively, which were obviously higher than those of the others. Also, the shape retention of the CTHP-m/ETHP-m debinded parts was the best of all the samples. The improved properties of the CTHP-m/ETHP-m feedstock were attributed to the powder interaction between CTHP-m and ETHP-m formed by the chemical reaction between the epoxy and carboxyl groups.
Introduction
Metal injection molding (MIM) integrates the advantages of conventional plastic injection molding and powder metallurgy. It is a good way to produce parts with complicated shapes and high dimensional accuracy [1]. Metal injection molding has been used in various application fields, e.g., automotive technology, medical devices and consumer markets, for many years [2]. Better homogeneity and rheological properties of feedstocks [3], higher critical powder solids loading [4], mixing of appropriate types of powder [5], stronger interfacial bonding strength between powder and binder [6] and enhanced mechanical properties such as hardness and tensile strength [7] are all helpful to improve the shape retention of debinded parts and the dimensional accuracy of sintered parts.
To obtain excellent shape retention of debinded parts and high dimensional accuracy of sintered parts, a high critical solids loading is a basic requirement in MIM. Many researchers have found different influencing factors that can increase powder loading. Ma et al [8] investigated the influences of the ball milling technique on the characteristics of carbonyl iron and carbonyl nickel mixed powders and on the powder injection molding process. They found that the homogeneity and dispersion of the mixed powders improved after ball milling treatment, and the critical powder solids loading of the feedstock increased from 52% to 62%. Mukund et al [9] researched the effect of particle size and shape differences of water-atomized 17-4PH stainless steel powder on the critical solids loading of PIM parts. The results indicated that the increase in the population of the finer particle fraction, accompanied by a relatively more regular shape, improved the powder loading. Hausnerova et al [10] revealed that for coarser 17-4PH powder particles the critical solids loading was better in the case of gas-atomized powders, but water-atomized powders showed better performance for finer powders. Choi et al [11] designed a trimodal powder feedstock using Fe micro-powder and nano-powder agglomerates consisting of a bimodal particle distribution, and the powder solids loading was increased to 72 vol%.
The mechanical properties of green bodies and sintered parts are strongly influenced by different backbone polymers and powder compositions, and it is necessary to know the injection-molding behavior at different powder-binder compositions [12-14]. Higher strength is critical for producing MIM products with excellent dimensional accuracy. Huang et al [15] compared the influences of different types of backbone polymers on the mechanical performance of 316L stainless steel PIM green compacts and sintered parts. They found that the bending strength of the PIM compacts using high-density polyethylene as the backbone polymer in the binder was higher than that of compacts using low-density polyethylene, and the dimensional accuracy of the sintered parts was also better. Oh et al [16] researched the effects of nanoparticles in a bimodal powder containing both nanoparticles and microparticles on the strength of powder injection molded parts. The results showed that the powder loading and homogeneity of the feedstock decreased slightly owing to the existence of nanoparticles, but the nanoparticles greatly enhanced the mechanical strength, including the Vickers hardness, the tensile strength and the flexural strength and strain, of the PIM parts.
Surface treatment agents, including surfactants and coupling agents, are usually used as additives when mixing powder and binder to improve the interfacial strength and compatibility; they can also make the powder disperse evenly in the binder and increase the flowability of the feedstock. Wen et al [17] modified the surface of zirconia powders by adding a small amount of oleic acid (OA), and the polarity of the powder surface changed from hydrophilic to hydrophobic when mixing the binder and powders; the surface modification could improve the compatibility between binder and powder to obtain sintered parts with excellent properties. Wongpanit et al [18] studied the influence of acrylic-acid-grafted HDPE (AAHDPE), which acted as one portion of an HDPE-based binder, on the properties of MIM parts. The results showed that the mechanical properties and ductility of the green bodies and the distortion of the debinded parts were improved, which indicated that AAHDPE used as a compatibilizer in the binder improved the compatibility and packing stability between binder and powder. Liu et al [19] modified zirconia powder with a titanate coupling agent, which interacted with the powder surface through chemical bonding, and used it for ceramic injection molding. The results revealed that the powder modification was beneficial to improving the homogeneity and dispersity between powder and organic binders, and improved the performance of the water-based-binder ceramic injection molding system, including the flowability of the feedstock, the mechanical strength of the sintered parts, densification and grain refinement. Deng et al [20] treated ZrO2 ceramic powders with a silane coupling agent (A151) and investigated the injection molding and debinding processes. The results revealed that the melt flow rate (MFR) of the ZrO2 feedstocks increased, the bending strength of the green bodies was enhanced, and the weight removal ratio of the soluble binders and the rate of solvent debinding both increased after powder modification by the addition of A151. Lindqvist et al [21] modified silicon nitride powder with silane and titanate coupling agents and applied them in powder injection molding. Glycidoxytrimethoxysilane used in conjunction with tetrabutyl titanate successfully and drastically decreased the viscosity of the feedstock during injection molding. Qi et al [22] used the coupling agent vinyltrimethoxysilane (VTMS) to modify Ba(Mg1/3Nb2/3)O3 (BMN) ceramic powders and investigated the influence of powder modification on the compatibility between the modified filler (BMN) and pure polytetrafluoroethylene (PTFE). The results indicated that the particle modification method was beneficial to improving the compactness and uniformity of the composites. Raji et al [23] prepared polypropylene/clay nanocomposites in which the clay was modified by the organosilanes 3-aminopropyltriethoxysilane (APTES) and vinyltrimethoxysilane (VTMS), and the results revealed that the silylation of the clays was highly efficient in improving the properties of the silane-grafted clay nanocomposites, in terms of the interfacial adhesion strength between the organoclay and the PP matrix, and greatly improved the spatial dispersion and distribution.
Hyperbranched polymers, which have a high branching density with strong branching potential in each repeating unit [24], are a type of dendritic polymer different from common polymers, and they have attracted comprehensive attention for their good solubility in different types of organic solvents, good dispersity as fillers, relatively low melt viscosity and the multifunctionality of their active end groups [25]. Hyperbranched polymers can be used as additives or as a matrix to treat organic polymers or inorganic fillers to improve the compatibility of composite materials or the dispersibility of inorganic fillers. Shi et al [26] found that the covalent grafting of a hydroxyl-terminated hyperbranched polymer (HTHBP) onto the carbon fiber (CF) surface can significantly enhance the interfacial properties of composites. Jiang et al [27] used a high-molecular-weight, highly active terminal-hydroxyl hyperbranched polymer based on an aromatic polyamide (HPN202) as a modifier of a POM-based binder and investigated the catalytic debinding process and the properties of the feedstock in MIM. They found that the flowability of the feedstock, the mechanical strength of the green parts and the dimensional accuracy of the sintered parts were all well improved.
In our previous work, the silane coupling agents KH550 and KH560 were used to modify 17-4PH gas-atomized stainless steel powders, respectively. The residual silicon in the green parts may affect the properties of the sintered parts. In the present study, terminated hyperbranched polyesters containing only carbon, hydrogen and oxygen are used as surface treatment agents for the powder, and carboxyl groups and epoxy groups were grafted onto the surface of the 17-4PH stainless steel powders, respectively. The strong interaction among the powders comes from the reaction, as well as the hydrogen bonding, between the carboxyl and epoxy groups. The influence of the powder interaction on the properties of the feedstocks, especially the shape retention of the debinded parts after debinding, was investigated.
Experimental raw materials
The 17-4PH stainless steel powders (atomization atmosphere: nitrogen; microscopic shape: spherical; density: 7.88 g cm−3) were supplied by the Sandvik Osprey Company of the United Kingdom. Table 1 shows the main chemical compositions of the powder, which were taken from the manufacturer's data sheets. Figure 1 shows the morphology of the powder. The size distribution of the 17-4PH powders used in this experiment was analyzed by a particle size laser analyzer (MS-2000, Malvern, UK) and is shown in table 2.
Powder surface modification
Firstly, CTHP (0.3 wt% of the weight of the 17-4PH powders) was poured into a beaker containing acetone and stirred until the CTHP fully dissolved. An appropriate weight of 17-4PH powders was added to a high-speed mixer with a rotation rate of 800 rpm. The dissolved CTHP solution and the catalyst stannous octanoate (1 wt% based on the weight of CTHP) were both added gradually and uniformly to the high-speed mixer when the temperature of the powders reached 135°C, and the powders were stirred for 45 min. In order to eliminate the unreacted CTHP and extra solvent, the modified powder was first washed with acetone and filtered three times, and then dried in a vacuum oven at 85°C for 6 h. Secondly, ETHP (0.3 wt% of the weight of the 17-4PH powders), absolute ethanol and toluene (the molar ratio of absolute ethanol to toluene was 2:1) were added to a beaker and stirred until the ETHP fully dissolved. The 17-4PH powders were poured into the high-speed mixer (850 rpm). When the temperature of the 17-4PH powders reached 140°C, the ETHP solution and the catalyst 2,4,6-tris(dimethylaminomethyl)phenol (1.2 wt% of the weight of ETHP) were both added gradually and uniformly to the high-speed mixer and stirred for 1 h; the modified powders were then washed with absolute ethanol and toluene three times and dried in a vacuum oven at 110°C for 10 h. Finally, surface-modified 17-4PH powders with CTHP and ETHP were obtained, which were termed CTHP-m and ETHP-m, respectively.
Feedstock preparation
A batch mixer (XSS-300, Shanghai Science and Technology Rubber Machinery Corporation, China) was used to fabricate the feedstocks. The chamber of the mixer had a capacity of 54 cm3 with built-in thermocouples to monitor the melt temperature during mixing. The rotor speed and temperature when mixing the powder/binder were set at 50 rpm and 160°C, respectively. The binder ingredients, composed of HDPE and paraffin wax (volume ratio 7:3), were premixed in the mixing chamber, followed by the addition of the 17-4PH powders and mixing for 30 min. In particular, the catalyst 2,4,6-tris(dimethylaminomethyl)phenol (0.5 wt% of the weight of the binder) was also put into the chamber when mixing the feedstock filled with CTHP-m/ETHP-m (mass ratio 1:1). The mixing torque development with time was recorded. When the modified powders were used, the organic treatment agent connected with the modified powders was calculated as part of the binder.

Powder characterization

A similar grafting strategy has been applied to BCP bioceramic, where surface grafting can improve the interfacial compatibility of the BCP bioceramic with the biopolymer PLLA. In this study, the surface elements of the powders were evaluated by XPS spectra and the results are summarized in table 3. The surfaces of the powders were mainly composed of C, O and negligible Si. The content of carbon increased greatly from 37.31% to 56.03% and 53.1% for CTHP-m and ETHP-m, respectively. Figure 4 shows the XPS wide-scan and C1s curve-fit spectra of the powders. The peaks for Si 2p, C 1s and O 1s were centered near 102 eV, 285 eV and 530 eV, respectively. The C1s (1) peak at 284.8 eV in the fitted C1s spectrum of the pristine powder originated from sp2 hybridized graphitic carbon [30]; the C1s (2) peak at around 288.6 eV originated from a small amount of carbon in ester groups [31]. After surface modification, CTHP-m and ETHP-m showed a new characteristic peak at 286.2 eV in the C1s high-resolution spectra, which was assigned to -C-O- (ether) and -C-O- (alcohol) [32]. The above results indicated that CTHP and ETHP had been successfully connected to the surface of the 17-4PH powders, respectively.

Figure 5 shows the thermogravimetric analysis results of the powders. The residual mass percent at 800°C of the pristine powders, CTHP-m and ETHP-m was 99.73%, 99.67% and 99.65%, respectively. The hydroxyl groups and the small hyperbranched polyester residue on the surfaces of CTHP-m and ETHP-m were removed at 800°C. The grafting ratio of hyperbranched polyester on the surface of the 17-4PH powders could be calculated from the difference in residual mass percent between the pristine and modified powders. Therefore, the organic residues for CTHP-m and ETHP-m were 0.06% and 0.08%, respectively, which indicated that in the modified 17-4PH stainless steel powders a chemical bond is formed between the surface treatment agent (CTHP or ETHP) and the 17-4PH stainless steel powders. When preparing the feedstocks, the organic treatment agent connected with the modified 17-4PH powder was calculated as part of the binder.

The values of zeta potential represent the potential state of the particle surface and indicate the degree of stability or dispersion of the powder particles. Generally, the greater the absolute value, the more stable and better dispersed the system tends to be. Huo et al [33] used the long-chain surfactants sodium dodecyl sulfate (SDS) and cetyltrimethylammonium chloride (CTAC) to modify colloidal particles, and weak agglomerations formed, which was beneficial for the stability of the colloidal suspension system.
The results also showed that high absolute values of zeta potential (>40 mV) resulted in an unstable system due to the strong particle-particle repulsion force. Table 4 lists the zeta potential values of the powders in ethanol solvent. The absolute zeta potential value of ETHP-m was relatively lower than those of the pristine powders and CTHP-m, which indicated that there was a certain interaction among the ETHP-m particles. It is worth noting that the zeta potential value of CTHP-m/ETHP-m decreased to −1.16 mV when equal amounts of CTHP-m and ETHP-m were mixed in ethanol solvent, which indicated that the interaction in CTHP-m/ETHP-m was stronger and that hydrogen bonding existed between the carboxyl groups and epoxy groups on the surfaces of CTHP-m and ETHP-m, respectively.
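The grafting ratio estimated from the TGA curves above is a simple difference of residual masses; a minimal Python sketch, using the residual mass percentages reported in this section, makes the arithmetic explicit:

```python
# Residual mass percent at 800 °C from the TGA curves (values from the text).
residual = {"pristine": 99.73, "CTHP-m": 99.67, "ETHP-m": 99.65}

# Organic residue grafted on the powder surface = pristine residue - modified residue.
for powder in ("CTHP-m", "ETHP-m"):
    grafted = residual["pristine"] - residual[powder]
    print(f"{powder}: {grafted:.2f} wt% organic residue")
# -> CTHP-m: 0.06 wt%, ETHP-m: 0.08 wt%
```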
Feedstock properties
A higher critical powder solids loading is beneficial to improving the performance of the feedstock and the final parts. Hnatkova et al [34] studied the influence of stearic acid (SA) as a part of organic binders on the critical solids loading (CSL) and corresponding flowability of aluminum oxide (Al2O3) feedstocks. It was found that the apparent viscosities of the CIM feedstocks were reduced by adding SA. Su et al [35] investigated the effects of powder solids loading on the properties of ceramic parts made by soft molding. The results revealed that the density of green and sintered parts increased with increasing powder solids loading, while the linear shrinkage showed the opposite trend.

Figure 6 shows the values of mixing torque of the feedstocks versus increasing powder solids loading. It can be seen from the figure that the mixing torque of the feedstocks prepared using CTHP-m or ETHP-m was higher than that of the pristine powders. This might be because the organic hyperbranched polyester residue on the powder particle surface was regarded as a part of the binder, and the viscosity of the organic residue was higher than that of the binder. It is worth emphasizing that the mixing torque values of the feedstock obtained by adding equal amounts of CTHP-m/ETHP-m were lower than those of the others. This result might be due to the reaction between the carboxyl groups and epoxy groups on CTHP-m and ETHP-m, respectively. Therefore, CTHP-m and ETHP-m were more tightly packed through the reaction, so that some binder between the powders was released, as shown in figure 7, and the close combination of the powders reduced the contact area between the powder and the binder [36], thereby reducing the flow friction of the powders in the binder and increasing the flowability of the feedstock. In our previous study [37], the interaction between powders improved the fluidity of the feedstock; however, the silicon element in the surface treatment agent may not be conducive to the sintering process of the products.

Shin et al [38] indicated that mixing torque increases with increasing powder content, and increases sharply at the point of critical solids loading. Therefore, the change of mixing torque of the feedstocks prepared from CTHP-m/ETHP-m versus the increase of powder volume loading is plotted in figure 8, and the values of critical solids loading of the feedstocks are listed in table 5. The critical solids loading of the feedstocks prepared from pristine powders, CTHP-m and ETHP-m was 62.7 vol%, 63.3 vol% and 63.2 vol%, respectively, while that of the CTHP-m/ETHP-m feedstock was up to 63.8 vol%. This might be due to the reaction between the carboxyl groups on the surface of CTHP-m and the epoxy groups on the surface of ETHP-m. The interaction makes the powders connect more closely; what's more, hyperbranched polyesters themselves have a lot of free volume and space [26], and the powders could avoid a large extent of agglomeration, so both the flowability of the feedstock and the loading capacity of the powder increased. These results are consistent with figure 7.
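Since the critical solids loading is read off as the loading at which the mixing torque rises sharply, it can also be estimated numerically from torque-loading data. The sketch below is illustrative only (the data points are hypothetical, not the measured curve of figure 8) and locates the breakpoint of a two-segment linear fit by brute force:

```python
import numpy as np

# Hypothetical torque (N·m) vs powder loading (vol%) data, not the measured curve.
loading = np.array([56, 58, 60, 61, 62, 63, 63.5, 64])
torque = np.array([1.1, 1.3, 1.6, 1.8, 2.1, 2.6, 3.8, 6.0])

def two_segment_sse(k):
    """Sum of squared errors when fitting separate lines left/right of index k."""
    sse = 0.0
    for xs, ys in ((loading[:k], torque[:k]), (loading[k:], torque[k:])):
        coef = np.polyfit(xs, ys, 1)
        sse += np.sum((np.polyval(coef, xs) - ys) ** 2)
    return sse

# Try every admissible breakpoint and keep the one with the best combined fit.
best_k = min(range(2, len(loading) - 1), key=two_segment_sse)
print(f"estimated critical solids loading near {loading[best_k]} vol%")
```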
In order to conveniently study the effects of the powder reaction on the properties of the feedstock, according to table 5, the optimal powder loading could be determined as 62 vol%, which is slightly lower than the critical solids loading. The density and flexural modulus of the as-prepared feedstocks are listed in table 6. Compared with the feedstock fabricated from pristine powders, the density and flexural modulus of the feedstocks prepared from CTHP-m or ETHP-m obviously increased. It could be concluded that the method of powder surface modification with CTHP or ETHP was helpful to improve the compatibility and enhance the interfacial interaction strength between powder and binder. In particular, the green density and flexural modulus of the CTHP-m/ETHP-m feedstock were up to 5.06 g cm−3 and 2800 MPa, respectively; the reaction between CTHP-m and ETHP-m increased the density and enhanced the mechanical strength of the green parts.
As shown in figure 9, the melt flow index (MFI) of the feedstocks fabricated from CTHP-m and ETHP-m decreased compared to that of the pristine powders. This is because the flowability of the CTHP or ETHP residue on the modified powder particle surfaces was lower than that of the selected binder when mixing the feedstocks. Notably, the flowability of the feedstock prepared by adding CTHP-m/ETHP-m was obviously higher than that of the others; it was concluded that CTHP-m and ETHP-m were tightly packed when mixing the feedstock, releasing some binder as shown in figure 7 due to the reaction of CTHP-m/ETHP-m, and thus the rheological property improved.

MIM feedstocks generally tend to exhibit pseudo-plastic behavior over the range of shear rates, and the shear-thinning phenomenon is necessary to reduce the powder-binder segregation of feedstocks to be injected. For a pseudo-plastic fluid, the general relationship between viscosity and shear rate at different temperatures can be described by equation (1) [7]:

η = Kγ^(n−1)     (1)

where η is the viscosity of the feedstock, K is the material constant, γ is the shear rate, and n is the flow behavior index, which is smaller than 1. The value of n or (n−1) indicates the degree of shear sensitivity for evaluating the rheological properties of a feedstock. A relatively lower absolute value of n, or a higher absolute value of (n−1), is desirable for injection molding of complex precision parts, because the viscosity of the feedstock then shows a higher dependency on the shear rate [17], which reveals stronger pseudo-plasticity.

Figure 10 shows the relationship between viscosity and shear rate of the feedstocks in the temperature range from 140°C to 170°C. As seen in figure 10, with increasing shear rate, the viscosity of all feedstocks at different temperatures decreased rapidly, which showed pseudo-plastic fluid behavior, and the viscosity of each feedstock decreased linearly from 140°C to 170°C, which indicated that a higher temperature is more suitable for MIM. The values of the flow behavior index (n) of each feedstock at different temperatures can be calculated from the slope of the lg η versus lg γ curve (the value of n−1) by linear fitting, and are summarized in table 7. It can be seen from table 7 that all the flow behavior indices (n) at temperatures from 140°C to 170°C were lower than 1, which indicated that the four feedstocks had strong pseudo-plastic behavior. It is known that a lower value of n is important for producing complex MIM parts due to the higher shear sensitivity of the feedstock; this can ensure better flowability and filling during the molding process. Compared with the other feedstocks, the n values of the CTHP-m/ETHP-m feedstock were calculated to be 0.425, 0.431, 0.460 and 0.484 at 140°C, 150°C, 160°C and 170°C, respectively, which were lower than those of the feedstocks prepared from pristine powders, CTHP-m and ETHP-m. This experimental result might be caused by the strong interaction between CTHP-m and ETHP-m, and it could be expected to yield higher-quality parts with fewer defects.
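The flow behavior index n in equation (1) follows from a linear fit of lg η against lg γ, whose slope is (n − 1). A minimal Python sketch with hypothetical viscosity data (not the measured values behind table 7) illustrates the procedure:

```python
import numpy as np

# Hypothetical shear rate (1/s) and viscosity (Pa·s) pairs for one temperature.
shear_rate = np.array([100.0, 200.0, 500.0, 1000.0, 2000.0])
viscosity = np.array([420.0, 290.0, 175.0, 120.0, 83.0])

# Power law: eta = K * gamma**(n - 1), so lg(eta) is linear in lg(gamma).
slope, intercept = np.polyfit(np.log10(shear_rate), np.log10(viscosity), 1)
n = slope + 1
K = 10 ** intercept
print(f"flow behavior index n = {n:.3f}, material constant K = {K:.1f}")
```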
In addition to the shear rate, the effect of temperature on the viscosity of the feedstock is another significant factor for feedstocks in MIM. Normally, the relationship between feedstock viscosity and temperature can be expressed by the Arrhenius equation shown in equation (2):

η = B exp(E/RT)     (2)

where B is the reference viscosity, E is the flow activation energy, R is the gas constant, and T is the absolute temperature. In the Arrhenius equation, the E value suggests the sensitivity of viscosity to the temperature variation of the feedstock, which is an important influencing factor for the molding process in MIM. A lower value of E indicates that the viscosity is not so sensitive to the change of temperature, so the feedstock can flow smoothly into the mold over a broader temperature range. If the value of E is too large, it means that the viscosity is very sensitive to temperature; a too-large value of E would lead to problems during the injection molding process, such as stress concentration, which would result in cracks and distortions in the molded parts [39]. Therefore, a weaker temperature dependence is critical to produce MIM parts with high quality. The relationship between feedstock viscosity and temperature is shown in figure 11; the lower the E value, the better the rheological property. Compared with the other feedstocks, the E value of CTHP-m/ETHP-m is the lowest. This should be due to the interaction from the reaction of the carboxyl/epoxy hyperbranched polyesters, which resulted in a closer connection of CTHP-m and ETHP-m in the feedstock, thus reducing the friction between powder and binder, increasing the fluidity of the feedstock and decreasing the flow activation energy.

Figure 12 shows SEM micrographs of the fractured surfaces of the green parts. As seen, the compatibility between the modified powders and the binder was better than that of the pristine powders. In particular, the particles of the CTHP-m/ETHP-m feedstock connected more tightly (figure 12(d)) than the others (figures 12(a)-(c)), which is consistent with the zeta potential values (table 4) and the flexural modulus and density of the green parts (table 6). What's more, it can be observed that the number and size of the holes circled in the images decreased from figures 12(a) to (d); that is, the fractured surface of the CTHP-m/ETHP-m green part has the fewest holes and the smallest hole diameter, which indicated that the powder modification and the interaction from CTHP-m/ETHP-m can improve the compactness of the green parts and reduce defects.

Figure 13 shows SEM micrographs of the surfaces of the green parts. As shown in figure 13, there were more obvious holes on the surface of the green parts prepared from the pristine powders, which indicated poor compatibility between the pristine powders and the HDPE/paraffin wax binder. However, there were fewer holes on the surfaces of the green parts prepared from CTHP-m and ETHP-m, which revealed that the method of powder surface modification could improve the compatibility of powder and binder. Moreover, the surface of the green part prepared from CTHP-m/ETHP-m was smoother than that of the others, with no holes on the surface. The above results demonstrated that the strong interaction between CTHP-m and ETHP-m could not only promote the compatibility of powder and binder, but also make the connection between the powder and binder closer and easier to form a uniform microstructure.
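Returning to the Arrhenius analysis of equation (2) above: the flow activation energy E is obtained from the slope of a linear fit of ln η against 1/T. The following sketch uses hypothetical viscosities at the four test temperatures (not the measured data of figure 11) purely to illustrate the fitting step:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical viscosities (Pa·s) at a fixed shear rate; temperatures in kelvin.
T = np.array([140.0, 150.0, 160.0, 170.0]) + 273.15
eta = np.array([260.0, 215.0, 180.0, 152.0])

# Arrhenius form: eta = B * exp(E / (R * T)), so ln(eta) is linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(eta), 1)
E = slope * R           # flow activation energy, J/mol
B = np.exp(intercept)   # reference viscosity
print(f"flow activation energy E = {E / 1000:.1f} kJ/mol")
```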
Figure 14 shows SEM pictures of the cross-sectional fractured surfaces of the debinded parts. These images show little difference among the debinded parts, and this phenomenon indicated that powder modification did not affect the state of the powders. Compared with the other debinded parts (figures 14(a)-(c)), the powders connected more closely and the surface of the debinded part had fewer holes, as seen in figure 14(d). The results demonstrated that the reaction between CTHP-m and ETHP-m was beneficial to improving the properties of the feedstock and reduced the defects of the debinded parts.
Shape retention after thermal debinding
In MIM, good shape retention of debinded or sintered parts is an important requirement for obtaining final products of high quality. Figure 15 shows the front and side views of the debinded parts, and the length change rate of the debinded parts is given in table 9. The length of the green parts prepared by compression molding was lower than that of the corresponding debinded parts after thermal debinding; that is, the green parts expanded during debinding. Powder modification greatly decreased the extent of this expansion, and the CTHP-m/ETHP-m green parts had the minimum length change rate after thermal debinding. The front views of all the debinded parts showed no significant bubbling or surface defects. However, bulges and rough surfaces appeared on the pristine and CTHP-m debinded parts, and their degree of distortion was higher than that of the ETHP-m and CTHP-m/ETHP-m debinded parts. The slighter deformation and distortion of the ETHP-m debinded parts resulted from the better compatibility between the modified powders and the binder. In particular, because the interaction of the carboxyl/epoxy hyperbranched polyesters makes the modified powders closely connected and thereby reduces debinding deformation, the CTHP-m/ETHP-m debinded parts had the lowest length change rate, which is critical for improving the precision and quality of the final products.
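The length change rate quoted in table 9 is presumably the usual dimensional-change measure (the exact definition is not given above):

```latex
\Delta L\,(\%) \;=\; \frac{L_{\text{debinded}} - L_{\text{green}}}{L_{\text{green}}} \times 100\%
```

with positive values corresponding to the expansion observed after thermal debinding.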
Conclusion
The objective of the work presented herein was to analyze and compare the influence of the interaction arising from the reaction of carboxyl/epoxy hyperbranched polyesters on the properties of feedstocks prepared from pristine powders, CTHP-m, ETHP-m, and CTHP-m/ETHP-m. The XPS and TG results demonstrated that the carboxyl and epoxy groups were successfully attached to the 17-4PH powder surfaces by CTHP and ETHP modification, respectively. Owing to the reaction between CTHP-m and ETHP-m, the critical powder solids loading, flowability, green density, and flexural modulus of the feedstock prepared from CTHP-m/ETHP-m were higher than those of the pristine, CTHP-m, and ETHP-m feedstocks. Compared with the pristine, CTHP-m, and ETHP-m feedstocks, the CTHP-m/ETHP-m feedstock possessed a critical solids loading, flexural modulus, green density, and melt flow index of 63.8 vol.%, 2800 MPa, 5.06 g cm−3, and 62 g/10 min, respectively, all higher than those of the others. It is worth emphasizing that the shape retention of the debinded parts prepared from CTHP-m/ETHP-m was the best of all, which will help to improve the properties of MIM feedstocks and the precision and quality of the final products. | 2021-12-31T16:07:25.027Z | 2021-12-29T00:00:00.000 | {
"year": 2022,
"sha1": "7a06f5598e6308ca5ca2fe1396148c3bb0d52cd9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2053-1591/ac46e5",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a8120c78f6fbddaeb717055acd39e7710f8093ba",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
263956556 | pes2o/s2orc | v3-fos-license | Haloperidol-Midazolam vs. Haloperidol-Ketamine in Controlling the Agitation of Delirious Patients; a Randomized Clinical Trial
Introduction: Agitation management in delirious patients is crucial in a crowded emergency department (ED) for both patient and personnel safety. Benzodiazepines, antipsychotics, and, more recently, ketamine are among the most commonly used drugs in controlling these cases. This study aimed to compare the effectiveness of the haloperidol-midazolam combination with the haloperidol-ketamine combination in this regard. Methods: In this double-blind randomized clinical trial, delirious patients with agitation in the ED were randomly assigned to one of two groups: group A, haloperidol 2.5 mg IV and midazolam 0.05 mg/kg IV, or group B, haloperidol 2.5 mg IV and ketamine 0.5 mg/kg IV. Sedative effects as well as side effects at 0, 5, 10, 15, and 30 minutes and 1, 2, and 4 hours after the intervention were compared between the 2 groups. Results: We enrolled 140 cases with an Altered Mental Status Score (AMSS) ≥ +2 and a mean age of 52.8±19.4 years (78.5% male). Agitation was significantly controlled in both groups (p<0.05). In group B, the AMSS score was reduced more significantly and more rapidly at 5 (p = 0.021), 10 (p = 0.009), and 15 (p = 0.034) minutes after drug administration. After the intervention, oxygen saturation was significantly decreased in group A at 5 (p = 0.031) and 10 (p = 0.019) minutes after baseline. The time required to reach the maximum effect was significantly lower in group B than in group A (p=0.014). Fewer patients in group B had major side effects (p=0.018) or needed physical restraint (p=0.001). Conclusions: Haloperidol-ketamine can control agitation in delirium more rapidly than haloperidol-midazolam. This combination had fewer adverse events and a lower need for physical restraint.
Introduction
Delirium designates an acute, transient clouded state of mind with cognitive disruption and confusion (1). Disturbance in consciousness and inattention are the hallmarks of delirium (2,3). Thus, many such patients are referred to the emergency department (ED) for urgent intervention to control agitation (4). Generally, agitated patients can manifest overtly violent behaviors leading to injuries to themselves, other patients, medical staff, and their surrounding environment (5,6). This extreme restlessness, accounting for 2.6% of ED encounters, is an obstacle to the provision of timely and appropriate medical services (7). The most important initial steps in controlling such patients are verbal de-escalation techniques and physical and chemical restraints (8,9). Administering parenteral sedatives can reduce agitation more rapidly and facilitates more efficient control of agitated cases (10). Severe agitated/excited delirium, if left untreated, can cause metabolic derangement, cardiac arrest, and death (11). Benzodiazepines, antipsychotics, and their combination are commonly used in EDs as the main drugs for controlling agitation (12). Both classes have major side effects (13,14). Midazolam, a short-acting anxiolytic agent, has amnestic, hypnotic, and sedative effects, can be given by different routes (intravenous (IV), intranasal (IN), and intramuscular (IM)), and provides desirable sedation in less than 20 minutes. Haloperidol is a first-generation antipsychotic with oral, IV, and IM administration routes; it takes almost 30 minutes to show its sedative effect (15,16). Ketamine, a highly dissociative sedative, provides rapid and safe control of agitated and violent cases in the ED with lower rates of adverse events (17-20). Its low dose (1-2 mg/kg) is usually used as a second-line agent when previous tranquilizers fail (21). It has a rapid onset of action of around 2 minutes (IV) and 5 minutes (IM) (22). Research in this field recommends that further studies be performed to determine the best drug option when facing agitation in an emergency situation. Many factors are involved in making the best decision: the patient's situation, age, initial medical diagnosis, underlying diseases, and available resources. Considering that midazolam can cause respiratory apnea, haloperidol can cause extrapyramidal reactions, and ketamine can cause the emergence phenomenon (13-17), we used combination regimens to see whether we could reach the best combination with the fewest adverse events in controlling agitation in delirious cases, mostly in the elderly age range. Since data on dealing with agitation in delirious patients in the ED are scarce, we designed this study to evaluate the effectiveness of the haloperidol-midazolam combination versus the haloperidol-ketamine combination in controlling agitation in delirious patients in the ED.
Study design and setting
The present study, a double-blind randomized clinical trial, was performed on delirious patients with agitation in the EDs of Shariati, Sina, and Imam Khomeini Hospitals from January to December 2020. The study protocol was approved by the Ethics Committee of Tehran University of Medical Sciences (ethics code: IR.TUMS.MEDICINE.REC.1397.532) and registered in the Iranian Registry of Clinical Trials with code IRCT20120130008872N13. Informed written consent was obtained from patients' guardians.
Participants
Patients older than 18 years with delirium and agitation (Altered Mental Status Score (AMSS) ≥ +2) (23-25) were enrolled in our study and randomly allocated to either group A (haloperidol 2.5 mg IV and midazolam 0.05 mg/kg IV, max dose 3 mg) or group B (haloperidol 2.5 mg IV and ketamine 0.5 mg/kg IV, max dose 75 mg). Pregnant cases; patients with a history of severe head trauma, suspected high intracranial pressure, or a history of epilepsy; patients in shock or with hemodynamic instability; cases unwilling to participate in the study; and those given a tranquilizer in out-of-hospital settings before arrival were excluded.
Data gathering
Basic demographic data; past medical and habit histories (underlying diseases such as cardiovascular, cerebrovascular, diabetic, and neurologic diseases, and allergy); vital signs; intervention side effects; time to maximum effect; number of repeated doses required; number of cases needing physical restraint; and the AMSS score were assessed during the study. Vital signs and the AMSS score were recorded at 0, 5, 10, 15, and 30 minutes and 1, 2, and 4 hours after the intervention. The AMSS is an ordinal scale of agitation from −4 (unresponsive) to +4 (combative). Severe agitation is defined as an AMSS score of +2 or +3, and profound agitation as an AMSS score of +4. We defined the "time required to the maximum effect" as the time needed to reach an AMSS score below +2 and to decrease the AMSS by at least 1 unit. The side effects assessed in this study were respiratory apnea, hemodynamic instability (drop in systolic blood pressure (SBP) to <90 mmHg), extrapyramidal reactions (occurrence of stiffness, restlessness, and tremor), and the emergence phenomenon (occurrence of new agitation, hallucinations, and illusions). An emergency physician examined the patients and recorded all these adverse effects.
Procedure
The method of sampling was block randomization based on a random numbers table; two blocks of 35 were created, from zero to 70, in a random way. Patients were randomly assigned to one of the two blocks based on the order of numbers. The study was double-blinded: neither the patient nor the emergency physician was aware of the randomization or of the sedation prescribed in each group. The drug syringes were covered to hide differences in color and volume. The triage nurse administered the drug, and the emergency physician diagnosed and evaluated the patient and recorded all study variables at the specified times. If the patient remained agitated (AMSS ≥ +2) 15 minutes after drug administration, a repeated dose of the same combination was prescribed in both groups. If a patient did not achieve the optimal sedation goal after 4 hours, alternative sedatives (such as diazepam, etomidate, . . . ) would be used to control agitation. All patients were closely and continuously monitored for side effects, apnea, and hemodynamic changes.
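A minimal sketch of the block-randomization idea described above, assuming a simple balanced design; the block size and the `block_randomize` helper are illustrative choices, not the authors' exact allocation procedure.

```python
import random

def block_randomize(n_patients, block_size=4, arms=("A", "B")):
    """Assign patients to arms in shuffled blocks so group sizes stay balanced."""
    assert block_size % len(arms) == 0  # each block holds equal numbers per arm
    assignments = []
    while len(assignments) < n_patients:
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)  # randomize order within the block
        assignments.extend(block)
    return assignments[:n_patients]

# Example: allocate 140 patients to haloperidol-midazolam (A) vs
# haloperidol-ketamine (B); every block of 4 contains two of each arm.
allocation = block_randomize(140)
print(allocation[:8], "A:", allocation.count("A"), "B:", allocation.count("B"))
```

Balanced blocks keep the two arms the same size throughout enrollment, which matters when recruitment may stop early.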
Outcomes
Primary outcomes were the AMSS score and vital signs, compared within and between the 2 groups. Secondary outcomes were the side effects, time to maximum effect, and number of repeated doses, compared between the 2 groups. Patient surveillance and follow-up for side effects and other secondary outcomes, such as physical restraint and repeated dose requirements, continued for up to 6 hours.
Statistical analysis
With an assumed average baseline AMSS score of 3 (SD=1), α=0.05, and β=0.1 (26), we calculated the sample size; 50 patients in each group were required to detect a 1-point difference in AMSS scores between the 2 groups. All data were analyzed using SPSS V.25 software. Descriptive data are presented as mean ± standard deviation (SD). We conducted a Kolmogorov-Smirnov (KS) test, and all data had a normal distribution. Analytical statistical tests included the two-tailed t-test for continuous variables. Chi-square and Fisher's exact tests were used to compare proportions of the qualitative variables. Repeated measures analysis of variance (ANOVA) was used to determine differences within each group. The level of significance was 0.05. We performed analyses on an intention-to-treat basis. For presenting the effects, the number needed to treat (NNT), number needed to harm (NNH), absolute risk reduction (ARR), and relative risk reduction (RRR) with 95% confidence intervals (CI) were calculated and reported.
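For reference (and as a general sketch only, since the authors' exact inputs and any attrition allowance are not stated), sample-size calculations for a difference in means between two groups typically use the standard formula:

```latex
n_{\text{per group}} \;=\; \frac{2\,\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}\,\sigma^{2}}{\delta^{2}}
```

where σ is the assumed standard deviation of the AMSS score and δ is the smallest between-group difference to be detected.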
Baseline characteristics of studied cases
Overall, 140 patients with delirium and agitation were included in this study based on the emergency physician's diagnosis and the study inclusion criteria (the flow diagram of the study is shown in figure 1). The mean age was 52.8±19.4 (range: 31-78) years (78.5% male). Baseline characteristics of the patients showed no significant differences between the 2 groups (Table 1).
Outcomes
Comparison of studied outcomes between groups is shown in tables 2 and 3.
Vital signs
Pulse rate (PR) significantly improved within each group (group A, p=0.046; group B, p=0.019). In group B, PR reduction was more significant than in group A at 5 (p=0.049) and 10 (p=0.050) minutes after drug administration. All these variables indicate that agitation was controlled more rapidly in group B. After the intervention, oxygen saturation (SPO2) was significantly lower in group A than in group B at 5 (p=0.031) and 10 (p=0.019) minutes after baseline.
Time to maximum effect
The time required to reach the maximum effect was significantly lower in group B than in group A (p=0.014). Incidentally, half of the patients (50%) in both groups needed repeated doses to achieve agitation control (p=0.068). None of our cases needed alternative sedatives after 4 hours. Fewer patients needed physical restraint in group B (p=0.001).
Side effects
More cases in group B had no side effects compared with group A (p=0.018). In group A, 11 patients experienced hemodynamic changes, 4 experienced extrapyramidal reactions, and 9 had apnea (mostly transient and resolved with oxygen, non-invasive modalities, and airway maneuvers; only 3 cases needed intubation). In group B, 5 patients experienced the emergence phenomenon and 1 an extrapyramidal reaction. The NNH for experiencing a side effect was 3.8 (95%CI: 2.5 to 7.8).
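As a rough consistency check on the reported NNH, assuming (hypothetically, since neither is stated explicitly) one adverse event per affected patient and 70 patients per arm:

```latex
\mathrm{ARR} \;\approx\; \frac{24}{70} - \frac{6}{70} \;=\; \frac{18}{70} \;\approx\; 0.26,
\qquad
\mathrm{NNH} \;=\; \frac{1}{\mathrm{ARR}} \;\approx\; 3.9
```

which is close to the reported value of 3.8.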
Discussion
In the present study, we compared the sedative effectiveness of haloperidol-midazolam versus haloperidol-ketamine in controlling agitation in the delirium state in the ED. We found that the latter combination decreased the AMSS score more rapidly than the former at 5, 10, and 15 minutes after drug administration. The time required to reach the maximum effect (lowering the AMSS score below +2 and by at least 1 unit) was significantly lower in group B than in group A (p=0.014). Side effects and physical restraint were less common in group B than in group A. Emergency physicians often encounter acute agitation in different groups of patients, who can harm themselves and cause chaos in the ED. A wide array of factors is involved in disorganized and violent behavior, including drug overdose, chemical intoxication, psychiatric disorders, and acute medical illnesses like delirium (5,6). Similar studies evaluating agitation control in the ED concluded that the time to adequate sedation for ketamine alone is 4.2 to 7.7 minutes (27-29). In our study, the time to maximum effect for IV haloperidol-ketamine was 3.19±0.7 minutes. Many studies have confirmed the faster sedative effect of ketamine in agitation control in the ED and have even suggested the possibility of using ketamine as a first-line agent (30).
Heydari et al. in 2018, compared the effects of IM ketamine versus IM haloperidol on acutely agitated patients in ED.
They revealed that the mean time to adequate sedation (AMSS score < +1) in the ketamine group (7.73±4.71 minutes) was significantly lower than in the haloperidol group (11.42±7.20 minutes) (p=0.005). Fifteen minutes after the intervention, the sedation scores did not differ significantly between the two groups (p=0.167) (29). Our results with IV combination administration were similar. Cole et al., in 2016, conducted a prospective study of agitation control in the prehospital setting and reported that IM ketamine was significantly superior to IM haloperidol in terms of time to adequate sedation: the median time was 5 minutes for ketamine and 12 minutes for haloperidol (p<0.0001). In their study, more patients in the haloperidol group needed additional sedation with midazolam, while ketamine was associated with a higher intubation rate of 39% versus 4% (p<0.0001) (22). In our study, only 3 cases, all in the haloperidol-midazolam group, needed intubation. Li et al., in 2020, determined the effect and safety of 1 mg/kg IV and 2 mg/kg IM ketamine in excited delirium. They found that ketamine significantly reduced agitation (Richmond Agitation Sedation Scale) (p=0.001) and reported a lower incidence of adverse events (including intubation) than previous studies; it seemed that most of these effects occurred at higher doses (31). We administered lower doses of ketamine as the sedative agent and also set a maximum dose in order to avoid major side effects. Lin et al., in 2020, compared the efficacy and safety of ketamine (4 mg/kg IM or 1 mg/kg IV) versus haloperidol (5-10 mg IV or IM) plus lorazepam (1-2 mg IV or IM) for initial control of acute agitation. They found that more patients in the ketamine group were sedated at 5 and 15 minutes (p=0.001 and <0.001, respectively), and the median time to sedation was lower in the ketamine group than in the comparator group: 15 versus 36 minutes (p<0.001) (32). Their findings were similar to ours. In contrast to the few emergence phenomena seen in our study, the authors of that study did not report any major side effects even at higher doses of ketamine. They also found that ketamine was associated with tachycardia and hypertension and with a nonsignificant increase in hypoxia. We did not observe such findings; rather, in group B, PR reduction was more significant than in group A at 5 and 10 minutes after drug administration (p=0.049 and 0.050, respectively), and SPO2 was significantly decreased in group A compared with group B at 5 and 10 minutes after baseline (p=0.031 and 0.019, respectively).
Limitations
Most of our patients were in an older age range compared with previous studies. We tried to compensate for most of the limitations of previous studies, for example by using a larger sample size and a prospective design.
Conclusion
Our study found that haloperidol-ketamine can control agitation in delirium more rapidly than haloperidol-midazolam. This combination had fewer adverse events and a lower need for physical restraint.
Conflict of interest
The authors had no conflicts of interest.
Funding Sources
None.
Authors' contribution
EV, ZN and MS conceived the study, designed the trial, and supervised the conduct of the trial and data collection. MA and HA undertook recruitment of participating centers and patients and managed the data, including quality control. ZN and MS provided statistical advice on study design and analyzed the data. EV drafted the manuscript, and all authors contributed substantially to its revision. EV takes responsibility for the paper as a whole. All authors read and approved the final version of the manuscript.
Using artificial intelligence chatbots
None.
Data availability
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
2 P-value of intergroup changes during the study based on repeated measures ANOVA. 3 P-value of intergroup changes at specific time intervals during the study based on t-test.
Table 3: Comparison of secondary outcomes between groups A (Haloperidol-Midazolam) and B (Haloperidol-Ketamine). Data are presented as mean ± standard deviation (SD) or frequency (%).
Figure 1: Flow diagram of the study.
Table 1: Comparing the baseline characteristics between the two groups. Data are presented as mean ± standard deviation (SD) or frequency (%). AMSS: Altered Mental Status Score. Group A received Haloperidol-Midazolam and group B received Haloperidol-Ketamine.
Table 2: Comparison of primary outcomes within and between groups A (Haloperidol-Midazolam) and B (Haloperidol-Ketamine). Data are presented as mean ± standard deviation (SD). AMSS: Altered Mental Status Score. Min: minute. 1 P-value of intragroup changes during the study based on repeated measures ANOVA. | 2023-10-14T05:11:51.884Z | 2023-08-26T00:00:00.000 | {
"year": 2023,
"sha1": "7f264fbef136311880b9d565175b582352172b40",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7f264fbef136311880b9d565175b582352172b40",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
262203209 | pes2o/s2orc | v3-fos-license | AmpC β-Lactamases in Enterobacteriaceae - A Mini Review
: Beta-lactamases are enzymes that confer resistance to β-lactam antibiotics. Both Gram-positive and Gram-negative bacteria produce these enzymes. There are about 3000 such enzymes, which initially emerged in environmental bacteria to protect themselves from natural β-lactams. After the 1980s, many transmissible enzymes resistant to cephalosporins, monobactams, and carbapenems were detected periodically. These enzymes have been classified based on function and molecular structure. Among them, AmpC β-lactamases were found to be resistant to β-lactams and β-lactamase inhibitors. They are class C cephalosporinases that confer resistance to first-, second-, and third-generation cephalosporins and cephamycins, as well as to beta-lactamase inhibitors such as sulbactam, tazobactam, and clavulanic acid. The family Enterobacteriaceae comprises many organisms that cause community and nosocomial infections, such as Escherichia coli, Klebsiella pneumoniae, Citrobacter spp., Enterobacter aerogenes, and Salmonella species. Beta-lactamases are produced by Enterobacteriaceae, and AmpC beta-lactamases are one of the mechanisms. AmpC beta-lactamases occur in different forms: chromosomal enzymes overexpressed through mutation/attenuation, inducible chromosomal enzymes, and plasmid-mediated AmpC beta-lactamases. Some Enterobacteriaceae, like Enterobacter, carry the gene on their chromosome, and other Enterobacteriaceae have plasmid-mediated AmpC beta-lactamases. This type of resistance has led to increased mortality and morbidity. Detecting these AmpC beta-lactamases in diagnostic settings is challenging: detection is cumbersome, and no approved methods are found in CLSI guidelines. Meanwhile, the prevalence of AmpC beta-lactamases has increased drastically in Asia. This review aims to give an overview of AmpC β-lactamases; its objective is to review the evolution, types, detection methods, recent world epidemiology, treatment options, and current updates of the AmpC beta-lactamases.
ORIGIN
The history of these β-lactamases goes back about two billion years, even before the introduction of antibiotics into medicinal use, reflecting the development of resistance components against natural β-lactams created by organisms for survival (Hall B G., 2004). The first such enzyme, capable of degrading penicillin, was reported in E. coli in 1940. 2 AmpC was assigned to molecular class C in the Ambler molecular classification of β-lactamases. Its active site is a serine residue in the protein; a serine also forms the active site of ESBLs, but the protein sequences differ, leading to differences in β-lactam degradation. 3 AmpC enzymes belong to group 1 in the functional classification of β-lactamases. 4 The chromosomal genes found in some enteric microbes were transferred to other enteric microbes through plasmids, leading to the formation of plasmid-mediated β-lactamases (Table 1 & Table 2). The prevalence of AmpC is less prominent than that of ESBL. Table 1 lists the names of the different plasmid-mediated AmpC beta-lactamases and the genetic origins of these genes in the organisms with inducible chromosomal enzymes. Table 2 lists the origins of the different AmpC beta-lactamases: where each was first identified and from which nation and organism it was isolated. There are about six families of plasmid-mediated AmpC beta-lactamases.
HYDROLYTIC PROPERTIES OF AmpC BETA LACTAMASES
AmpC beta-lactamases are class C cephalosporinases found on the chromosomes of many Gram-negative organisms. They are resistant to penicillins; first-, second-, and third-generation cephalosporins; aztreonam; and cephamycins, as well as to beta-lactamase inhibitors such as tazobactam, clavulanic acid, and sulbactam, but they are generally sensitive to carbapenems and fourth-generation cephalosporins. Apart from chromosomal AmpCs, many plasmid-mediated AmpCs have emerged; these are grouped into six families.
Genes encoding AmpC β-lactamases are located on the chromosomal DNA of important Gram-negative organisms, especially enteric bacteria. Organisms that carry chromosomal AmpC include S. marcescens, Enterobacter spp., C. freundii, and Morganella morganii. The level of expression of chromosomal AmpC genes differs among species. This type of expression is common in clinical settings.
The AmpC genes are overexpressed by different mechanisms: constitutive and inducible. Organisms like E. coli do not have an inducible AmpC gene; mutations in their attenuator or promoter region lead to overexpression of AmpC. Genes for AmpC production are present in the chromosomes of many species 5, but the enzyme is produced at very low, often undetectable levels. Mutation in the AmpC promoter region leads to overexpression of the AmpC enzyme, which results in treatment failure. E. coli can also obtain genes for AmpC enzyme production from other species. Some organisms (e.g., E. cloacae, C. freundii, S. marcescens, or P. aeruginosa) have inducible AmpC genes on the chromosome, and mutants that overexpress them are called derepressed mutants. The regulation of AmpC is closely related to the cell wall recycling process. During cell wall synthesis, Gram-negative bacteria such as E. coli degrade 40-50% of their peptidoglycan. The degradation products (1,6-anhydromuropeptides) are recycled: they are transported from the periplasm into the cytoplasm through a transmembrane permease (AmpG) and hydrolyzed by AmpD, a cytoplasmic amidase, for further recycling, producing UDP-pentapeptides that are transported to the periplasm; under these conditions, only a few 1,6-anhydromuropeptides remain available to AmpR to transcribe blaAmpC and produce AmpC β-lactamases.
Induced
In the presence of β-lactams, anhydromuropeptides increase in the periplasm and are transported to the cytoplasm. AmpD recycles the anhydromuropeptides present in the cytoplasm. The excess anhydromuropeptides bind to the transcriptional regulator of AmpC (AmpR), which then drives production of the AmpC β-lactamases.
Derepressed
1. In AmpR mutants, the cell wall degradation products behave like AmpR-activating ligands, causing AmpC expression.

2. In AmpD mutant strains, unhydrolyzed anhydromuropeptides accumulate in the cytoplasm, leading to AmpR activation and resulting in semi-constitutive or constitutive expression of AmpC. AmpD-dependent constitutive overexpression of AmpC occurs at a frequency of 10−6 in defective strains. The regulation of AmpC production is depicted in figure 1.
PLASMID-MEDIATED AmpC β-LACTAMASES (PMABL)
It was identified first in 1979 in Proteus mirabilis 6 and later recovered from a K. pneumoniae wound isolate and named CMY-1 7. CMY-1 was able to degrade cephamycins and was not inhibited by sulbactam, tazobactam, or clavulanate; MIR-1 8, identified subsequently, had a similar profile. This gene has 90% identity with the Enterobacter cloacae chromosomal gene. In 1989, another enzyme, isolated in the UK from a patient from Pakistan, was named BIL-1 9; it was isolated from E. coli and was subsequently shown to transfer resistance to three different genera. Many more plasmid-mediated genes were identified and named in later years. The CMY-2 type is the most widely reported PMABL worldwide.
EPIDEMIOLOGICAL FEATURES 12
Plasmid-mediated LAT-2 (CMY-2) enzymes were discovered in clinical isolates of Enterobacter aerogenes in Greece and of K. pneumoniae and E. coli in France 13, and plasmid-mediated ACC-1 was discovered in clinical urine isolates of P. mirabilis and E. coli in France 13. Ceftriaxone-resistant Salmonella arose in the US due to plasmid-mediated CMY-2 beta-lactamases, in symptomatic individuals in eight separate states between 1996 and 1998 14. Several Salmonella serotypes were found in pigs and cattle 15. A 12-year-old child from Nebraska was infected with a ceftriaxone-resistant S. enterica serotype Typhimurium spread from calves 16. A startling trait of bacteria that produce plasmid-determined cephalosporinases is their widespread dissemination: they have been discovered in North America, Saudi Arabia, Algeria, Tunisia, India, Japan, Pakistan, South Korea, France, Germany, Greece, Italy, Sweden, the United Kingdom, Argentina, Guatemala, and the United States. Similar to the importation of ESBL-producing strains, AmpC genes have been imported between Asian and European countries. Most organisms producing plasmid-determined AmpC enzymes have been obtained from hospitalized patients (critical care, surgery, cancer, and transplantation patients), apart from a few Salmonella strains and sporadic K. pneumoniae isolates. The organisms were obtained mainly from blood, wounds, sputum, or stool. Cefoxitin, moxalactam, cefmetazole, cefotetan, and imipenem were among the β-lactam antibiotics with which most patients had been treated. Many bacteria with AmpC enzymes also produce ESBLs, such as TEM-1, TEM-3, SHV-5, and CTX-M. K. pneumoniae produces sporadic AmpC outbreaks, e.g., with MIR-1 (CMY-2)-like enzymes, ACC-1, and ACT-1.
SUSCEPTIBILITY PATTERNS
Strains with plasmid-mediated AmpC enzymes were consistently resistant to aminopenicillins (ampicillin or amoxicillin), carboxypenicillins (carbenicillin or ticarcillin), and ureidopenicillins (piperacillin); among the penicillins, these strains were susceptible only to amdinocillin or temocillin. The enzymes provided resistance to the cephalosporins of the oxyimino group (ceftazidime, cefotaxime, ceftriaxone, ceftizoxime, cefuroxime) and of the 7-α-methoxy group (cefoxitin, cefotetan, cefmetazole, moxalactam). MICs were usually higher for ceftazidime than for cefotaxime and cefoxitin. The strains were susceptible to cefepime, cefpirome, and carbapenems. Changes in antibiotic accessibility to the enzyme can significantly alter the susceptibility profile: imipenem MICs can reach 64 μg/ml, and meropenem MICs 16 μg/ml, in K. pneumoniae strains containing plasmids encoding AmpC enzymes when outer membrane porin channels are lost. Cefepime and cefpirome MICs in these isolates become inoculum dependent, and at inocula of 10^7/ml, MICs can approach 256 μg/ml 17. More recently, even cefepime resistance has developed. Although AmpC enzymes resist sulbactam and tazobactam, producers may remain susceptible to piperacillin-tazobactam.
ENZYMATIC PROPERTIES
The pI range for plasmid-mediated AmpC-type β-lactamases is 6.4 to 9.4. After isoelectric focusing, AmpC enzymes in clinical isolates with multiple β-lactamases can be detected by differential suppression of nitrocefin reactivity with 5 mg of cefoxitin/ml 18 or by bioassay-based detection of cefoxitin hydrolysis. While a small number of plasmids, such as those encoding DHA-1 and DHA-2, include both AmpR and AmpC genes and are inducible, the majority of plasmid-mediated AmpC genes, such as MIR-1, are expressed constitutively in the absence of a complete induction system. Plasmid-mediated AmpC β-lactamases have apparent molecular weights ranging from 38 to 42 kDa and contain 378 to 386 amino acid residues. The relative Vmax values for cephalothin and cephaloridine were higher than those for ampicillin and penicillin, activity was greater with penicillin than with ampicillin, and hydrolysis rates were low for oxyimino- and methoxy-compounds. The Km values for cefoxitin, cefotetan, cefotaxime, moxalactam, or aztreonam, on the other hand, were often lower than those for penicillin or ampicillin and significantly lower than those for cephaloridine, cephalothin, or cefepime. Like group 1 cephalosporinases, plasmid-mediated AmpC enzymes were inhibited by low concentrations of aztreonam, cefoxitin, or cloxacillin and only by high concentrations of clavulanate; sulbactam and, particularly, tazobactam were more effective inhibitors. The amino acid sequences of the enzymes reveal an active-site serine in the motif Ser-X-X-Lys (where X is any amino acid) at residues 64 to 67 of the mature protein. A Lys-Ser/Thr-Gly motif is found at residues 315 to 317 and plays an essential role in forming the tertiary structure of the active site. A tyrosine residue at position 150 forms part of the class C-typical motif Tyr-X-Asn and is also important (but not essential) for the catalysis of β-lactam hydrolysis.
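For orientation, the Vmax and Km values discussed above are the parameters of the standard Michaelis-Menten description of β-lactam hydrolysis:

```latex
v \;=\; \frac{V_{\max}\,[S]}{K_{m} + [S]}
```

where [S] is the β-lactam concentration, Vmax the maximal hydrolysis rate, and Km the substrate concentration at half-maximal rate; a low Km implies high apparent affinity, so a low Km combined with a low hydrolysis rate (as for cefoxitin) reflects tight binding but slow turnover.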
GENETIC FEATURES
AmpC genes are located on plasmids of different sizes, ranging from 7 to 180 kb. Many of these plasmids are self-transmissible, and only a few are transferred by transformation. Plasmids encoding AmpC enzymes often carry multiple other resistances, including resistance to aminoglycosides, chloramphenicol, sulfonamides, tetracycline, and trimethoprim; plasmids containing FOX-type enzymes have carried a fluoroquinolone resistance gene. Isolates producing AmpC enzymes may also produce other beta-lactamases, e.g., TEM, SHV, and CTX-M types. Chromosomal AmpC genes can be captured by transposons, as for ACT-1 and MIR-1. According to the mapping of the gene in plasmid pTKH11 19, the C. freundii-type blaCMY-5 gene lies close to the blc and sugE genes, which are located downstream from AmpC on the C. freundii chromosome. A putative insertion element that may have played a role in gene capture has replaced the AmpR gene that was located upstream of AmpC on the chromosome. Other plasmids encoding C. freundii-type AmpC enzymes, as far as they have been sequenced, have a similar organization, suggesting that they originated directly from the C. freundii chromosome and then accumulated mutations in the AmpC gene to produce the current array of CMY- and LAT-type enzymes, a scenario supported by phylogenetic analysis. Numerous resistance genes, such as those for the Ambler class A, B, and D β-lactamases, are found in gene cassettes with a downstream 59-base region that serves as a specific recombination site for inclusion into integrons 20.
According to an analysis of published sequences, AmpC genes identified on plasmids are not connected to 59-base elements. A site-specific integrase, two copies of qacEΔ1-sul1, an aminoglycoside resistance gene (aadA2) with its downstream 59-base region, and a putative recombinase (ORF341) are all present in an integron that contains the DHA-1 structural and regulatory genes on plasmid pSAL-1. In genomic organization, this integron shares characteristics with In6 and In7 of plasmids pSa and pDGO100, which lack the bla gene.
MOLECULAR ASPECTS
Regardless of their genetic origin, AmpC β-lactamases have broadly similar hydrolytic characteristics. These enzymes often have low Vmax and high Km values for the third-generation cephalosporins; the plasmid-encoded AmpC β-lactamases MIR-1 and MOX-1, as well as the kinetic values reported for ceftazidime with the Serratia marcescens AmpC β-lactamase, are significant outliers. 4-7 Because of the lack of consistency among laboratories in AmpC enzymatic investigations, the exact roles these enzymatic activities play in the overall resistance patterns of organisms remain unclear, and comparisons between laboratories of data on particular enzymatic activities are not possible. Even though various AmpC β-lactamases have slightly varying hydrolytic characteristics, organisms harboring these enzymes are not resistant to third-generation cephalosporins unless the AmpC β-lactamase is produced at high levels. 1 It is well known that chromosomal AmpC gene expression is induced by β-lactam antibiotics like cefoxitin and imipenem but is only marginally (if at all) induced by third- or fourth-generation cephalosporins. [10] The DNA-binding protein AmpR is required for induction, which is reversible after the inducing substance has been withdrawn. [13][14] A current study documents AmpC expression in S. marcescens. The contributions of gene copy number and promoter strength to total AmpC gene expression were discussed in a recent study, in which the relative copy numbers of many plasmid-borne AmpC genes were determined using a novel technique. These investigations did not support the widely held hypothesis that high-level expression of plasmid-carried AmpC genes is mediated by high-copy plasmids. After adjusting for copy number, analysis of gene expression revealed that AmpC expression in the absence of AmpR was much greater.
CLINICAL IMPLICATIONS OF PLASMID-ENCODED AmpC-MEDIATED RESISTANCE
The clinical microbiologists' most pressing issue is finding Gram-negative microbes with plasmid-encoded AmpC-mediated resistance. Although there are no established recommendations for detecting this resistance mechanism, clinical laboratories need to address this problem just as much as they do the detection of ESBLs. Cefoxitin-resistant AmpC producers should be distinguished from cefoxitin-resistant non-AmpC producers. Differentiating between these two types of organisms can influence the available treatment choices, with carbapenems being used for cefoxitin-resistant AmpC producers and extended-spectrum cephalosporins for cefoxitin-resistant non-AmpC, non-ESBL producers. The distinction would thus affect the use of cephalosporins and carbapenems, which in turn affects the selection pressure driving ESBL, AmpC, or plasmid-encoded carbapenem resistance. The emergence of AmpC β-lactamases encoded by inducible plasmids adds another warning sign for difficult detection. It is widely recognized that, in organisms encoding an inducible chromosomal AmpC, AmpD mutations are associated with the derepressed phenotype; it is less widely recognized that most Gram-negative organisms encode AmpD. Because there is no discernible phenotype in the absence of an inducible chromosomal AmpC, spontaneous AmpD mutations that should occur in clinical isolates of E. coli, K. pneumoniae, and Salmonella spp. have not been identified. When plasmid-encoded inducible AmpC genes are expressed in the presence of AmpD mutations, clinical isolates of E. coli and K. pneumoniae would be expected to exhibit considerable increases in expanded-spectrum cephalosporin MICs. In addition to isolates from people, plasmid-encoded AmpC β-lactamases have also been discovered in isolates from companion animals like dogs and from livestock like swine and cattle. These additional sources of AmpC-producing isolates increase the importance of precisely identifying this resistance mechanism. To stop the spread of plasmid-encoded AmpC-mediated resistance within the hospital, hospital-based clinical labs should screen isolates from community-based patients before hospitalization, given that the community is a source of AmpC-mediated resistance.
Studies to monitor plasmid-encoded AmpC β-lactamase genes obtained from the population are necessary. Using phenotypic susceptibility testing, it is challenging to differentiate between species that produce ESBLs and those that produce plasmid-encoded AmpC β-lactamases. Cefoxitin resistance may be a sign of AmpC-mediated resistance but may also reflect decreased membrane permeability. To assist in distinguishing cefoxitin-resistant non-AmpC producers from cefoxitin-resistant AmpC producers, certain phenotypic assays are available, such as the three-dimensional test and the AmpC disc test. Furthermore, β-lactamase inhibitors can be used to identify potential AmpC producers. None of these assays is standardized; therefore, screening large numbers of isolates can be time-consuming. To distinguish cefoxitin-resistant non-AmpC producers from cefoxitin-resistant AmpC producers, a recently developed multiplex PCR for the identification of plasmid-encoded AmpC genes has proven beneficial as a quick screening technique. The data produced by the multiplex PCR approach can detect the AmpC gene and identify the family of AmpC genes in the resistant organism, separating potential inducible AmpC producers from non-inducible AmpC producers. This PCR-based technique can also discriminate between E. coli isolates containing an 'imported' AmpC gene and isolates that produce excessive amounts of chromosomal AmpC. The capacity to identify the kind of AmpC or ESBL may help with hospital infection control and the doctor's ability to administer the best antibiotic, reducing the selection pressure that leads to antibiotic resistance. Adequate surveillance becomes a priority when one considers the need to discriminate between organisms producing ESBLs, plasmid-encoded AmpC β-lactamases, or both enzymes in a single organism; molecular testing must be used in the clinical laboratory. Surveillance is essential to control the Gram-negative β-lactamase resistance mechanisms we currently confront and, for the first time, to prevent the emergence of a new type of β-lactamase, the ESACs. There has been considerable advancement in our understanding of AmpC β-lactamases during the past 25 years. However, in reality we have not succeeded in stopping the proliferation of this resistance mechanism. The clinical consequences for patients infected with organisms producing plasmid-encoded AmpC β-lactamases, as well as AmpC production and the detection of resistance mechanisms in the clinical context for both outpatients and inpatients, require more study.
HEALTH THREAT OF ESBL/AmpC β-LACTAMASE
The pAmpC-producing organisms carry large plasmids bearing genes that confer resistance to first-line antibiotics such as quinolones, cotrimoxazole, and aminoglycosides. Infections due to pAmpC-producing Enterobacteriaceae therefore leave few therapeutic options, which increases morbidity and mortality in patients 21.
ANIMALS TRANSMISSION
The primary reservoir of ESBL/AmpC-producing organisms is still controversial. Plasmid-mediated AmpC beta-lactamases in particular have increased drastically over the past two decades. The transmission of these organisms in the community, hospitals, and homes suggests that intestinal colonization serves as a reservoir 22. These resistant organisms have been isolated from animal-derived food 23, farm animals 24, vegetables 25, petting zoos 26, and surface water, wastewater, and seawater 27.
Rational use of antibiotics.
Limiting the length of hospitalization stay.
Prevention of antibiotic usage in food production and animal farms.
PLASMID-MEDIATED BETA-LACTAMASES
Due to selective pressure, there is a wider range of resistance determinants, and antibiotic resistance has increased. The increased use of amoxicillin-clavulanate, tazobactam, and sulbactam has exerted additional selective pressure, and bacteria have various mechanisms to counteract it; one such response is the AmpC beta-lactamases. Though AmpC production is beneficial to the organism, its chromosomal location limits its spread. Plasmid-mediated AmpC β-lactamases have multiplied since the 1990s and increased drastically in the 2000s. Klebsiella spp. tend to acquire plasmids, and hence their prominence has increased. The exact route by which chromosomal AmpC moved onto plasmids is unknown, but transposons and plasmids spread the genes within species and among genera through conjugation. Plasmids carrying resistance determinants are integrated with integrons and transposons and form gene cassettes. The majority of such plasmids are self-transferable; a few are non-self-transmissible, and such plasmids are transferred by conjugative transposons or co-resident resistance plasmids. The major site of transfer is the intestinal tract. Rectal screening for multidrug resistance has demonstrated such transfer in E. coli, Proteus mirabilis, and Klebsiella spp., and plasmid transfer between K. pneumoniae and Salmonella has occurred in the intestine. The transfer of plasmid-mediated AmpC β-lactamases has been found in organisms that originally did not possess the genes. Hence, the acquisition of these genes by E. coli, Klebsiella, and Salmonella restricts the therapeutic alternatives for treating infections with these organisms. AmpC beta-lactamases may therefore have major health implications. The plasmid-borne enzymes comprise six families: four descend from enterobacterial AmpC beta-lactamases, while a fifth derives from Aeromonas. As a result, it is possible to predict that AmpC β-lactamases carried by plasmids in Enterobacteriaceae will eventually present issues with antibacterial treatment. It may be predicted that the prevalence of plasmid-carried AmpC genes will increase significantly if an AmpC gene is successfully acquired by a transposon, similar to the TEM-1 and TEM-2 β-lactamases, which are the most common plasmid-encoded β-lactamases in Gram-negative bacteria.
CURRENT CHALLENGES 30
AmpC β-lactamases are class C enzymes typically encoded by bla genes on the bacterial chromosome, though plasmid-borne AmpC enzymes are now more common. Penicillins, β-lactamase inhibitors like clavulanate and tazobactam, and the majority of cephalosporins, including cefoxitin, cefotetan, ceftriaxone, and cefotaxime, are often ineffective against organisms expressing an AmpC β-lactamase. AmpC enzymes only weakly hydrolyze the broad-spectrum cephalosporin cefepime, and carbapenems readily inactivate them; nowadays, however, cefepime resistance has also been found. Avibactam is particularly effective at inactivating AmpC cephalosporinases. Other bacteria, such as K. pneumoniae, Klebsiella oxytoca, Proteus mirabilis, and Salmonella spp., lack these chromosomal enzymes. Inducers and substrates of AmpC β-lactamase include benzylpenicillin, ampicillin, amoxicillin, and cephalosporins such as cefazolin and cephalothin; strong inducers include cefoxitin and imipenem. Ceftriaxone, ceftazidime, cefepime, cefuroxime, piperacillin, and aztreonam are weak inducers and weak substrates, but they can be hydrolyzed with sufficient enzyme expression. As a result, AmpC hyperproduction significantly raises the MICs of the weakly inducing oxyimino-β-lactams; because strong inducers already elicit a high degree of AmpC expression, the MICs of those drugs do not change significantly with regulatory alterations. A few β-lactamase inhibitors are also inducers, including clavulanate, which, strangely, can appear to increase AmpC-mediated resistance in an inducible organism despite having no inhibitory effect on AmpC β-lactamase activity. In clinically significant Gram-negative bacteria, AmpC enzyme production is often suppressed ("repressed") but can be "derepressed" by induction with specific β-lactams, especially cefoxitin and imipenem. Sulbactam, but not tazobactam, is a good inducer of AmpC β-lactamases. Although they have been the focus of extensive research, the genetic foundations of this regulation are not the subject of this review. Citrobacter, Salmonella, and Shigella are examples of Enterobacteriaceae members that produce clinically significant amounts of AmpC enzymes resistant to inhibition by clavulanate and sulbactam.
EVOLUTION
In the past 80 years, we have seen the ongoing evolution of substrate specificity meet every new β-lactam introduced. The "long view" forecasts that these enzymes will continue to change and take on new forms. Perhaps one day we will understand why this happens and how to stop it. Future research in this field must be flexible and open to novel ideas. We still do not fully understand all the correlates of activity and resistance or the mechanism underlying the emergence of novel structural variants. Addressing the current problems will require new technology.
Factors determining β-lactamase expression
1. Effective expression of efflux pumps reduces antibiotic accumulation and enhances the activity of the resistance enzyme. Limited treatment alternatives are usually available, since the genes are frequently carried on large plasmids that also carry other antibiotic resistance genes. These were known to spread in hospitals and later transferred to the community. Salmonella enterica serotype Newport producing CMY-2 has been linked to community-associated acquisition of, and infection by, enterobacteria with plasmid-mediated AmpC β-lactamases in Canada and the USA. Consumption of undercooked meat and handling of pet treats have been linked to infections with CMY-2-producing Salmonella enterica serotype Newport. Population-based research from the Calgary Health Region indicated that women had a fivefold greater risk of infection and that 61% of the 369 patients with community-associated infections caused by cephamycin-resistant isolates had AmpC-producing E. coli. PCR revealed that 34% of the samples were positive for bla genes, and sequencing showed that these enzymes were CMY-2. The study concluded that AmpC-producing E. coli is an emerging pathogen in the population that frequently causes urinary tract infections in older women in this broad Canadian region. This was followed by articles from Washington and Nebraska demonstrating the presence, in outpatient clinics in the USA, of Enterobacteriaceae producing the CMY, ACC, and DHA types of AmpC β-lactamases.
DETECTION OF AMPC β-LACTAMASES
Though there are no definitive guidelines laid down by CLSI for testing AmpC production in Gram-negative organisms, a cefoxitin screening test is commonly used to screen isolates, followed by phenotypic and molecular methods. Various methods have been devised: phenotypic tests such as the AmpC disk test, the AmpC disk test with EDTA, the modified three-dimensional test, and cefoxitin agar; β-lactamase-inhibitor assays using cloxacillin or boronic acid and its derivatives; and E-test strip combinations. The AmpC disk test and the modified three-dimensional test were more sensitive in detecting AmpC beta-lactamases in Enterobacteriaceae, showing high sensitivity and specificity relative to inhibitor-based tests. Multiplex PCR is the gold standard method for detecting plasmid-mediated beta-lactamases. A few methods for detecting AmpC β-lactamase production can also provide epidemiological information.
Cefoxitin screening test: 32 (CLSI guidelines).
The test organism is swabbed onto Mueller-Hinton agar (MHA), a cefoxitin disc is placed on it, and the plate is incubated at 37 °C for 16-18 hours. Reduced susceptibility to cefoxitin is used as the screening criterion, but it can also be produced by carbapenemases and, in E. coli and K. pneumoniae, by outer membrane porin defects.
AmpC disk test: 33
A saline- or EDTA-impregnated disc is placed adjacent to the cefoxitin disc on a lawn culture of the strain, and a few colonies are smeared over the impregnated or plain disc. Distortion of the cefoxitin zone of inhibition indicates AmpC production.
3. Three-dimensional test: 34,35 E. coli ATCC 25922 is grown in broth and swabbed as a lawn on MHA. A cefoxitin disc is placed on the plate, a circular slit of 3 mm is cut into the agar near it, and a suspension of the test strain is pipetted into the well. Distortion of the zone of inhibition indicates production of the AmpC enzyme. Modifications include using a radial slit and adding a pellet of the test organism that has been centrifuged and frozen and thawed five times.
4. Modified Hodge test: 36 On MHA plates, E. coli ATCC 25922 is swabbed as a lawn. Cefoxitin (30 μg) is placed in the center of the plate, and test strains are streaked from the edge of the disc to the periphery of the plate. Oblique growth producing a cloverleaf pattern is positive for AmpC production. Isolates showing no distortion around the cefoxitin zone are considered negative for AmpC production.
INHIBITOR BASED METHOD
1. Phenylboronic acid test: 37 Boronic acid derivatives are added to a β-lactam (cefoxitin) disk, which is placed near a plain cefoxitin disk on an MHA plate inoculated with the suspected organism and incubated for 18 hours at 37 °C. An enhanced zone of inhibition around the boronic acid/cefoxitin disk compared with the β-lactam alone indicates AmpC production. Philip E. Coudron (2005) 38 reported that phenylboronic acid is useful for detecting plasmid-mediated resistance. The sensitivity (60-70%) and specificity (45-98%) are still being evaluated.
2. Cloxacillin test: 39 A cloxacillin disc placed between cefotaxime and ceftazidime discs is used to detect AmpC production.
GENOTYPIC TEST
Multiplex PCR 11: Plasmid-mediated AmpC beta-lactamases are detected with the current gold-standard multiplex PCR, which targets the six AmpC families using six primer sets.

Multiplex asymmetric PCR-based array 40: This detects both plasmid-mediated AmpC beta-lactamases and mutants. Ongoing research aims to automate AmpC beta-lactamase detection by modifying this array method.
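The screening-to-confirmation sequence above can be summarized as a simple triage. The sketch below only illustrates that logic; the field names, result encoding, and the `ampc_detection_workflow` function are hypothetical simplifications of the cited methods, not a validated laboratory protocol.

```python
def ampc_detection_workflow(isolate):
    """Triage an isolate through the detection steps described above.

    `isolate` is a hypothetical dict of test results, e.g.
    {"cefoxitin_resistant": True, "ampc_disk_positive": True,
     "multiplex_pcr_family": "CMY"}.
    """
    if not isolate.get("cefoxitin_resistant"):
        return "AmpC unlikely; cefoxitin screen negative"
    if not isolate.get("ampc_disk_positive"):
        # Cefoxitin resistance without a positive phenotypic test can also
        # reflect porin loss or carbapenemase production (see above).
        return "Cefoxitin-resistant non-AmpC phenotype; consider porin defects or carbapenemases"
    family = isolate.get("multiplex_pcr_family")
    if family:
        return f"Plasmid-mediated AmpC confirmed (family: {family})"
    return "Phenotypic AmpC producer, PCR-negative; possible chromosomal hyperproduction"

print(ampc_detection_workflow(
    {"cefoxitin_resistant": True, "ampc_disk_positive": True,
     "multiplex_pcr_family": "CMY"}))
```

Encoding the workflow this way makes explicit that phenotype alone cannot separate imported plasmid genes from chromosomal hyperproducers; that distinction requires the molecular step.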
TREATMENT
Organisms producing AmpC β-lactamases are generally resistant to various antimicrobial agents, so the selection of an antibiotic for treatment is difficult. β-lactam/β-lactamase-inhibitor combinations should not be used to treat infections with AmpC-producing organisms because they carry the risk of induction and of selecting mutants. Studies have shown poor clinical outcomes for cefotaxime, ceftazidime, and piperacillin-tazobactam. Piperacillin-tazobactam can be used in bacteremia caused by AmpC-producing Enterobacteriaceae, but the Pitt bacteremia score has to be assessed. Cefepime generally tests susceptible with conventional methods because it is a poor inducer of AmpC production, but high-inoculum testing shows a drastic increase in cefepime MICs 41,42; hence, cefepime must be used carefully. Temocillin is active against both chromosomal and plasmid-mediated AmpC β-lactamases 43,44. Carbapenems such as imipenem, meropenem, and ertapenem can treat AmpC infections 45, but strains producing AmpC β-lactamases together with carbapenemases or porin defects have emerged. Fluoroquinolone therapy has been used for non-life-threatening infections such as UTIs. Drugs such as fosfomycin, tigecycline, colistin, polymyxin, aminoglycosides, and double-carbapenem regimens are used to treat infections with AmpC-producing Enterobacteriaceae 46. Tigecycline can be used for isolates such as Enterobacter spp., E. coli, Klebsiella spp., and Citrobacter that hyperproduce AmpC 47.
CONCLUSION
AmpC β-lactamases are important cephalosporinases encoded on the chromosomes of Enterobacteriaceae, mediating resistance to first- and second-generation cephalosporins, penicillins, and β-lactam inhibitors. Overexpression of these enzymes in Enterobacter species leads to cefotaxime, ceftriaxone, and ceftazidime resistance. Plasmid-mediated AmpC β-lactamases are found in E. coli, Klebsiella spp., and P. mirabilis, conferring resistance to many antibiotics. Detection of AmpC β-lactamases in pathogens is important for effective antibiotic therapy. Several methods for screening and confirmation have been evaluated and are evolving, but there are still no CLSI guidelines for detecting AmpC β-lactamases.
Fig1 2
Fig 1: Regulation of AmpC β-lactamases 6. THREE DIMENSIONAL STRUCTURE OF AMPC β-LACTAMASES 2 AmpC enzymes have remarkably comparable three-dimensional structures. On one side of the molecule is an ɑ-helical domain, and on the other, an ɑ/β domain. With the reactive serine residue at the amino terminus of the central ɑ-helix, the active site is in the middle of the enzyme at the left edge of the five-stranded β-sheet. The R1 site, which accommodates the R1 side chain of the β-lactam nucleus, and the R2 site, which accommodates the R2 side chain, can be distinguished.
Table 2: Origin of AmpC beta-lactamases (columns: AmpC β-lactamase, Year, Origin, First isolate detected in)
2. Reduced expression of outer membrane porins decreases the uptake of antibiotics, and such ESBL isolates have developed resistance to cefepime and imipenem among AmpC isolates. 3. Apart from this, resistance patterns also differ depending on the bacterial host and environment. Some ESBLs have lost intrinsic resistance, which can be restored by gene dosage or increased activity of the promoter region. | 2023-09-24T16:01:13.139Z | 2023-09-06T00:00:00.000 | {
"year": 2023,
"sha1": "c456991fcca9b0447f71df1c04cfd1d9b1b938a9",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.22376/ijlpr.2023.13.6.l171-l181",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a6354e25635b4329a4de5af50ca9970c331a9fa9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
11280554 | pes2o/s2orc | v3-fos-license | The Stigmatization of Leprosy in India and Its Impact on Future Approaches to Elimination and Control
Traditionally, India holds the unenviable position of the origin of leprosy. The disease is thought to have then spread, via trade and war, to China, Egypt, and the Middle East, and later to Europe and the Americas. From antiquity to modernity, Indian society treated leprosy singularly with respect to custom and law, a response shaped by both scientific knowledge and cultural attitudes. India's future challenges in leprosy control include multiple systems of medicine, stigma, and educational knowledge gaps. By looking through the historical window of leprosy in India, we propose that continued success in elimination and control requires a holistic approach addressing these issues (Image 1).
Leprosy in Ancient India
Early texts, including the Atharva Veda (circa 2000 BC) and the Laws of Manu (1500 BC), mention various skin diseases translated as leprosy. The Laws prohibited contact with those affected by leprosy and punished those who married into their families, effectively ostracizing those with the disease for their past sins [1]. The Sushruta Samhita (600 BC) recommended treating leprosy (kushtha, meaning ''eating away'' in Sanskrit) with oil derived from the chaulmoogra tree; this remained a mainstay of treatment until the introduction of sulfones [2].
In a legend explaining chaulmoogra oil's therapeutic origins, a king banished for his leprosy was instructed to eat the curative seeds of this tree, illustrating the cultural response to leprosy in antiquity: loss of social position and expulsion, even of kings, from the community [3]. Ancient Indian society marginalized those with leprosy because of several factors: its chronic, potentially disfiguring nature; inconsistently effective therapy; association with sin; and the fear of contagion. This combination endowed leprosy with a unique stigma that persists today and resulted in its treatment with both seclusion and medical therapy.
Leprosy in Colonial India
Soon after their arrival, Europeans described the uncommon practice of ritual suicide by those affected by leprosy, who were often assisted by their families. Though Hinduism generally considers suicide a sin, for leprosy it was not [4]. Christians too associated leprosy with sin. Struck by the scale of this Biblical disease, Europeans, especially missionaries, singled it out from a myriad of tropical infections. They often described the most dramatic forms of disfiguring leprosy, evoking fear of an ''imperial danger'': leprosy reaching the British Isles. The public pressured the colonial government for the segregation of people with leprosy.
Three events over a 30-year period strengthened the argument for confinement. First, the first leprosy census in 1872 quantified the problem: over 108,000 cases, for a prevalence of 5.4 cases/10,000 population. Approximately 1% received organizational support, renewing the cries for segregation to facilitate delivery of care [5]. Next, Hansen identified Mycobacterium leprae in 1873 and postulated it as the etiologic, transmissible agent of leprosy. Third, Father Damien, the Belgian missionary priest in Hawaii, contracted leprosy and died in 1889, proving its contagiousness. These events led to the popular consideration of leprosy as a widespread contagious disease requiring containment.
In response, the British government sent its Leprosy Commission (comprising both physicians and administrators) to India to investigate. The commission's report in 1891 concluded that ''the amount of contagion which exists is so small that it may be disregarded'' [6]. Initially, the colonial government accepted these findings but, under increasing popular pressure from England and within India, enacted the Leprosy Act of 1898. This law institutionalized people with leprosy, using segregation by gender to prevent reproduction. For the self-sufficient individual with leprosy, segregation and medical treatment were voluntary, but vagrants and fugitives from government-designated leprosaria were subject to punitive action. Charities and local governments in British India constructed many new institutions for people with leprosy, providing combined social, religious, and medical services. However, as predicted by the Leprosy Commission, the lack of infrastructure prevented the Leprosy Act from being strictly enforced. It was repealed in 1983 after the advent of effective multi-drug therapy for leprosy.
Leprosy in Post-Colonial India
Disease control marked the Indian government's initial approach, starting in 1955 with the creation of the National Leprosy Control Program for surveillance. In 1983, with the availability of curative multi-drug therapy, the government changed the name to the National Leprosy Elimination Program (NLEP), with a focus on treatment. Starting in 1997, the government conducted several modified leprosy elimination campaigns; these short, concentrated bursts of statewide case detection activities included orientation of all village-level workers and volunteers on leprosy, house-to-house searches in specified areas, and awareness programs using mass media, school activities, and community meetings. State governments also began integrating leprosy care into their general health systems starting in 1997, moving from vertical control programs to horizontal health services, an intervention shown to decrease the stigma associated with leprosy due to family counseling and community outreach [7].
On January 30, 2005 India celebrated the elimination of leprosy as a public health problem after achieving a nationwide prevalence of <1 case/10,000 population, though not without criticism regarding the accuracy and choice of target parameter [8]. This is a remarkable achievement given that in 1981, two years before NLEP, there were nearly 4,000,000 cases with a prevalence of >50 cases/10,000 population [9]. However, in a population of more than a billion people, up to 100,000 people with leprosy remain, representing approximately half of the world's disease burden. Some regions, mostly rural, still have up to five times the national average of cases; these areas have become the next targets in leprosy control [10].
The future of leprosy control and elimination offers several challenges with both structural and cultural dimensions. Efforts to decrease health inequity due to poverty, especially in rural areas with limited access to health care, may help with leprosy control. However, if cultural beliefs are not addressed, increased availability may not translate into an appropriate increase in utilization. Cultural aspects of leprosy affecting its control include traditional medicine and stigma.
Only limited efforts have been made to include the numerous nonallopathic (traditional) practitioners in India in leprosy control and elimination efforts, but their inclusion is important to its success [11]. Indians can seek public or private health care from allopathic (conventional Western) physicians, but often see private practitioners of homeopathy or the three major Indian systems of medicine (ISM): Ayurveda, Siddha, and Unani. The practitioners of ISMs, who outnumber allopaths in India, continue to use compounded botanicals and agents such as chaulmoogra oil for primary or adjunctive therapy. If this therapy fails, patients are referred to government clinics where free multi-drug allopathic therapy is offered; use of traditional medicine has been shown to be a risk factor for delay in diagnosis [12]. The popularity of ISM can, at least in part, be attributed to two factors: the stigma carried by government-run vertical leprosy clinics and the preference for traditional medicine. Further investigation into the safety and efficacy of ISM therapies is needed, and the possibility of integrating aspects of ISM into the general health system should be evaluated. For example, chaulmoogra oil may be effective as adjunctive therapy in wound healing [13]. The effectiveness of leprosy control in this integrated system should be periodically assessed not only in measures of leprosy rates, but of changes in knowledge, attitudes, and practices.
Leprosy continues to be stigmatized in a society with a deeply ingrained, though legally abolished, caste system, partly through lack of knowledge. Socially marginalized groups such as women, ''backward classes'' (minority social or ethnic groups defined by the government), and the urban poor are less likely to seek care; they often view elimination efforts as problematic because they fail to account for their individual needs [14]. Further, community education and medical knowledge of the disease do not immediately dispel stigma. In one community, only 30% of individuals claiming a high knowledge of leprosy also had a positive attitude toward patients with leprosy [15]. More studies are needed to better understand the causes of stigma and to assess the effect of interventions to decrease it.
Hansen's disease is still called kusht in most Indian languages, as it was in Sushruta's time. The word itself still evokes fear and aversion, despite Mohandas ''Mahatma'' Gandhi's efforts to destigmatize the disease. Parchure Shastri, a Brahmin and Sanskrit scholar who became an outcast when he acquired leprosy, came to stay in Gandhi's ashram in 1939. His contemporaries considered sheltering or touching a person with leprosy unthinkable, but Gandhi changed Shastri's wound dressings and massaged his feet daily. This iconic image (http://commons.wikimedia.org/wiki/Image:Gandhi_leper.jpg) was later depicted on a postage stamp emblazoned with the words ''leprosy is curable.'' The cultural shift Gandhi desired is materializing; in 2005, representatives of the estimated 630 leprosy colonies in India met in New Delhi. Entitled ''Empowerment of People Affected by Leprosy,'' this conference sought to demarginalize those affected by the disease and reintegrate them into society.
Conclusions
The history of leprosy in India offers insights into one of the world's most misunderstood diseases. Furthermore, leprosy control and elimination in India still faces many challenges. Although many of the theoretical and practical approaches of the past have been discarded, their careful examination provides insights for the future. Sustaining the gains made so far and further reducing the disease burden in India require an innovative, holistic approach that includes ongoing education, efforts to identify interventions to dispel stigma, and the inclusion of nonallopathic practitioners in disease control programs. | 2014-10-01T00:00:00.000Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "94c0dbea38636dfbc2cb2cfa7103a37e7292488d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0000113&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c0584d9a557623bd1e8ee2b991d9eb44c21765a",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255676913 | pes2o/s2orc | v3-fos-license | Numerical Study on Flexible Pipe End Fitting Progressive Failure Behavior Based on Cohesive Zone Model
Flexible pipes are extensively used to connect seabed and floating production systems for the development of deep-water oil and gas. In the top connection area, the end fitting (EF) is the connector between the flexible pipe and the floating platform and a critical location for structural failure. To address this issue, a combined numerical and experimental prediction method is proposed in this paper to investigate the failure behavior of flexible pipe EFs considering tensile armor and epoxy resin debonding. In order to analyze the stress distribution of the tensile armor and the damage state of the bonding interface as the tensile load increases, a finite element model of the EF anchorage system is established based on the cohesive zone model (CZM). Additionally, the effects of the epoxy resin shear strength (ss) and the steel wire yield strength (ys) on the structural load-bearing capacity are discussed in detail. The results indicate that wire strength and interface bonding have a substantial effect on the anchorage system's failure behavior: the low-strength wire anchorage system shows a three-stage failure behavior with wire yielding as the predominant failure mode, while the high-strength wire anchorage system shows a two-stage failure behavior with interface debonding as the predominant failure mode.
Introduction
Over the years, oil and gas development has extended further into deep water, and flexible pipes are widely used in floating production systems to connect the seabed infrastructure and the floating platform [1]. An unbonded flexible pipe is a typical composite multi-layer structure designed for a harsh environment, which includes a carcass, inner sheath, pressure armor, tensile armor, anti-wear tapes, and outer sheath; the structure of typical unbonded flexible pipes is described in detail in the API 17B [2] specification (as shown in Figure 1). The polymeric layers act as seals, insulators, and/or anti-wear elements, while the metallic layers bear most of the mechanical load [3]. During service, flexible pipes are subject to various loads, including tension, bending, and internal and external pressure. During ultra-deep-water operations, the top connection area is subjected to high tensile loads caused by the pipe's self-weight, posing a significant risk to the safety of the flexible pipe. End fitting (EF) is one of the most crucial auxiliary devices of flexible pipe systems. As flexible pipes are utilized in deeper and deeper water, the issue of the end fitting's ultimate load-bearing capacity becomes increasingly significant. EF also serves as a termination for the pipe's primary structural components, i.e., the carcass, internal polymer sheath (fluid barrier), pressure armor, tensile armor, and external polymer sheath. EF is the structural interface in the top connection area; an anchorage system alone transfers all the axial stress from the flexible pipe body and the helically armored steel wire to the epoxy resin inside the end-fitting body (shown in Figure 2). Offshore operational experience has proven that the EF region is the weakest point of the flexible pipe system [4].
The fatigue failure of the tensile armor inside the end fitting has received extensive attention.Shen et al. [10] proposed a finite element (FE) model considering resin debonding and complex geometry to predict the stress characteristics of the tensile wire inside the end fitting.The results indicate that the highest stress measured within the EF is greater than any other position along the pipe's body.Simultaneously, he conducted several small-scale test samples to confirm the FE analysis model.Xaiver [11] regards the steel wire in the end fitting as the metal bar in prestressed concrete.He proposed a new anchorage model with a lower concentration and analyzed the stress of the tensile armor wire.Daflon [12] studied the adhesion between the resin and tensile armor wire by conducting the tensile test on the single tensile armor wire and complete end fitting.The shear stress results were between 7.5 and 14.8 MPa.According to the results of the test that was carried out in the 2.5 pipe end fitting, the tensile armor wires that are located inside the end fitting are of critical significance to the structural integrity of the flexible riser.Bueno [13] predicted the stress concentration factor through a 3D FE model, where the stress-strain characteristics of armored steel wire inside the end fitting are better described.Then, he carried out a full-scale test to verify the results of the numerical analysis.Sousa et al. [14] proposed a 2D FE approach to estimate the stress of tensile armors inside end fittings, and they performed a parametric study to investigate the influence of three parameters on the stress state along the wire.These three parameters are the contact conditions between the resin and tensile wire, the stress level during the factory acceptance test (FAT), and the resin's elastic properties.The study indicates that the stress distribution along the wire may be significantly affected by these parameters.Campello et al. [15,16] designed a novel concept of flexible pipe end fitting that they called tensile armor foldless assembly, which assessed the stress distribution and fatigue performance considering the EF mounting process and operational loads and quantified this difference between a novel concept EF and a conventional EF, claiming that the maximum stress on the key segment of the wires lay inside and that EF was expected to be approximately 2.4 times higher than the outside.Anastasiadis et al. [17] developed a finite element model in order to investigate the stress concentration characteristics of the steel wire inside the end fitting while taking into account the impact of the assembly process.They also proposed a formula that provided a rough estimate of the maximum stress concentration factor based on a parametric study.Miyazaki et al. [18,19] applied a 3D finite element model to perform fatigue analysis.According to the findings of this study, the level of the stress concentration that was connected with the EF assembly and FAT had a substantial influence on the fatigue life of the flexible pipes.Torres et al. 
[20] conducted laboratory tests to investigate the characteristics of epoxy resin.By modifying the ratio of resin to hardener, the compressive and adhesive capabilities of the epoxy resin were enhanced.Mattedi [21] proposed the improvement of the epoxy for the anchorage system, with a focus on the mechanical and adhesive qualities, by adding multi-walled carbon nanotubes (MWCNTs) to the mixture.In order to analyze the system's sensitivity to the features of the epoxy, an analytical model for the anchorage mechanism was built and then evaluated by numerical analysis.After that, the morphology of the nanotubes and the homogeneity of the matrix were expert-analyzed to see how they correspond with the mechanical results.
However, the existing studies on the structural analysis of end fittings focus mostly on the stress analysis of the steel wire and lack precise information on the failure behavior of the entire anchorage system under extreme tension loads. There are relatively few historical investigations of debonding failure between the epoxy resin and the tensile armor steel wire; the majority of the literature focuses on stress distribution and fatigue damage along the tensile armor wire inside the EF. This research aims to improve the understanding of the progressive failure behavior of end fittings, focusing on the debonding failure behavior under tensile load of the epoxy resin and armored steel wire anchorage system inside the EF. A numerical analysis method was used to predict the end fitting debonding failure behavior, and a model test was carried out to investigate the EF connection failure process.
Materials and Methods
The progressive failure behavior of EF anchorage systems in flexible pipes was investigated using numerical methods in this work, and the results were verified by model tests. The research process (shown in Figure 3) can be classified into four distinct steps:

1. Consider the geometry and loading direction of the end fittings to simplify the three-dimensional helical arrangements.
2. Conduct a single-lap shear (SLS) test to determine the material properties of the epoxy resin inside the end fitting.
3. On the basis of the material properties gathered in step 2, use a CZM-based numerical model to investigate the progressive failure behavior of the anchorage system under an axial tension load. The failure process is described in terms of the time-dependent evolution of damage parameters and stress along the steel wire.
4. Perform model tests and compare the acquired load-displacement curves to the numerical results in order to determine the validity of the numerical model developed in this research.
End Fitting Structure and Model Simplification
The end fitting is intended to prevent the pulling out of the tensile armor under dynamic and static service loads. Utilizing an embedded epoxy to secure the tensile armors of a flexible riser into the end fitting is a typical approach that gives excellent mechanical and chemical protection. To fulfill the requirements of the end fitting sealing system assembly and to improve the axial load capacity, the armor steel wire has a complicated geometric shape and is placed with a helical angle between 20 and 50 degrees.
It is recommended that the end fitting model be simplified due to the complexity of the internal structure and the limitations of the computational capacity. Campello et al. [4] pointed out that any axial stress acting on the flexible pipe is assumed to be distributed equally among all the pipe armor wires, so the double-layer armor can be simplified to a single layer when considering the axial load. A recent paper by Shen et al. [16] represented the entire EF as a longitudinal slice, ignoring the helical structure of the tensile armor. Following this common practice in the existing literature, this paper simplifies the load-bearing 3D helical model into a 2D model (shown in Figure 4).
Single Lap Shear Test for Adhesive Material Properties
Practical engineering applies the single-lap shear test (according to ASTM D4896 [22]) to acquire accurate material properties. In this article, a single-lap shear test was conducted using a tensile testing machine (the experiment setup and sample dimensions are shown in Figure 5a). In order to prevent large deformation of the steel plate during the test, the steel plate is made of high-strength galvanized steel. First, 100# sandpaper is used for surface sanding and an acetone solution is used to clean the surface; the adhesive is then applied according to the design size, and pressure is applied to ensure a strong bond. The joint is cured at 120 °C for 6 h and finally left for 24 h before carrying out the experiment. The load-displacement curve is illustrated in Figure 5b. As can be seen, the load increases linearly until it reaches the point of maximum shear force, at which point failure occurs. The shear strength of Araldite 2015 is 15.3 MPa; the other parameters required for the simulation are derived from Campilho et al. [23] (listed in Table 1). The evaluation of interface debonding is based on the mechanics of elastic-plastic fracture, even though this theoretical method is often applied only to simple geometries under unidirectional stresses. The complicated fracture problems of practical engineering structures otherwise necessitate costly and time-consuming testing; when dealing with debonding failures, a multi-method approach that combines numerical analysis with limited experimental verification saves time and has found widespread application.
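Since the apparent shear strength reported above is simply the peak load divided by the bonded overlap area, the reduction of raw SLS data takes only a few lines. The sketch below is a minimal illustration of that calculation; the overlap dimensions and peak load are hypothetical placeholders, chosen only so that the result reproduces the 15.3 MPa reported for Araldite 2015, and are not the actual test values.

```python
# Minimal sketch: apparent lap-shear strength from a single-lap shear (SLS) test.
# Overlap dimensions and peak load below are illustrative placeholders.

def lap_shear_strength(peak_load_n: float, overlap_length_mm: float,
                       overlap_width_mm: float) -> float:
    """Apparent shear strength (MPa) = peak load / bonded overlap area."""
    bond_area_mm2 = overlap_length_mm * overlap_width_mm
    return peak_load_n / bond_area_mm2  # N / mm^2 is numerically MPa

# A hypothetical 25 mm x 12.5 mm overlap failing at 4781 N gives
# 4781 / 312.5 = 15.3 MPa, matching the strength reported for Araldite 2015.
print(lap_shear_strength(4781.0, 25.0, 12.5))
```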
The cohesive zone model (CZM) [24] and the virtual crack closure technique (VCCT) [25] are the primary FEM methods used for interface fracture mechanics. The CZM approach can capture interface debonding initiation and propagation, in addition to handling mixed failure modes (as depicted in Figure 6a), while the VCCT method considers the failure of the composite immediately after the loss of adhesion and is mainly used for failure mode I. CZM is therefore used to characterize the material constitutive relationship during the failure process in order to evaluate the progressive debonding failure behavior of the steel wire-resin anchorage system. CZM establishes the interface's material behavior from elastic deformation through damage accumulation to total debonding via a traction-separation relationship. The most common CZM behavior is the bilinear debonding law, shown in Figure 6b. This paper focuses on the failure behavior under tensile loading: in the simplified model, the contact stress across the interface increases, reaches a maximum, declines, and is eventually lost, leading to complete debonding.
Figure 6b illustrates the behavior of the traction-separation law for a single fracture mechanism. If one fracture mode acts alone, the crack is initiated at $\sigma_c$. When the bonding system is loaded, the relative displacement of the contact surfaces produces contact stress in the cohesive zone. $K$ is the ratio of contact stress to contact separation before damage occurs, and $\delta_c$ is the critical separation at which damage initiates. The damage then grows and the interface stiffness decreases; when the maximum displacement $\delta_{sep}$ is reached, the contact surfaces are entirely separated. Equations (1) and (2) below are the governing equations of the bilinear cohesive zone model, which apply to all three separation modes in Figure 6a. The damage is measured by the damage parameter $D$, which for the standard bilinear law takes the form of Equation (1), where $\delta_{max}$ is the largest separation attained during loading:

$D = \dfrac{\delta_{sep}\,(\delta_{max} - \delta_c)}{\delta_{max}\,(\delta_{sep} - \delta_c)}$ (1)

The governing equations of the bilinear law are then given by Equation (2):

$\sigma = K\delta$ for $\delta \le \delta_c$; $\sigma = (1 - D)\,K\delta$ for $\delta_c < \delta < \delta_{sep}$; $\sigma = 0$ for $\delta \ge \delta_{sep}$ (2)

ABAQUS 2020 [26] was utilized to represent the cohesive zone in the interface debonding. To improve the calculation's accuracy, it is essential to input accurate material parameters and select appropriate failure criteria. In this article, the square stress criterion is employed to determine the initiation of the damage, whereas the power law criterion is used to determine fracture failure.
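To make Equations (1) and (2) concrete, the following sketch implements the bilinear traction-separation law together with the quadratic (square) stress initiation check. It is a minimal illustration of the standard bilinear CZM, assuming the usual irreversibility convention in which δ_max tracks the largest separation reached so far; it is not extracted from the ABAQUS implementation, and the symbol names mirror the text above.

```python
# Sketch of the bilinear cohesive law of Equations (1)-(2). Damage is
# irreversible: delta_max is the largest separation seen during loading.

def damage(delta_max: float, delta_c: float, delta_sep: float) -> float:
    """Damage parameter D in [0, 1] (reported as CSDMG in ABAQUS)."""
    if delta_max <= delta_c:
        return 0.0
    if delta_max >= delta_sep:
        return 1.0
    return delta_sep * (delta_max - delta_c) / (delta_max * (delta_sep - delta_c))

def traction(delta: float, delta_max: float, K: float,
             delta_c: float, delta_sep: float) -> float:
    """Contact stress: linear up to delta_c, then softening to zero at delta_sep."""
    d = damage(max(delta, delta_max), delta_c, delta_sep)
    return (1.0 - d) * K * delta

def initiation_quadratic(t_n: float, t_s: float, t_n0: float, t_s0: float) -> bool:
    """Square stress criterion for damage initiation (2D normal/shear mix);
    compressive normal stress (t_n < 0) does not contribute."""
    return (max(t_n, 0.0) / t_n0) ** 2 + (t_s / t_s0) ** 2 >= 1.0
```

At δ = δ_c the traction peaks at σ_c = Kδ_c, and at δ = δ_sep the damage parameter reaches 1 and the traction vanishes, reproducing the triangle of Figure 6b.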
Finite Element Model Setup
A finite element model of the tensile armor pull-out from the anchorage system was generated using the CZM approach. The model consists of three parts: the outer casing, the epoxy resin, and the tensile armor steel wire. The end-fitting outer casing is set as a fixed support, the contact between the epoxy resin and the tensile armor is assigned cohesive behavior, and a displacement load is applied to the end of the tensile armor to simulate the failure process of the anchorage model. The geometry is shown in Figure 7. The material and input parameters are summarized in Table 1. The contact behavior of the interface was modeled, as discussed in Section 2.3, using bilinear cohesive behavior, and a displacement-controlled load of 8 mm was applied at the free end of the wire. To simplify the calculation, the outer casing, which undergoes very little deformation, is considered a linear elastic material (Young's modulus of 210 GPa). For the tensile armor, a bilinear hardening model is adopted: Young's modulus is 210 GPa, the yield strength is 235 MPa (yield strain of 0.01), and the ultimate strength is 400 MPa (failure strain of 0.23). The epoxy resin is considered an isotropic linear elastic material with a Young's modulus of 3800 MPa.
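The bilinear hardening law quoted above for the wire can be written compactly as a piecewise function. In the sketch below, the elastic-limit strain is taken as ys/E for internal consistency; this is an assumption on our part, since the quoted yield strain of 0.01 does not follow from E = 210 GPa and ys = 235 MPa.

```python
# Sketch of the bilinear elastic-plastic law used for the tensile armor wire,
# with the moduli and strengths quoted in the text (units: MPa).

E = 210_000.0    # Young's modulus
SIGMA_Y = 235.0  # yield strength
SIGMA_U = 400.0  # ultimate strength
EPS_F = 0.23     # failure strain

EPS_Y = SIGMA_Y / E                        # elastic-limit strain (assumed as ys/E)
H = (SIGMA_U - SIGMA_Y) / (EPS_F - EPS_Y)  # linear hardening modulus

def wire_stress(eps: float) -> float:
    """Elastic up to yield, linear hardening to ultimate, zero after failure."""
    if eps <= EPS_Y:
        return E * eps
    if eps <= EPS_F:
        return SIGMA_Y + H * (eps - EPS_Y)
    return 0.0  # wire has broken
```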
The simplified model studied in this paper is comparable in size to a thin plate, and the structure's deformation and failure behavior is driven by an in-plane tensile load, so it can be treated as a two-dimensional model. The load-bearing capacity of a 2D model is given per unit thickness and grows proportionally with plate thickness. Each component of the model uses the CPS4R element (4-node bilinear, reduced integration with hourglass control) to balance computational efficiency and precision, as shown in Figure 8. Surface bonding is simulated by cohesive contact behavior, which implies that the failure of the cohesive bond is characterized by progressive degradation of the cohesive stiffness caused by damage.
Model Test of Anchorage System
The model test is employed to validate the results of the numerical analysis [27]; the test sample has the same dimensions as the numerical model. During the pull-out test, a tensile testing machine is used to apply a 10 mm displacement load (2 mm/min), and the pull-out force-displacement curve is measured. The experiment is illustrated in Figure 9, and the results are described in detail in Section 3.1.
Numerical and Experimental Results
This section presents the load-displacement curves from the FEM and the experiment, describes the progressive behavior of the end-fitting model through the evolution of the Mises stress distribution along the steel wire and the interface damage factor CSDMG during the loading process, and finally investigates the failure behavior of anchorage systems with different steel wire yield strengths and adhesive shear strengths.
Load-Displacement Curve from FEM and Experiment
The results of the model test described in Section 2.4 are shown in the figure below. For the anchorage system consisting of Q235 carbon steel (ys = 235 MPa) and adhesive (Araldite 2015), an experiment and a finite element analysis based on the cohesive zone model were performed.
The FEM and experimental results for the relationship between tensile load and displacement of the flexible pipe end fitting anchorage system under axial load are shown in Figure 10. The finite element approach shows a growth trend matching the load-displacement curve derived from the experimental results, indicating the finite element method's feasibility for predicting the anchorage system's debonding failure behavior. A further observation of the load-displacement curve demonstrates that, along the loading process, the tensile load first grows rapidly from point O to point A due to the steel wire's elastic deformation. After point A, the rate of growth slows and the tensile load plateaus; the change in slope after point A may be due to the plastic deformation of the wire and the initiation of interface bonding damage. The top of the curve is located at point B; from point A to point B, the damage expands rapidly while the wire undergoes plastic deformation. At point B, the anchorage system totally fails, and the load drops sharply after this point. Point B's coordinates are (L_B, D_B), where L_B is the failure load, indicating the maximum bearing capacity of the anchorage system, and D_B is the failure displacement.
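Locating point B in a recorded or computed curve amounts to finding the load maximum. A minimal post-processing sketch follows, assuming the displacement and load histories are available as NumPy arrays (the array names are illustrative):

```python
import numpy as np

def failure_point(displacement: np.ndarray, load: np.ndarray) -> tuple[float, float]:
    """Return (L_B, D_B): the peak load and the displacement at which it occurs."""
    i = int(np.argmax(load))
    return float(load[i]), float(displacement[i])

# Hypothetical usage with a recorded pull-out curve:
# L_B, D_B = failure_point(disp_mm, force_n)
```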
Stress Evolution of Steel Wire
The Mises stress in the tensile armor steel wire is a key indicator of the anchorage system's failure condition. By analyzing the evolution of the Mises stress throughout the length of the wire during the loading process, it is possible to better understand the failure mechanism of the EF anchorage system.
Figure 11a shows the entry point of the anchorage structure (x = 0) and the Mises stress distribution along the steel wire, and Figure 11b illustrates the stress evolution of the wire from d1 to d7; the specific values of d1-d7 are shown in the figure legend. From d1 to d3, the Mises stress along the steel wire decreases gradually due to the shearing effect of the adhesive, with the rate of decline likely related to the adhesive's shear strength. From d4 to d5, when the wire Mises stress near the entry first exceeds the yield strength, plastic deformation occurs, and the length of the yielded section expands gradually toward the inside as the load increases. From d6 to d7, a stable plateau appears along the stress distribution of the wire, denoting that the shear effect disappears, the interface between the wire and the adhesive has debonded, and the stress distribution of the wire can indicate adhesive failure. The stress evolution characteristics described above are generally consistent with the failure mechanism characterized by the load-displacement curve in Section 3.1.
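The gradual decay of wire stress away from the entry point in the elastic stage (d1 to d3) is reminiscent of the classical shear-lag idealization, in which the axial stress decays roughly exponentially with distance from the loaded end. The sketch below is that textbook approximation with placeholder parameters, offered only as an aid to intuition; it is not the paper's FE solution.

```python
import math

# Idealized shear-lag decay of wire axial stress in the elastic (pre-damage)
# stage: sigma(x) ~ sigma0 * exp(-beta * x), beta = sqrt(k * p / (E * A)),
# for interface shear stiffness k (N/mm^3), bonded perimeter p (mm),
# wire modulus E (MPa), and wire cross-section A (mm^2). Values are placeholders.

def wire_stress_elastic(x_mm: float, sigma0_mpa: float, k_npmm3: float,
                        p_mm: float, e_mpa: float, a_mm2: float) -> float:
    beta = math.sqrt(k_npmm3 * p_mm / (e_mpa * a_mm2))  # decay rate, 1/mm
    return sigma0_mpa * math.exp(-beta * x_mm)
```

A stiffer interface (larger k) gives a faster decay, consistent with the observation that the decline rate is likely related to the adhesive's shear properties.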
Failure Process of the Anchorage System
For visualization of the debonding process of the flexible pipe end fitting anchorage model, the cohesive surface damage (CSDMG) parameter D is adopted in ABAQUS. CSDMG = 0 denotes undamaged material, whereas CSDMG = 1 denotes total material failure (no remaining stiffness) of the cohesive surface.
Figure 12 illustrates the progressive failure process of the EF anchorage system. At the start of the loading procedure, the structure is not damaged (the undamaged status), and the interface CSDMG value is 0. With further loading, the wire gradually reaches its elastic limit, plastic deformation occurs, debonding initiation begins, positions with CSDMG = 1 begin to appear, and the failure starts at the entry point (x = 0). As the load grows, the damage gradually extends inward; axial shear stress dominates the surface and its debonding propagation process. Eventually, the structure becomes totally debonded, and the damage parameter CSDMG over the whole straight section reaches 1. Although internal twisted sections of wire remain in a bonded state, the failure of the straight section is unacceptable for the EF; therefore, this state is considered a structural failure.
When d = 0.85 mm, debonding starts at the connection position of the wire and the adhesive; debonding failure grows inward continuously during the loading process, and when d = 3.56 mm, total debonding occurs in the straight section of the wire as the tensile load reaches its peak.
Different Steel Wire Yield Strength
The anchorage system consists of steel wire and epoxy resin, and the current API code primarily addresses the steel wire utilization factor when designing end fittings. This study shows that there are limitations in forecasting the structural failure behavior based on the linear elasticity of the steel wire and that it is advantageous to consider elastic-plasticity when assessing the load-bearing capacity of the anchorage system. Taking the elastic-plastic behavior of steel wires into account, this paper investigates the failure behavior of anchorage systems composed of four commonly used types of steel and the same epoxy resin adhesive (Araldite 2015); the material properties of the different steels are shown in Table 2. The load-displacement curves of the anchorage systems made of the various steel wire materials are illustrated in Figure 13, and there are distinct differences between them. For Steel A and Steel B, carbon steels with low yield strength, the curves demonstrate three stages, from increase to plateau and decline, where the anchorage system yields and undergoes plastic deformation before interfacial debonding leads to structural damage; in this case, enhancing the yield strength of the tensile armor wire can further improve the structural load capacity. The load-displacement curves for Steel C and Steel D, which have higher yield strength, show only two stages, rising and falling, without significant plastic deformation; in this situation, bond failure is the dominant cause of failure. The load-displacement curves are remarkably similar for the different high-yield-strength steel wires: increasing the yield strength (ys) of the wire does not further increase the load-bearing capacity of the anchorage system.
Different Adhesive Shear Strength
The adhesive material's properties also have an essential influence on the load-bearing capacity of the anchorage structure. This paper focuses on the failure behavior of pipe end fittings under tensile loading; as the tensile load is parallel to the wire direction, the shear strength of the wire-epoxy bond interface is the crucial variable that dominates the debonding of the interface. When comparing shear strengths, the results obtained for nominal strengths and stiffnesses do not differ significantly. Therefore, in this study, which focuses on the effect of adhesive shear strength, the load-bearing capacity of anchorage systems consisting of various wire materials and epoxy resins with different shear strengths was investigated. The shear strength properties of the different adhesives are listed in Table 3.
Figure 14a illustrates the calculated load-displacement curves of the 450 MPa steel wire with various shear strength (SS) adhesives. The data show that as the interface shear strength grows, the structural load-bearing capacity remains roughly constant while the failure displacement continues to increase. The increase in load-bearing capacity driven by the rise in shear strength is constrained because wire yielding occurs prior to structural debonding in this situation. Figure 14b-d show the load-displacement curves of anchorage systems composed of wires with yield strengths of 800 MPa, 1200 MPa, and 1300 MPa, respectively. In these cases, plastic deformation has not yet occurred when interfacial debonding takes place, and the interfacial debonding caused by insufficient adhesive shear strength is the predominant factor contributing to failure.
Discussion
There are three failure modes for similar bonding systems according to composite failure theory: wire failure, interface debonding, and epoxy resin failure. Due to the protection provided by the EF outer casing, breakdown of the epoxy resin is less likely to occur. Therefore, wire plastic fracture and interface debonding are employed as the primary failure modes of flexible pipe EFs. This paper discusses the failure modes of the flexible pipe end fittings and the factors affecting the structural load-bearing capacity based on the previous calculation results.
Progressive Failure Behavior
The relative magnitude of the wire yield strength compared to the interfacial shear strength of the epoxy resin causes significant differences in the failure behavior of the anchorage system.
The load-displacement curve shows three stages of growth, plateau, and decline, which indicates that the failure of the structure can be divided into three stages: elastic bonding, debonding propagation, and final failure. In anchorage systems composed of low-yield-strength steel wires, failure of the steel wires occurs prior to debonding of the interface. The elastic bonding stage is characterized by a linear relationship between load and displacement; the shape of this part of the curve is controlled by the elastic modulus and yield strength of the steel wire. As Young's modulus increases, the slope of the curve increases, and the maximum load rises as the yield strength increases. The peak load is not significantly increased by an increase in shear strength, since the wire undergoes plastic deformation throughout the debonding extension phase. The plastic deformation does, however, increase the failure displacement.
The structural failure process for anchorage systems made of high-yield-strength steel wires typically does not include plastic deformation. The load-displacement curve has only two stages; failure of the bonding interface is the predominant failure mode, and the interfacial shear strength of the wire and epoxy is the most crucial factor governing the peak load.
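The competition between these two failure modes can be summarized in a simple screening calculation: the governing mode is the one with the lower predicted load. The sketch below is illustrative only; the circular wire geometry, embedment length, and strength values are assumptions for demonstration and are not taken from the paper's models.

```python
# Illustrative screening of the two failure modes discussed above:
# wire yielding vs. wire-epoxy interfacial debonding.
# All numbers below are assumptions for demonstration, not values from the study.

import math

def failure_mode(yield_strength_mpa, shear_strength_mpa,
                 wire_diameter_mm, embedded_length_mm):
    """Return the governing failure mode and the predicted peak load (kN)."""
    area = math.pi * (wire_diameter_mm / 2) ** 2                  # wire cross-section, mm^2
    bond_area = math.pi * wire_diameter_mm * embedded_length_mm   # lateral bond surface, mm^2

    yield_load = yield_strength_mpa * area / 1000.0        # kN (MPa * mm^2 = N)
    debond_load = shear_strength_mpa * bond_area / 1000.0  # kN, uniform-shear idealization

    if yield_load < debond_load:
        return "three-stage (wire yields first)", yield_load
    return "two-stage (interface debonds first)", debond_load

# Hypothetical low-yield vs. high-yield wires with the same adhesive:
for ys in (450, 1300):
    mode, load = failure_mode(ys, shear_strength_mpa=15,
                              wire_diameter_mm=5, embedded_length_mm=60)
    print(f"yield strength {ys} MPa -> {mode}, peak load ~ {load:.1f} kN")
```

This reproduces the qualitative trend reported above: for low-yield wires the yield load governs, so raising the adhesive shear strength mainly adds failure displacement, whereas for high-yield wires the interface governs and the shear strength sets the peak load.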
Influence Parameter on Load-Bearing Capacity
The failure displacements of the anchorage systems with various material combinations are summarized in Figure 15. The three-stage failure shows a higher failure displacement than the two-stage failure due to plastic deformation. Comparing the failure displacements of Steels C, D, and E indicates that the yield strength has no effect on the failure displacement; however, an increase in the shear strength at the bonding interface can significantly increase both the failure load and the failure displacement. For Steel C, the failure displacement rises by 9%, 17%, and 23% for 20%, 30%, and 40% increases in epoxy resin shear strength, respectively. The fact that the percentage increase is unaffected by the steel wire strength further confirms that interfacial bonding failure predominates in the two-stage failure.
Conclusions
Deep-water oil and gas development is limited by the load-bearing capacity of the anchorage system in flexible pipe end fittings. The existing literature emphasizes the failure behavior of steel wires under static and dynamic axial loads but rarely considers the debonding behavior between steel wires and resin. In this paper, we conducted a numerical study on the progressive failure behavior of the anchorage system in end fittings under the axial tension brought by the self-weight of the pipe, using the ABAQUS software. We then discussed the influencing factors that affect the load-bearing capacity and failure behavior of the anchorage. According to the results of this research, the following conclusions and suggestions can be made:
1. The finite element method based on the cohesive zone model can effectively simulate the mechanical behavior of the anchorage system under axial loading. The load-displacement curve obtained from the numerical simulation shows growth trends similar to the experimental result.
2. Further investigation of the load-displacement curve of the anchorage system under axial load leads to the conclusion that the failure behavior of flexible pipe end fittings depends significantly on the material selection of the anchorage system, exhibiting either two or three stages.
3.
Two-stage failure behavior is common in end fittings consisting of high-strength tensile armor steel wire, which is characterized by no plastic deformation in the steel wire, only a linear bonding, and debonding stage, in the load-displacement curve.Interfacial debonding is the predominant failure mode; hence, increasing the interfacial shear strength of the epoxy resin adhesive can significantly enhance the load-bearing capacity of the structure.4.
4. Three-stage failure behavior is common in end fittings consisting of low-strength tensile armor wire. It is characterized by obvious plastic deformation of the wire and three stages in the load-displacement curve: an elastic bonding stage, a debonding stage, and a final failure stage. Steel wire failure is the dominant failure mode rather than interface debonding. Improving the yield strength of the wire can effectively increase the structural load-bearing capacity; however, increasing the interfacial shear strength of the epoxy resin adhesive has little or no effect on the failure load, although it can increase the failure displacement.
In conclusion, this paper investigates the failure behavior of the flexible pipe end fitting anchorage system using numerical and experimental methods, introducing a cohesive zone model to evaluate the interfacial debonding process between the tensile armor steel wire and the epoxy resin during loading, and analyzes the significant influence of material selection on the failure behavior. The findings on the progressive failure behavior of end fitting anchorage systems can be applied to the design and development of anchorage systems for ultra-deep-water flexible pipe end fittings.
Figure 1. Floating production system and flexible pipe.
3. On the basis of the material properties gathered in step 2, use a CZM-based numerical model to investigate the progressive failure behavior of the anchorage system under an axial tension load. The failure process is described in terms of the time-dependent evolution of damage parameters and stress along the steel wire.
4. Perform model tests and compare the acquired load-displacement curves with the numerical results in order to determine the validity of the numerical model developed in this research.
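Cohesive zone models of the kind used in step 3 are typically defined by a traction-separation law; a common choice is the bilinear law sketched below. The stiffness, strength, and fracture-energy values here are placeholder assumptions, not the calibrated Araldite 2015 parameters from Table 1.

```python
# Minimal bilinear traction-separation law, the usual ingredient of a
# cohesive zone model (CZM). Parameter values are illustrative assumptions.

def bilinear_traction(separation, k0=1.0e4, t_max=15.0, g_c=0.5):
    """Shear traction (MPa) vs. separation (mm) for a bilinear cohesive law.

    k0    : initial (penalty) stiffness, MPa/mm
    t_max : peak traction = interfacial shear strength, MPa
    g_c   : fracture energy, N/mm (area under the curve)
    """
    d0 = t_max / k0                 # separation at damage initiation
    df = 2.0 * g_c / t_max          # separation at complete failure
    if separation <= d0:            # elastic (undamaged) branch
        return k0 * separation
    if separation >= df:            # fully debonded, no load transfer
        return 0.0
    # linear softening branch between damage initiation and failure
    return t_max * (df - separation) / (df - d0)

if __name__ == "__main__":
    for d in (0.0005, 0.0015, 0.02, 0.05, 0.07):
        print(f"separation {d:.4f} mm -> traction {bilinear_traction(d):6.2f} MPa")
```

Raising `t_max` in this law corresponds to the higher-shear-strength adhesives studied in the paper, while `g_c` controls how much separation the interface tolerates before complete debonding.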
Figure 4. End fitting model and simplification.
Figure 5. Single-lap shear test; (a) Experiment setup and sample dimension; (b) Load-displacement curve in SLP test.
Figure 7. FE model; (a) Model and boundary condition; (b) Model geometry.
Figure 8. Mesh and element type of FE model.
Figure 9. Experiment setup of model test.
Figure 10. Load-displacement curve of the anchorage model.
Figure 11. Steel wire Mises stress distribution; (a) Zero position; (b) Stress distribution at different displacements.
Figure 12. Failure process of the anchorage system.
Figure 13. Load-displacement curves of anchorage systems consisting of different steel wires.
Figure 15. Failure displacement of different anchorage systems.
Figure 16 illustrates the failure loads for the different material combinations. Two-stage failure is found to have a larger failure load than three-stage failure. For Steels A and B, an increase in the interfacial shear strength does not significantly improve the structural load-bearing capacity. Nevertheless, increases in shear strength of 20%, 30%, and 40% enhance the structural failure loads by 11%, 16%, and 22%, respectively, for Steels C, D, and E.
Figure 16. Failure load of different anchorage systems.
Table 1. Properties of the adhesive Araldite 2015 for CZM modelling.
Table 2. Material properties of the different steel materials.
Table 3. Shear strength of different adhesives.
| 2023-01-12T17:15:23.979Z | 2023-01-05T00:00:00.000 | {
"year": 2023,
"sha1": "9ab6642951b16c7b53a350e5498090d1c9519a38",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-1312/11/1/116/pdf?version=1672920642",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fe1bd29a7f73094158b68f55a1599c0bf2fce66b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
49294744 | pes2o/s2orc | v3-fos-license | Adenosine A1 receptor: A neuroprotective target in light induced retinal degeneration
Light induced retinal degeneration (LIRD) is a useful model that resembles human retinal degenerative diseases. The modulation of adenosine A1 receptor is neuroprotective in different models of retinal injury. The aim of this work was to evaluate the potential neuroprotective effect of the modulation of A1 receptor in LIRD. The eyes of rats intravitreally injected with N6-cyclopentyladenosine (CPA), an A1 agonist, which were later subjected to continuous illumination (CI) for 24 h, showed retinas with a lower number of apoptotic nuclei and a decrease of Glial Fibrillary Acidic Protein (GFAP) immunoreactive area than controls. Lower levels of activated Caspase 3 and GFAP were demonstrated by Western Blot (WB) in treated animals. Also a decrease of iNOS, TNFα and GFAP mRNA was demonstrated by RT-PCR. A decrease of Iba 1+/MHC-II+ reactive microglial cells was shown by immunohistochemistry. Electroretinograms (ERG) showed higher amplitudes of a-wave, b-wave and oscillatory potentials after CI compared to controls. Conversely, the eyes of rats intravitreally injected with dipropylcyclopentylxanthine (DPCPX), an A1 antagonist, and subjected to CI for 24 h, showed retinas with a higher number of apoptotic nuclei and an increase of GFAP immunoreactive area compared to controls. Also, higher levels of activated Caspase 3 and GFAP were demonstrated by Western Blot. The mRNA levels of iNOS, nNOS and inflammatory cytokines (IL-1β and TNFα) were not modified by DPCPX treatment. An increase of Iba 1+/MHC-II+ reactive microglial cells was shown by immunohistochemistry. ERG showed that the amplitudes of a-wave, b-wave, and oscillatory potentials after CI were similar to control values. A single pharmacological intervention prior illumination stress was able to swing retinal fate in opposite directions: CPA was neuroprotective, while DPCPX worsened retinal damage. In summary, A1 receptor agonism is a plausible neuroprotective strategy in LIRD.
Introduction
Human retinal degenerative diseases are important disabling conditions. Among them, age-related macular degeneration (AMD) is the first cause of acquired blindness in developed countries [1]. In the US, the prevalence of AMD is similar to that of all invasive cancers combined and more than double the prevalence of Alzheimer's disease [2]. The treatment of advanced neovascular AMD (the "wet" variant) consists mainly of the use of monoclonal antibodies against vascular endothelial growth factor (VEGF), but the "dry" variant of AMD has no reliable treatment yet. Current treatments for dry AMD slow down or prevent additional vision loss to some extent, but they do not restore lost vision. The majority of patients require indefinite treatment or demonstrate disease progression despite therapy [2]. A meta-analysis shows that 20-25% of unilateral AMD cases, and up to 50% of unilateral late AMD cases, progress to bilateral within 5 years [3]. This evidence shows the importance of exploring other pharmacological tools to deal with retinal degenerative diseases. Recent articles have also shown the neuroprotective effect of peptides such as pituitary adenylate cyclase-activating peptide (PACAP) and the octapeptide NAP, derived from activity-dependent neuroprotective protein (ADNP), in rat diabetic retinopathy, which counteract the up-regulation of VEGF [4,5].
Animal models of retinal degenerative diseases must be employed to test potential pharmacological treatments. Light induced retinal degeneration (LIRD) has been widely used to study degenerative diseases of the retina [6][7][8][9][10][11][12]. The main hallmarks of the LIRD model are similar to some of those detected in human AMD, juvenile macular degeneration or retinitis pigmentosa. The degenerative process starts in the outer retina as continuous illumination (CI) produces photoreceptor (PH) degeneration, apoptosis in the outer nuclear layer (ONL), increased phagocytosis by the retinal pigment epithelium (RPE) and synaptic degeneration in the outer plexiform layer (OPL) [7,[13][14][15][16][17]. Conversely, in other degenerative diseases such as diabetic retinopathy, retinopathy of prematurity, glaucoma, and ischemia, degeneration starts in the inner retina and affects primarily to inner nuclear layer, ganglion cell layer and optic nerves [18][19][20].
In our hands, treating albino rats (Sprague Dawley) with white light (12 klux) produces a peak of NO after one day of continuous illumination [10], an increase of glucocorticoids, a great number of apoptotic nuclei in the outer nuclear layer after 2 days of continuous illumination [7], and the complete loss of photoreceptors after 7 days of continuous illumination [17].
Adenosine is a non-classical transmitter found in the extracellular space as a consequence of ATP breakdown by ectonucleosidases or through translocation by membrane nucleoside transporters. Adenosine binds to G protein coupled receptors belonging to the P1 family of receptors known as A1, A2A, A2B and A3 receptors [21,22]. Different autoradiographic and in situ hybridization studies have shown the localization of adenosine receptors in the retina of rabbits, mice, rats, monkeys, and humans [23][24][25][26].
In recent years, the modulation of adenosine receptors has emerged as a potential neuroprotective strategy to treat a wide range of insults and degenerative diseases of the CNS [27]. A1 receptor agonists have been reported to be neuroprotective in animal models of epilepsy and of inflammatory, hypoxic, and degenerative diseases of the CNS [28-30]. In humans with Alzheimer's disease, A1R expression rises and is associated with the number of amyloid plaques and Tau phosphorylation. It has been suggested that adenosine could slow down the progression of Alzheimer's disease [31].
Adenosine release is an important component of the ischemic/hypoxic insult to the retina [32,33], where it probably produces hyperemia that protects neurons from glutamate toxicity [34]. The neuroprotective role of adenosine after ischemic injury of the retina is mediated via A1R and/or A2R [35]. Furthermore, recent works have demonstrated the neuroprotective role of A2A receptor antagonists against damage induced by retinal ischemia, both in animal models of ischemia-reperfusion and in primary microglial cultures submitted to elevated hydrostatic pressure [36,37]. Although there is extensive knowledge about the neuroprotective role of adenosine in different models of retinal degeneration, including ischemic and diabetic retinopathy [38], little is known about the role of adenosine in degenerative diseases of the outer retina.
In order to improve our knowledge of the processes underlying light induced retinal degeneration, and as a first step to assess new potential therapeutic targets, the role of A1R in the degenerative process was studied by modulating its activity with an A1R agonist (cyclopentyladenosine, CPA) or an A1R antagonist (dipropylcyclopentylxanthine, DPCPX) in the LIRD model. The effects of these drugs were studied by terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) and activated Caspase-3 Western blotting (WB), and their effects on glial reactivity were determined by glial fibrillary acidic protein (GFAP) immunohistochemistry, Western blot and qRT-PCR. Changes in microglia were studied by Iba 1 (ionized calcium binding adaptor molecule 1) and major histocompatibility complex class II (MHC-II) immunohistochemistry. The effects of these drugs on retinal physiology were determined by electroretinography (ERG). In order to explore the mechanisms involved in the A1R modulatory effect on light induced retinal degeneration, the expression of inflammatory cytokine, iNOS, and nNOS genes was examined by qRT-PCR.
Experimental design
Male Sprague Dawley albino rats were intravitreally injected with either cyclopentyladenosine (CPA), an A1R agonist, or dipropylcyclopentylxanthine (DPCPX), an A1R antagonist. While the right eyes received the mentioned drugs, the left eyes received vehicle (CPA vehicle: 0.9% NaCl w/v in water; DPCPX vehicle: 0.3% DMSO v/v diluted in 0.9% NaCl w/v in water) and served as controls. One hour after the intravitreal injections, rats were continuously illuminated for 1 day (12,000 lux). The retinas were then processed for GFAP immunohistochemistry (IHC), TUNEL or Western blotting (WB). Electroretinograms (ERG) were performed prior to the intravitreal injections and again a week after continuous illumination (Fig 1).
Intravitreal injections (volume: 5 μl) were performed using a Hamilton syringe (Reno, NV, USA) and a 30-gauge needle. The right eyes received the studied drugs (either cyclopentyladenosine (CPA), an A1R agonist, or dipropylcyclopentylxanthine (DPCPX), an A1R antagonist), while the left eyes received vehicle and served as controls (CTL). The final vitreal concentrations achieved were 0.775 mM for CPA and 0.01 mM for DPCPX. Doses were selected based on previous scientific reports [39,40] and taking into account that the vitreous volume of the rat eye is 13.36 ± 0.64 μl [41]. The total amounts per eye of CPA and DPCPX injected were 10.35 nanomoles and 0.13 nanomoles, respectively. To promote correct healing, an ocular re-epithelization ointment (Oftalday®, Holliday-Scott SA, Beccar, Buenos Aires, Argentina) was applied after the injection. After recovery from the procedure, the animals were exposed to 1 day of CI.
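The reported amounts are consistent with a simple amount = concentration x vitreous volume calculation, as the quick check below verifies (Python is used here purely as a calculator; the only inputs are the values stated in the text).

```python
# Cross-check of the reported intravitreal amounts:
# amount (nmol) = target vitreal concentration (mM) x vitreous volume (uL),
# since 1 mM x 1 uL = 1 nmol.

VITREOUS_UL = 13.36  # rat vitreous volume reported in the text, uL

for drug, conc_mM in (("CPA", 0.775), ("DPCPX", 0.01)):
    nmol = conc_mM * VITREOUS_UL
    print(f"{drug}: {conc_mM} mM x {VITREOUS_UL} uL = {nmol:.2f} nmol")

# CPA:   0.775 * 13.36 = 10.35 nmol  (matches the stated 10.35 nmol)
# DPCPX: 0.010 * 13.36 =  0.13 nmol  (matches the stated 0.13 nmol)
```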
Continuous illumination procedure
One hour after the intravitreal injections, rats were continuously illuminated for 1 day. Groups of 3 to 5 rats were simultaneously placed in an open white acrylic box of 60 cm x 60 cm x 60 cm with 12 halogen lamps (12 V, 50 W each) located on top. The lighting level (12,000 lux) was determined using a digital illuminance meter. Temperature was maintained at 21 °C. This was repeated to obtain at least 8 animals for IHC, 4 animals for Western blot procedures, 5 animals for ERG and 5 animals for qRT-PCR. IHC and Western blot were performed immediately after CI. Animals used for ERG studies received a basal ERG prior to the intravitreal injections and a second ERG (follow up) a week after CI (Fig 1). All animals were offered food and water ad libitum.
Electroretinography
After overnight adaptation, rats (5 animals per drug treatment) were anesthetized under dim red illumination with Ketamine (40 mg/kg; Ketamina 50®, Holliday-Scott SA, Beccar, Argentina) and Xylazine (5 mg/kg; Kensol®, Laboratorios König SA, Buenos Aires, Argentina). An ophthalmic solution of phenylephrine hydrochloride 5% and tropicamide 0.5% (Fotorretin®, Laboratorios Poen, CABA, Argentina) was used to dilate the pupils. Rats were placed facing the stimulus at a distance of 25 cm in a highly reflective environment. A reference electrode was placed in the ear, a grounding electrode was attached to the tail, and a gold electrode was placed in contact with the central cornea. Recordings were made from both eyes simultaneously.
[Displaced Fig 1 caption: After recovery, animals were continuously illuminated for one day (12,000 lux). One group was sacrificed right after the end of CI and processed for IHC, TUNEL or WB; a second group, tested with a basal ERG, was left to recover for a week after CI, when a follow-up ERG was performed.]
Scotopic electroretinograms (ERGs): 20 responses to flashes of unattenuated white light (1 ms, 1 Hz) from a photic stimulator (light-emitting diodes) set at maximum brightness were recorded with an Akonic BIO-PC electroretinograph (Buenos Aires, Argentina). The registered response was amplified, filtered (1.5-Hz low-pass filter, 500-Hz high-pass filter, notch activated) and the data were averaged. The a-wave was measured as the difference in amplitude between the recording at onset and the trough of the negative deflection, while the b-wave amplitude was measured from the trough of the a-wave to the peak of the b-wave. Mean values from each eye were averaged, and the resultant mean value was used to compute the group mean a- and b-wave amplitudes ± SD.
Oscillatory potentials (OPs): Briefly, the same photic stimulator was used with filters of high (300 Hz) and low (100 Hz) frequency. The amplitudes of the OPs were estimated by using the peak-to-trough method [42]. The sum of four OPs was used for statistical analysis.
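A minimal sketch of how a- and b-wave amplitudes, as defined above, can be extracted from an averaged trace; the OPs are handled analogously (peak-to-trough of each filtered wavelet, then summed). The trace below is synthetic and the window lengths are assumptions; the actual recordings came from the Akonic system.

```python
import numpy as np

def erg_amplitudes(trace, stim_index=0, a_window=50, b_window=150):
    """Compute a- and b-wave amplitudes (same units as the trace).

    a-wave: baseline at stimulus onset minus the trough of the initial
            negative deflection.
    b-wave: trough of the a-wave to the subsequent positive peak.
    Window lengths (in samples) are assumptions for illustration.
    """
    baseline = trace[stim_index]
    a_seg = trace[stim_index:stim_index + a_window]
    a_trough_idx = stim_index + int(np.argmin(a_seg))
    a_amp = baseline - trace[a_trough_idx]

    b_seg = trace[a_trough_idx:a_trough_idx + b_window]
    b_amp = float(np.max(b_seg)) - trace[a_trough_idx]
    return a_amp, b_amp

# Synthetic example trace: negative a-wave followed by a larger b-wave.
t = np.linspace(0, 0.25, 250)
trace = -12 * np.exp(-((t - 0.02) / 0.01) ** 2) + 90 * np.exp(-((t - 0.08) / 0.03) ** 2)
a, b = erg_amplitudes(trace)
print(f"a-wave ~ {a:.1f} uV, b-wave ~ {b:.1f} uV")
```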
Membranes were incubated overnight at 4 °C with either a rabbit polyclonal antibody to GFAP (DAKO Inc., CA, USA; dilution 1:500) or a rabbit polyclonal antibody to activated Caspase 3 (Sigma Chemical Co., MO, USA; dilution 1:100). To test for protein loading accuracy, a monoclonal anti-β-actin antibody (Sigma Chemical Co., MO, USA; dilution 1:1000) was used on the same membranes. To visualize immunoreactivity, membranes were incubated with Amersham ECL Rabbit IgG, HRP-linked F(ab')2 fragment (from donkey), and were developed using a chemiluminescence kit (SuperSignal West Pico Chemiluminescent Substrate, Thermo Scientific, Massachusetts, US). Membranes were exposed to X-ray blue films (Agfa HealthCare, Buenos Aires, Argentina), which were developed and then scanned with an HP Photosmart scanner. Optical density was quantified with the Image Studio Lite software (LI-COR). Relative density is expressed relative to control levels. Differences in actin load were taken into consideration in each case, and the data were mathematically corrected in order to obtain the published results. Data were statistically analysed using GraphPad software.
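The actin-load correction and normalization to control described above amount to two divisions per lane; a minimal sketch follows, where all optical-density values are invented for illustration.

```python
# Densitometry normalization as described: correct each target band by its
# actin loading control, then express it relative to the control lane.
# All optical-density values below are invented for illustration.

def relative_density(target_od, actin_od, ctl_target_od, ctl_actin_od):
    corrected = target_od / actin_od            # loading-corrected density
    ctl_corrected = ctl_target_od / ctl_actin_od
    return corrected / ctl_corrected            # control lane becomes 1.0

# Hypothetical GFAP lane from a treated eye vs. its contralateral control:
print(relative_density(target_od=820, actin_od=1010,
                       ctl_target_od=1180, ctl_actin_od=980))  # ~0.67
```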
Tissue processing for immunohistochemistry and TUNEL assay
Animals were deeply anaesthetized by intraperitoneal injection of Ketamine (40 mg/kg; Ketamina 50®, Holliday-Scott SA, Beccar, Argentina) and Xylazine (5 mg/kg; Kensol®, Laboratorios König SA, Buenos Aires, Argentina) and their eyes were removed; the cornea and lens were cut off, and the remaining cup-shaped tissues were fixed by immersion in a solution containing 4% paraformaldehyde in 0.1 M phosphate buffer for 24 h. Eyes were embedded in gelatine, cryoprotected by immersion in a solution containing 30% sucrose in 0.1 M phosphate buffer, and then frozen. The frozen eyes were cut along a vertical meridional plane using a Lauda Leitz cryostat, and sections (thickness: 20 μm) were mounted on gelatine-coated glass slides and processed by immunoperoxidase, immunofluorescence or TUNEL techniques.
Immunoperoxidase technique
In order to inhibit endogenous peroxidase activity, sections were incubated in methanol containing 3% hydrogen peroxide for 30 min. After washing in phosphate buffered saline (PBS), pH 7.4, sections were incubated in 10% normal goat serum for 1 h. Sections were then incubated overnight with a previously characterized GFAP polyclonal primary antibody (Dako, USA, dilution 1:500). The following day, sections were incubated in biotinylated goat anti-rabbit antibody (Sigma Chemical Co., MO, USA; dilution 1:500). Following this, sections were incubated in ExtrAvidin-Peroxidase® complex (Sigma Chemical Co., MO, USA; dilution 1:500). All antisera were diluted in phosphate-buffered saline (PBS) containing 0.2% Triton X-100 and, in all but the peroxidase complex, 1% normal goat serum. Incubations in primary antibody were performed overnight at 4 °C, while incubations in biotinylated antibody and ExtrAvidin-Peroxidase® complex were performed at room temperature (RT) for 1 h. Controls were performed by omitting the primary antibodies. Development was performed using the DAB/nickel intensification procedure [43].
Double labelling technique
Some sections were incubated overnight with a mixture containing a polyclonal rabbit antibody to A1R (Santa Cruz Biotech. Inc., USA, dilution 1:50) and a mouse monoclonal antibody to Iba 1 (Santa Cruz Biotech. Inc., USA, dilution 1:50). Other sections were incubated overnight with a mixture containing a mouse monoclonal to major histocompatibility complex class II (MHC-II) (Santa Cruz Biotech. Inc., USA, dilution 1:50) and a rabbit polyclonal antibody to Iba 1 (Invitrogen USA, dilution 1:50).
In both cases sections were later incubated in a mixture of goat anti-rabbit antibody conjugated to Alexa Fluor® 488 (Abcam, dilution 1:50) and goat anti-mouse antibody conjugated to Alexa Fluor® 555 (Abcam, dilution 1:50) at RT for 1 h. Finally, sections were counterstained with Hoechst 33258 (Sigma Chemical Co., MO, USA) and were observed using an Olympus IX-83 inverted microscope. Negative controls were performed in parallel by omitting the primary antibodies, and their photographs were added to S1 Fig.
Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay
Cryostat sections were processed using the ApopTag® Peroxidase In Situ kit (Millipore, USA). Briefly, sections were washed in PBS and post-fixed in ethanol:acetic acid (2:1) at -20 °C. After washing in PBS, the endogenous peroxidase was quenched with 3% hydrogen peroxide solution at RT. After rinsing with distilled water and equilibration buffer, sections were incubated with terminal deoxynucleotidyl transferase for 1 hour at 37 °C. The reaction was stopped with the supplied buffer and the sections were incubated with anti-digoxigenin conjugate for 30 minutes at RT. Finally, sections were developed using the DAB/nickel intensification procedure and were counterstained with eosin.
Image analysis of TUNEL, GFAP immunoperoxidase sections and single or double labeled microglial cells
Six retinal sections of both eyes from each experimental group were analyzed (CPA, n = 8; DPCPX, n = 8). Care was taken to select anatomically matched areas of retina among animals before the assays. Slides were analysed using a Zeiss Axiophot microscope attached to a video camera (Olympus Q5). Images were taken using Q capture software. To avoid external variations, all images were taken on the same day and under the same light conditions.
The following parameters were measured, blind to treatment, on 8-bit images, using the Fiji software (NIH, Research Services Branch, NIMH, Bethesda, MD): GFAP-positive area: images of drug-treated and control retinas were randomly selected. The immunoreactive area of the whole section was thresholded. The region of interest (ROI) was the retinal surface between the two limiting membranes, where Müller cells extend their processes. The GFAP-positive area was calculated as the percentage of the ROI immunostained for GFAP.
TUNEL-positive nuclei/1000 μm²: Images of drug-treated and control retinas were randomly selected and thresholded. As regions of interest (ROI), frames of 1000 μm² were randomly placed on the outer nuclear layer of treated and control retinas. The "Analyze Particles" function of Fiji was used [44] and the TUNEL-positive nuclei/1000 μm² ratio was then obtained in each ROI.
Iba 1-positive microglial cells/10,000 μm²: Images of drug-treated and control retinas were randomly selected and thresholded. As regions of interest (ROI), frames of 10,000 μm² were randomly placed on treated and control retinas. The Iba 1-positive microglial cells/10,000 μm² ratios were obtained in each ROI.
Iba 1+/MHC-II+ microglial cells: Images of drug-treated and control retinas were quantified. The number of activated microglia (double labelled as Iba 1+ and MHC-II+) was expressed as the percentage of the total number of Iba 1-positive cells per retinal section.
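The same quantities can be reproduced with a few array operations; the sketch below uses NumPy/scikit-image rather than the Fiji workflow actually used, and the thresholds, pixel scale, and toy image are assumptions for illustration.

```python
# Sketch of the image quantifications described above, using NumPy/scikit-image
# in place of Fiji; thresholds and the example array are assumptions.

import numpy as np
from skimage import measure

def gfap_positive_percent(image, roi_mask, threshold):
    """Percent of the ROI whose intensity exceeds the threshold."""
    positive = (image > threshold) & roi_mask
    return 100.0 * positive.sum() / roi_mask.sum()

def nuclei_per_area(binary_mask, ref_area_um2, um2_per_pixel):
    """Count labelled objects (e.g. TUNEL+ nuclei) per reference area."""
    n = int(measure.label(binary_mask).max())        # connected components
    frame_area_um2 = binary_mask.size * um2_per_pixel
    return n * ref_area_um2 / frame_area_um2

# Toy 8-bit image with two bright "nuclei" in a 100 x 100 px frame:
img = np.zeros((100, 100), dtype=np.uint8)
img[10:14, 10:14] = 200
img[50:54, 70:74] = 220
mask = img > 128
print(gfap_positive_percent(img, np.ones_like(img, dtype=bool), 128))  # 0.32 %
print(nuclei_per_area(mask, ref_area_um2=1000.0, um2_per_pixel=0.25))  # 0.8 per 1000 um^2
```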
RNA isolation and quantitative reverse transcription polymerase chain reaction (qRT-PCR)
Unilluminated rats (basal control), rats submitted to 1 day of CI, and CPA- and DPCPX-treated rats (n = 5 per group) submitted to one day of continuous illumination were deeply anaesthetized by intraperitoneal injection of Ketamine (40 mg/kg; Ketamina 50®, Holliday-Scott SA, Beccar, Argentina) and Xylazine (5 mg/kg; Kensol®, Laboratorios König SA, Buenos Aires, Argentina), and their retinas were dissected out. In the case of drug-treated rats, the right eyes received the studied drugs (either CPA or DPCPX), while the left eyes received vehicle and served as controls (CTL). Additional controls were included: non-illuminated control rats to evaluate basal gene expression levels, and non-treated rats (CTL) exposed to CI (CI 1d) in order to evaluate the effect of damage (n = 6 per group). Tissues were homogenized with TRIzol (Invitrogen, Madrid, Spain) and RNA was isolated with the RNeasy Mini kit (Qiagen, Germantown, MD). Three μg of total RNA were treated with 0.5 μl DNAse I (Invitrogen) and reverse-transcribed into first-strand cDNA using random primers and the SuperScript III kit (Invitrogen). Reverse transcriptase was omitted in control reactions, where the absence of PCR-amplified DNA confirmed the lack of contamination from genomic DNA. The resulting cDNA was mixed with SYBR Green PCR master mix (Invitrogen) for qRT-PCR using 0.3 μM forward and reverse oligonucleotide primers. Quantitative measurements were performed using a 7300 Real Time PCR System (Applied Biosystems, Carlsbad, CA). Cycling conditions were an initial denaturation at 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 seconds and 60 °C for 1 minute. At the end, a dissociation curve was run from 60 to 95 °C to validate amplicon specificity. Gene expression was calculated using absolute quantification by interpolation into a standard curve. All values were divided by the expression of the housekeeping gene 18S.
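Absolute quantification by interpolation into a standard curve, followed by 18S normalization, can be expressed compactly; in the sketch below the dilution series and Ct values are fabricated for illustration and do not come from the study.

```python
# Absolute qPCR quantification: fit a standard curve (Ct vs. log10 quantity)
# from serial dilutions, interpolate sample Ct values, normalize to 18S.
# All Ct values below are invented for illustration.

import numpy as np

def fit_standard_curve(log10_qty, ct):
    slope, intercept = np.polyfit(log10_qty, ct, 1)   # Ct = slope*log10(Q) + b
    return slope, intercept

def quantity_from_ct(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series (an ideal assay has slope ~ -3.32):
logq = np.array([6, 5, 4, 3, 2], dtype=float)
cts = np.array([14.1, 17.5, 20.8, 24.2, 27.5])
m, b = fit_standard_curve(logq, cts)

target_q = quantity_from_ct(23.0, m, b)   # e.g. iNOS Ct in one retina
ref_q = quantity_from_ct(12.0, m, b)      # 18S Ct in the same sample
print(f"normalized expression = {target_q / ref_q:.3e}")
```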
Statistical analysis
The data from the GFAP immunohistochemistry and TUNEL studies of CPA-treated rats (n = 8) and DPCPX-treated rats (n = 8) were obtained by image analysis as described above. The normality of the data distribution was evaluated using the D'Agostino, KS, Shapiro-Wilk and F tests. In every case, a Gaussian distribution was confirmed. Data were then analysed using the unpaired parametric Student's t-test included in the GraphPad software (GraphPad Software, San Diego, CA). Values are expressed as mean ± standard deviation. In the case of Iba 1 immunohistochemistry (IHC) (n = 4 per group), WB (n = 4 per group), ERG (n = 5 per group) and RT-PCR (n = 5 for CPA and DPCPX; n = 6 for CTL and CI 1d), the data distribution was analysed in the same way, and at least one of the tests used confirmed a Gaussian distribution, validating the use of Student's t-test. Values are expressed as mean ± standard deviation. Differences were considered significant when p < 0.05.
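A sketch of the testing pipeline described above (normality check, then unpaired two-tailed t-test), using SciPy in place of GraphPad; the two samples are random placeholders drawn around the group statistics reported later, not the study's actual measurements.

```python
# Normality check followed by an unpaired Student's t-test, mirroring the
# GraphPad workflow described above; the two samples are random placeholders.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treated = rng.normal(1.45, 0.74, size=8)   # e.g. TUNEL+ nuclei / 1000 um^2
control = rng.normal(4.25, 1.38, size=8)

for name, sample in (("treated", treated), ("control", control)):
    w, p = stats.shapiro(sample)           # Shapiro-Wilk normality test
    print(f"{name}: Shapiro-Wilk p = {p:.3f} (p > 0.05 -> Gaussian assumption OK)")

t, p = stats.ttest_ind(treated, control)   # unpaired, two-tailed
print(f"t = {t:.2f}, p = {p:.4f} (significant if p < 0.05)")
```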
CPA decreases apoptotic cell death, glial reactivity and Iba 1 + /MHC-II + microglial cells
No TUNEL-positive nuclei were found in control eyes before illumination, but after 1 day of CI, apoptotic nuclei were found in the retinal outer nuclear layer (ONL) under both experimental conditions (CPA and control). However, CPA-treated retinas presented a lower number of TUNEL-positive nuclei in the outer nuclear layer than control animals (Fig 2A and 2B). Quantification by image analysis showed an average of 1.454 ± 0.737 apoptotic nuclei per 1000 μm² in the outer nuclear layer of CPA-treated retinas vs 4.25 ± 1.379 apoptotic nuclei per 1000 μm² in the outer nuclear layer of control retinas. The difference was significant using an unpaired Student's t-test (p < 0.001; n = 8) (Fig 2G).
Before illumination, GFAP immunoreactivity was restricted to the end feet of Müller cells close to the inner limiting membrane. After illumination, GFAP immunoreactivity increased in Müller cell processes across the whole retina, and strong staining was observed in the end feet close to the inner limiting membrane under both conditions. However, in animals treated with CPA, Müller cell processes were thinner and the GFAP immunoreactivity of the end feet was weaker compared with control, indicating lower levels of glial activation (Fig 2C and 2D). In fact, image analysis quantification showed a significant decrease of the GFAP-positive area in CPA-treated retinas (13.02 ± 10.67%) vs control retinas (33.32 ± 15.23%) (unpaired Student's t-test; p < 0.01; n = 8) (Fig 2H).
CPA-treated retinas showed a significant decrease in the number of Iba 1-positive microglial cells (Fig 2E and 2F). Image analysis quantification showed that the decrease was significant (CPA: 1.28 ± 0.155 cells/10,000 μm² vs CTL: 2.68 ± 0.61 cells/10,000 μm², p < 0.01) (Fig 2I). In both conditions, CPA and control, the double labeling technique using primary antibodies to the A1 receptor and Iba 1 showed the co-localization of the A1 receptor and Iba 1 on microglial cells (Fig 3, top and second rows, and Fig 4). In order to detect reactive microglia, the double labeling technique using primary antibodies to Iba 1 and MHC-II was performed (Fig 4, top and second rows). CPA-treated retinas showed a significant decrease in the percentage of reactive microglial cells (Iba 1+ and MHC-II+) compared to control (p < 0.05) (Fig 5).
DPCPX increases apoptotic cell death, glial reactivity and Iba 1 + /MHC-II + microglial cells
In contrast with the results observed with CPA, after the illumination procedure a higher number of TUNEL-positive nuclei was observed in the outer nuclear layer of DPCPX-treated eyes versus control (Fig 6A and 6B). Quantification by image analysis showed an average of 6.755 ± 2.337 apoptotic nuclei per 1000 μm² in the outer nuclear layer of DPCPX-treated retinas vs 3.608 ± 1.402 apoptotic nuclei per 1000 μm² in control retinas. The difference was significant using an unpaired Student's t-test (p < 0.05; n = 8) (Fig 6G).
An increase in GFAP immunoreactivity was observed in DPCPX-treated retinas compared to their controls (Fig 6C and 6D). In animals treated with DPCPX, Müller cell processes were thicker and their end feet close to the inner limiting membrane were bigger and more intensely stained than those observed in control, indicating a rise in glial activation (compare Fig 6C and 6D). In fact, image analysis quantification showed a significant increase in the percentage of GFAP-positive area in DPCPX-treated retinas (45.75 ± 16.1%) vs their respective controls (31.69 ± 10.15%) (unpaired Student's t-test; p = 0.05; n = 8) (Fig 6H).
DPCPX-treated retinas showed a significant increase in the number of Iba 1-positive microglial cells compared to controls (Fig 6E and 6F). Image analysis quantification showed that the increase was significant (DPCPX: 3.235 ± 1.356 cells/10,000 μm² vs CTL: 1.80 ± 0.89 cells/10,000 μm², p < 0.05) (Fig 6I). In both conditions, DPCPX and control (Fig 3, third and fourth rows), the double labeling technique using primary antibodies to the A1 receptor and Iba 1 showed the co-localization of the A1 receptor and Iba 1 on microglial cells. In order to assess reactive microglia, the double labeling technique using primary antibodies to Iba 1 and MHC-II was performed (Fig 4, third and fourth rows). DPCPX-treated retinas showed a highly significant increase in the percentage of reactive microglial cells (Iba 1+ and MHC-II+) compared to control (p < 0.01) (Fig 5).
Effect of CPA and DPCPX on scotopic electroretinograms and oscillatory potentials
A week after the 1-day CI exposure, control eyes showed decreases in b-wave amplitude and oscillatory potential sum compared with their respective basal values (Fig 8C and 8D; Fig 9A and 9B). However, at the same time point, CPA-treated eyes showed an increased amplitude of the a-wave, and b-wave and oscillatory potentials similar to the basal values measured before CI (Fig 8A and 8B; Fig 9A and 9B; Table 1).
In summary, continuous illumination induced an electrophysiological damage that was avoided by CPA treatment.
As mentioned above, a week after the continuous illumination exposure for 1 day, control eyes showed a decrease on the amplitude of the a-wave, b-wave (Fig 10C and 10D), and the oscillatory potentials sum (Fig 11A and 11B), compared with basal values measured before continuous illumination (Table 2).
Comparing DPCPX control eyes illuminated for 1 day with CPA control eyes illuminated for 1 day, a more pronounced decrease of the a-wave was observed, which may be a consequence of the vehicle (DMSO) used to dissolve DPCPX [45]. DPCPX-treated eyes also showed decreases of the a-wave, b-wave, and oscillatory potential sum when compared to basal values measured before continuous illumination (Fig 10A and 10B; Fig 11A and 11B; Table 2).
In summary, illumination showed a deleterious effect on retinal function which was neither worsened nor prevented by DPCPX.
Effect of CPA and DPCPX on the expression of nNOS, iNOS, IL-1β, TNFα and GFAP mRNAs
Quantitative RT-PCR showed highly significant increases of nNOS, GFAP and TNFα mRNAs in non-treated rats exposed to 1 d of CI compared to basal values (Fig 12). A significant increase of IL-1β mRNA was also detected in this group, but the method was unable to show a significant increase of iNOS. However, a significant decrease of iNOS mRNA expression was demonstrated in the retinas of CPA-treated eyes compared to control (0.6990 ± 0.4799 vs 1.322 ± 0.7427; unpaired Student's t-test, p < 0.05, n = 5), while nNOS expression did not change (Fig 12). The levels of the inflammatory cytokine TNFα also decreased significantly in the retinas of CPA-treated eyes compared to control (0.8903 ± 0.4123 vs 1.510 ± 0.6335; unpaired Student's t-test, p < 0.05, n = 5). GFAP mRNA expression was also diminished by CPA (0.7582 ± 0.2721 vs 1.17 ± 0.2728; unpaired Student's t-test, p < 0.05, n = 5). Levels of IL-1β did not change significantly (Fig 12). No significant changes were detected by qRT-PCR in any of the genes studied when comparing the retinas of DPCPX-treated eyes with their controls (Fig 12).
Discussion
In the present work, we studied the effect of the intravitreal administration of an A1R agonist (CPA) and an A1R antagonist (DPCPX) on light induced retinal degeneration. Although a less invasive treatment could be implemented, intravitreal administration ensured achieving the intended drug concentration in the retinal tissue, as published [39,40]. In patients suffering from the wet variant of AMD, intravitreal injection is the common way of administering the anti-VEGF treatment.
In our study, the decrease of TUNEL staining in the outer nuclear layer induced by CPA treatment clearly shows a neuroprotective role for A1 receptor agonists on photoreceptors. Neuroprotection is further confirmed by the Western blot analysis, which shows a decrease of activated Caspase 3 levels. In addition, the results show a decrease of Müller cell activation, as GFAP diminishes by qRT-PCR (mRNA), immunohistochemistry and Western blot, providing further evidence of a neuroprotective action through avoidance of glial reactivity. This effect may also be regarded as part of an anti-inflammatory action. In fact, qRT-PCR results showed a significant diminution of the inflammatory cytokine TNFα and of iNOS.
So, the administration of an A1 agonist shows a neuroprotective effect through mechanisms that prevented photoreceptor apoptotic cell death, a reduction of the microglial response, demonstrated by a reduction in iNOS and TNFα mRNA expression, and a decrease of glial reactivity, as demonstrated by GFAP immunoreactivity, Western blot and qRT-PCR. In order to confirm that CPA induced a reduction of microglial reactivity, retinas were stained with Iba 1 (ionized calcium binding adaptor molecule 1). Iba 1 is a microglial and macrophage-specific calcium-binding protein involved in the reorganization of the actin cytoskeleton through the Rac signaling pathway [46]. Iba 1 is involved in membrane ruffling and phagocytosis in activated microglia [47] and was previously used as a marker of reactive microglia after transient focal cerebral ischemia [48]. Our results showed a significant reduction of the Iba 1+ microglial cell population in CPA-treated retinas, while, on the contrary, DPCPX induced a highly significant increase of Iba 1+ microglial cells. Double labeling experiments showed the co-existence of A1R and Iba 1, demonstrating the direct effect of the agonists on microglial cells. As major histocompatibility complex class II (MHC-II) is expressed by reactive microglia, double labelling with Iba 1 and MHC-II was used to identify activated cells.
[Displaced Western blot figure legend: from top to bottom, bands correspond to GFAP, Actin and C3a. (C) CPA produced a highly significant decrease of GFAP relative density compared to CTL (0.652 ± 0.117 vs 0.993 ± 0.1329; unpaired t-test; p < 0.01; n = 4), ** p < 0.01. (D) DPCPX produced a significant rise in GFAP relative density compared to CTL (3.785 ± 2.515 vs 1.00 ± 0.108; unpaired Student's t-test; p < 0.05; n = 4), * p < 0.05. (E) CPA produced a highly significant decrease of C3a relative density compared to CTL (0.6527 ± 0.03 vs 0.996 ± 0.04; unpaired Student's t-test; p = 0.001; n = 4), ** p < 0.01. (F) DPCPX produced a highly significant rise in C3a relative density compared to CTL (1.85 ± 0.5 vs 1.01 ± 0.07; unpaired Student's t-test; p < 0.01; n = 4), ** p < 0.01.]
[Displaced ERG figure legend: (A) Basal ERG response of a CPA-treated eye. (B) ERG response a week after CI of a CPA-treated eye; note a small increase of a-wave amplitude and the preservation of b-wave amplitude compared to the basal ERG (A). (C) Basal ERG response of a CTL eye. (D) ERG response a week after CI of a CTL eye; note a decrease of both a-wave and b-wave amplitudes. (E) Quantification of a-wave amplitude a week after injection and 1 d of CI: a significantly higher a-wave amplitude was detected in CPA-treated eyes compared to CTL eyes (14.07 ± 3.56 μV vs 7.14 ± 0.63 μV; unpaired Student's t-test; p < 0.05; n = 5), * p < 0.05. (F) Quantification of b-wave amplitude a week after injection and 1 d of CI: a significantly higher b-wave amplitude was detected in CPA-treated eyes compared to CTL eyes (106 ± 57.9 μV vs 60.11 ± 37.37 μV; unpaired Student's t-test; p < 0.05; n = 5), * p < 0.05.]
Although microglia are involved in the inflammatory reaction in the retina, producing inflammatory cytokines such as TNFα, other sources of TNFα may be other resident activated macrophages, as well as CD4+ lymphocytes and natural killer cells, which arrive at the retinal tissue through the blood vessels. Müller cells and retinal pigmented cells have also been reported to produce TNFα in autoimmune uveoretinitis [51], so these cells may also contribute to the inflammatory response and their role cannot be ruled out.
The changes in ERG response support the idea that A1 modulation impacts not only photoreceptor survival but also the functionality of the photoreceptors themselves and of other inner retinal cell types (mainly bipolar and ganglion cells), as the a-wave, b-wave, and oscillatory potentials were protected by CPA pretreatment. On the contrary, DPCPX, an A1R antagonist, worsened the biochemical parameters and two of the studied morphological parameters (apoptotic nuclei and GFAP area). In addition, the A1 antagonist DPCPX was unable to alter the gene expression of iNOS, nNOS or the inflammatory cytokines IL-1β and TNFα. It may be speculated that higher doses of DPCPX, or a longer time of exposure to the drug, might alter retinal physiology. An alternative explanation may be that the A1R antagonist DPCPX lacks an effect in the absence of an increased A1 receptor activity, which could play a part in the pathophysiology of the CI model.
The obtained results are in accordance with other reports on the role of adenosine in retinal neuroprotection mediated by A1 or A2A receptors [38,35].
However, other questions remain to be answered, such as how the changes in A1R activation are connected with the apoptosis of photoreceptors, inflammation and glial reactivity.
In the model of LIRD, the administration of an A1R agonist could protect the retina through the presynaptic inhibition of glutamate release and the modulation of NMDA receptor activity as was previously demonstrated in rat hippocampus [52].
In rod photoreceptors, the observed neuroprotective effect of CPA could be mediated by the inhibition of calcium influx, as it is known that adenosine inhibits calcium influx through L-type calcium channels [53]. The observed protective effect of CPA on photoreceptors could also be mediated by its antioxidant effect, as CPA inhibits lipid peroxidation and potentiates the antioxidant defense mechanisms (peroxidase and catalase enzymes) [54]. In addition, the activation of A1 receptors inhibits adenylate cyclase (AC) and decreases the intracellular cAMP concentration. These changes decrease cell metabolism and neuronal energy requirements, enhancing cell survival [54,55].
Adenosine transmission also acts directly on the immune response. Higher A1 activity is necessary to diminish the immune response and promote cell survival [56]. We therefore speculate that the neuroprotective role of CPA in LIRD could also be mediated through an effect on the immune response. Although the immune response is a late event in other models of retinal degeneration, our results clearly showed that CPA induced a significant decrease of the Iba 1-reactive microglial cell population, and a decrease of iNOS and TNFα mRNAs, in this model of light induced retinal degeneration. Besides, IL-1β is responsible for triggering glial reactivity [57,58], which was decreased in our model of LIRD by the treatment with CPA.
In addition, adenosine transmission works in coordination with other signalling systems that involve the production of trophic factors. A complex crosstalk between IL-6, A1R, and A2AR stimulates BDNF production and has been shown to protect retinal ganglion cells in vitro [59,60].
A cardiovascular effect could also be involved among the neuroprotective mechanisms mediated by adenosine A1 receptors, as it was demonstrated in retinal ischemic insults that adenosine induces hyperemia that protects neurons from glutamate toxicity [34].
As a consequence of our findings, a new strategy using A1 agonists could be employed to prevent retinal degeneration. Knowing that AMD starts in one eye and usually progresses to the other, the second eye could be protected after diagnosis. However, adenosine receptors can be found in most cells, widely distributed throughout the body, so an agonist will act not only on cells involved in the disease but also on cells involved in other physiological processes [61].
As adenosine receptors are present in most cells, and agonists have adverse effects including sedation, headache, vasodilation, atrioventricular block, and bronchoconstriction [62,63], therapeutic strategies should target these receptors only when and where agonists are needed [61]. In order to do this, we consider that locally administered CPA (intravitreal injection) is the best option, producing fewer collateral effects. The same concept is behind current treatments of AMD, which also use intravitreal injections of monoclonal antibodies against VEGF.
Although in our study CPA was administered preventively, before illumination, it could also be administered after illumination to treat retinal degeneration; further studies are needed to confirm whether it is useful as a therapeutic agent in that case.
The present study provides evidence that adenosine, acting through A1 receptors, is an important factor in degenerative diseases of the eye and that its modulation may be used as a neuroprotective strategy. However, the single treatment with CPA, an A1 agonist, reported here did not accomplish total prevention of retinal degeneration. Hence, repeated treatments could be considered, as well as combinations with other drugs and/or trophic factors. Although further work is needed to confirm our hypothesis, the modulation of the A1 receptor has translational value, as it could be a useful strategy to prevent the progression of AMD and other degenerative diseases in humans.
In this work we have shown that a single pharmacological intervention prior to the beginning of the photic damage was able to swing the retinal fate in opposite directions: while CPA, an A1 agonist, showed a retinal neuroprotective effect, DPCPX, an A1 antagonist, worsened many of the parameters chosen to assess damage. These results propose a protective role for A1 activation in LIRD, in accordance with other models of retinal degenerative diseases.
Furthermore, LIRD is a valid model for an acquired degenerative disease of the outer retina, since it recapitulates many features of human AMD. Adenosine, as a non-classical transmitter, exerts pleiotropic effects on the different cell types involved in inflammation, apoptotic cell death and normal neuronal function.
In summary, adenosine and the activation of the A1 receptor are promising targets to accomplish neuroprotection in LIRD and, hopefully, in retinal degenerative diseases. | 2018-06-28T00:54:56.886Z | 2018-06-18T00:00:00.000 | {
"year": 2018,
"sha1": "fbafce722833b6d5feeff9242aef42edc559cd2f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0198838&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fbafce722833b6d5feeff9242aef42edc559cd2f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
59376319 | pes2o/s2orc | v3-fos-license | Oat Hulls as Addition to High Density Panels Production
a Department of Materials Engineering, University of São Paulo – USP, Av. Trabalhador São-carlense, 400, Arnold Schimidt, CEP 13566-590, São Carlos, SP, Brazil; b Department of Mechanical Engineering, Federal University of São João del Rei – UFSJ, Praça Frei Orlando, 170, Centro, CEP 36307-352, São João del Rei, MG, Brazil; c Department of Production Engineering – SEP, University of São Paulo – USP, Av. Trabalhador São-carlense, 400, Arnold Schimidt, CEP 13566-590, São Carlos, SP, Brazil; d Department of Structural Engineering – SET, University of São Paulo – USP, Av. Trabalhador São-carlense, 400, Arnold Schimidt, CEP 13566-590, São Carlos, SP, Brazil
Introduction
Wood panel manufacturing offers the advantage of using alternative raw materials, such as agro-industrial waste. Such products have been increasing in importance, representativeness and use around the world, including in Brazil, stimulated by the economic and environmental benefits of natural and renewable raw materials.
Considering wood-based panels made from reconstituted wood, particleboard can be highlighted because it is the most consumed and produced worldwide (Brazilian Association of Wood Panels Industry - ABIPA) [3]. In Brazil, according to ABIPA [3], particleboard represents about 50% of reconstituted panel manufacture and continues to provide growth prospects for the coming years. Particleboard is commonly used in the furniture sector, mainly in the production of cupboard sides, dividers, shelves and tabletops, and also in buildings (e.g. wood floors) [4].
At the same time, companies should adopt proactive strategies to control and predict the environmental burdens of their activities, providing better results in environmental performance. In this sense, Schweinle [5] has highlighted wood panels because several relevant issues require further development, such as case studies of alternative wood panel manufacture that include agro-industrial waste as raw material. In Brazil, agro-industrial residues are available in large volumes and have significant potential for use. In particular, oats can be mentioned, a food product usually consumed in the country that generates an abundant volume of waste (oat hulls).
In this context, "green" materials were applied in this study, such as reforested wood from Eucalyptus grandis species, oat hulls (agro-industrial waste) from Avena sativa and polyurethane resin from castor oil.
The aim of this study is to evaluate physical-mechanical properties of particleboards made with Eucalyptus grandis particles and addition of oat hulls residue.
Literature Review
Brazil is the sixth largest producer of wood panels in the world, and particleboard is one of the main products (Brazilian Association of Producers of Reforested Forests - ABRAF) [6].
Chipboard or particleboard can be produced from any lignocellulosic material and can provide high mechanical strength and a predetermined specific gravity, because the lignocellulosic structure is similar to that of timber, according to Rowell et al. [7].
In turn, Brazilian agro-industrial residues are available in large scale and have significant potential for employment.According Tamanini and Hauly 8 , agro-industrial waste generation is about 250 million tons/year.Among these residues, oat hulls have great potential, especially in relation to raw materials availability.The production of oats, the country, exceeded 500,000 tons in 2011 (Brazilian Commission of Research in Oat) 9 .According to Webster 10 , about 30% of oat production refers to oat hull, a byproduct of processing oat cereal that represents approximately 150,000 tons/year.Oats hulls have been discarded during grain processing, which become a pollutant source to the environment.Thus, it is necessary, essential and appropriate to establish alternatives for its reuse.
Amino resins are synthetic and thermosetting polymers mainly used in wood-based panels manufacture and ureaformaldehyde (UF) resin is one of the major commercial products 11 , because its low cost and good technical performance.However, there are some key environmental problems of formaldehyde-based resins.Silva et al. 12 highlighted environmental impacts of UF resin life cycle for ecotoxicity and human toxicity categories.Formaldehyde emissions are potentially carcinogenic and can cause effects on human health (e.g.nausea, watery eyes, nose and throat irritation).Thus, environmental benign alternatives to UF resin are desired.For this, was used castor oil based polyurethane resin (PU) that is from natural and renewable source.
PU resin from castor oil has been an alternative binder during wood panels manufacturing, as shown by Bertolini 13 , Ferro 14 and Jesus 15 , providing satisfactory technical performance.This bicomponente adhesive was originated in 1997, in the Institute of Chemistry of São Carlos, University of São Paulo, composed of polyol, extracted from castor beans, and prepolymer (isocyanate), resulting in polyurethane, which cure at temperature about 100 °C [15] .
Quality of wood-based panels is evaluated by their physical-mechanical properties such as modulus of elasticity (MOE) and modulus of rupture (MOR) in static bending, internal bond, density, water absorption, thickness swelling etc, according Iwakiri 16 .
Density is one factor that influences panel mechanical performance, and must be as uniform as possible along panel thickness to ensure uniformity properties.Particleboards are usually produced with density range from 600 to 700 kg/ m 3 .According to Kelly 17 , a minimum amount of particles compaction is required to provide their consolidation during pressing cycle.
Iwakiri et al. 18 produced high density particleboard and the results presented a significantly improvement in physical-mechanical properties (as more dimensional stability and better mechanical resistance).Melo et al. 19 determined physical-mechanical properties of particleboard made from Eucalyptus grandis wood and rice husk and the results showed that rice husk addition provided greater dimensional stability and lower the strength of the panels.Bertolini et al. 20 produced high density particleboards and the results of density were between 880 to 970 kg/m 3 .So, variability in physical-mechanical performance of the panels produced with different type of material showed their different behaviors.
Particles production
In panel manufacturing, particles of Eucalyptus grandis (apparent density of 640 kg/m 3 ) and oat hulls (apparent density of 290 kg/m 3 ) were used. These particles were generated in a knife mill (Willye type, Marconi brand, model MA 680) using a 2.8 mm sieve opening.

Eucalyptus grandis was obtained from companies in the city and region of São Carlos - SP, while oat hulls (Avena sativa) were obtained from the industrial sector.

A particle size analysis was performed in order to determine the particle dimensions. SOLOTEST equipment was used, with sieves meeting the ASTM specifications corresponding to 7, 10, 16, 30, 40 and 50 mesh. A Marconi balance, model AS 5000C, with a sensitivity of 0.1 grams, was also used.

After particle generation, a 200 gram sample was taken of each material. These samples were subjected to vibration for ten minutes at vibration velocity 5, allowing the material to pass through the sieves in descending order of aperture. Three replicates were performed for each material. The particles that passed through the 50 mesh sieve (the sieve with the smallest opening) were considered "fines".
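As an illustration of the bookkeeping behind such a sieve analysis, the short Python sketch below computes the percentage of material retained on each sieve; the masses used are hypothetical placeholders, not the values measured in this study.

```python
# Minimal sketch of the sieve-analysis bookkeeping described above.
# The masses below are hypothetical placeholders, not measured values.
sieve_mesh = [7, 10, 16, 30, 40, 50]               # descending aperture order
retained_g = [2.0, 14.0, 70.0, 80.0, 20.0, 10.0]   # mass retained on each sieve
pan_g = 4.0                                        # "fines": passed the 50 mesh sieve

total = sum(retained_g) + pan_g                    # should be close to the 200 g sample
for mesh, mass in zip(sieve_mesh, retained_g):
    print(f"{mesh:>2} mesh: {100 * mass / total:5.1f}% retained")
print(f"fines (< 50 mesh): {100 * pan_g / total:5.1f}%")
```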
The moisture content of the particles of both materials was 9%. Figure 1 shows the particles of both materials, before and after the milling process, and the mill used.
Panels manufacturing
Particleboards with one layer (homogeneous panels) of high density were produced. In this process, a bicomponent castor oil based polyurethane resin (PU) was used, with a 1:1 ratio between prepolymer and polyol and a solids content of 100%. The 1:1 proportion was used because of the excellent performance achieved by researchers of the LaMEM (Wood and Timber Structures Laboratory) in studies using this proportion 13,21 . One of the components (polyol) is derived from vegetable oil, with a density of 1.10 g/cm 3 , and the other component (prepolymer) is a polyfunctional isocyanate with a density of 1.24 g/cm 3 , both supplied by the industrial sector. This resin was used due to the excellent performance achieved in previous studies developed in the LaMEM with wood panels 13,22,14,23 .

In each panel, 640 g of particles was used, bonded with PU resin in the proportion of 10% relative to the dry mass of the particles, in all treatments. This amount of particles per panel (640 g) was used to ensure that the panels remained high density (above 800 kg/m 3 ), considering the density of each material used.

The parameters used in the press cycle were: press pressure of 4 MPa; press time of 10 minutes; press temperature of 100 °C. These parameters, as well as the dimensions of the residues, were evaluated by Dias 21 . Figure 2 shows the panel manufacturing process.

The particles of both materials were weighed and mixed with the adhesive for approximately five minutes. The gluing machine used was a Lieme, model BP-12 SL, as shown in Figure 2b. Then, the glued particles were subjected to a light pre-press (about 0.013 MPa), performed with an in-house manufactured manual mechanical press (Figure 2c). The next step was pressing the panels in a semi-automatic Marconi press, model MA 098/50, as shown in Figure 2d. Finally, after the 72 hours necessary for full cure of the resin and for reaching moisture equilibrium with the environment, the panels produced were properly squared, with 20 mm removed from each edge.

Particleboards were divided into groups according to the different proportions of each particulate material (Eucalyptus grandis and oat hulls). Table 1 shows the factors and levels used in the design of experiments, giving rise to four experimental conditions (EC), as shown in Table 2.
Tests performed and results analysis
For each experimental condition (EC), six panels with identical particle proportions were produced, for a total of twenty-four panels with nominal dimensions of 280×280×10 mm.

From each panel, one specimen was removed for each property evaluated. The mechanical properties evaluated were modulus of elasticity (MOE) and modulus of rupture (MOR), both obtained by three-point static bending tests, and internal bond (tension perpendicular to the panel surface). The physical properties evaluated were density, water absorption, thickness swelling and compaction ratio. Specimen dimensions, as well as the physical and mechanical tests, followed ABNT NBR 14810:2006 [1] . Figure 3 shows the physical and mechanical tests performed.
The compaction ratio of the panels was calculated as the ratio of the panel density to the density of the material from which the particles originated.
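As a hedged illustration of this calculation, the Python sketch below computes the compaction ratio; for the mixed panels it assumes, as one plausible reading not stated explicitly in the text, a mass-weighted average of the two apparent densities reported earlier, and the panel density used in the example is hypothetical.

```python
# Sketch of the compaction-ratio calculation; the raw-material densities are the
# apparent densities reported in the text, and the panel density is hypothetical.
def compaction_ratio(panel_density, eucalyptus_frac, oat_frac,
                     rho_eucalyptus=640.0, rho_oat=290.0):
    """CR = panel density / density of the raw particle material.

    For mixed panels, a mass-weighted average of the two apparent densities
    is assumed here (one plausible reading, not stated in the paper).
    """
    raw_density = eucalyptus_frac * rho_eucalyptus + oat_frac * rho_oat
    return panel_density / raw_density

# e.g., a 950 kg/m^3 panel made from 70% eucalyptus / 30% oat hull particles
print(round(compaction_ratio(950.0, 0.70, 0.30), 2))
```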
Analysis of variance (ANOVA) was used to investigate the influence of the fraction of Eucalyptus grandis particles (the composition between particles of both materials) on the physical and mechanical properties of the panels produced. The significance level (α) was 5%, with the null hypothesis (H 0 ) being the equivalence between the means and the alternative hypothesis (H 1 ) their non-equivalence. A P-value greater than the significance level implies accepting H 0 , and rejecting it otherwise. To validate the ANOVA model, the Anderson-Darling normality test and Bartlett's test for the homogeneity of variances were used, both at the 5% significance level, with the null hypotheses of normality and of equivalence between variances. These null hypotheses are accepted if the P-values obtained in the tests are higher than the significance level, and rejected otherwise. When the ANOVA indicated a significant factor, the Tukey test was used to group the means.
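A minimal Python sketch of this validation-and-ANOVA pipeline is shown below, assuming SciPy is available (scipy.stats.tukey_hsd requires a reasonably recent SciPy release); the group data are hypothetical placeholders, not the MOR values measured in this study.

```python
# Minimal sketch of the validation-and-ANOVA pipeline described above, using
# SciPy; `groups` holds hypothetical measurements (e.g., MOR in MPa) for the
# four experimental conditions, six panels each.
import numpy as np
from scipy import stats

groups = [np.array([18.1, 19.0, 17.5, 18.8, 19.4, 18.2]),  # EC1 (placeholders)
          np.array([20.3, 21.1, 19.8, 20.7, 21.5, 20.0]),  # EC2
          np.array([24.6, 25.2, 23.9, 24.8, 25.7, 24.1]),  # EC3
          np.array([28.9, 29.5, 28.1, 29.2, 30.1, 28.6])]  # EC4

# Normality (Anderson-Darling), checked here on the pooled within-group residuals
residuals = np.concatenate([g - g.mean() for g in groups])
print(stats.anderson(residuals, dist="norm"))

# Homogeneity of variances (Bartlett) and one-way ANOVA at alpha = 0.05
print("Bartlett p =", stats.bartlett(*groups).pvalue)
print("ANOVA    p =", stats.f_oneway(*groups).pvalue)

# If the ANOVA flags the factor as significant, group the means with Tukey's HSD
print(stats.tukey_hsd(*groups))
```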
Results and Discussion
Table 3 shows the results of the particle size analysis. As can be seen in Table 3, about 70% of the Eucalyptus grandis particles and 75% of the oat hull particles were retained on the 16 mesh (1.190 mm) and 30 mesh (0.595 mm) sieves. The Eucalyptus grandis material showed more "fines" than the oat hulls.

Table 4 presents the average values per experimental condition (EC) for the response variables: MOE, MOR, internal bond (IB), density, water absorption (WA) and thickness swelling (TS).

It is noteworthy that ABNT NBR 14810:2006 [1] establishes requirements for the physical-mechanical properties of particleboards, except for the modulus of elasticity in bending (MOE). For the latter, requirements are established by the BS EN 312:2003 [2] standard.

The experimental values of MOE obtained for the samples ranged from 1654 to 2865 MPa. All panel MOE values met the requirement established by BS EN 312:2003 [2] (minimum value of 2050 MPa), except for experimental condition 4 (Table 4).

In their studies, Lee and Kang 24 and Melo et al. 19 obtained results similar to those of the oat hull factor in this study, showing the phenomenon of reduced MOE as the percentage of added material increases.

The MOR of the particleboards ranged from 13 to 30 MPa. All panel MOR values met the requirement established by the ABNT NBR 14810:2006 [1] standard, with a minimum value of 18 MPa (Table 4). As in this study, Bertolini et al. 20 also obtained MOR values greater than 20 MPa, i.e., well above the value required by ABNT NBR 14810:2006 [1] .

It was observed that the MOR increased as oat hulls were added, probably explained by the interphase region, i.e., by the chemical interaction between the phases.

The internal bond varied between 0.80 MPa and 2.74 MPa. All panel internal bond values met the requirements established by the ABNT NBR 14810:2006 [1] and BS EN 312:2003 [2] standards, with minimum values of 0.40 and 0.45 MPa, respectively. High internal bond values are related to good resin-particle interaction. The results obtained for internal bond are similar to those obtained by Bertolini 13 .

There are no requirements for the water absorption property (2 h) in the particleboard standards. The water absorption (2 h) results obtained resemble those found by Bertolini 13 .

All results obtained for thickness swelling (2 h) were lower than the 8% stipulated by ABNT NBR 14810:2006 [1] for panels with thicknesses between 8 and 13 mm.

Panel density ranged between 797 and 1068 kg/m 3 , with practically all panels classified as high density. This large variation in panel density (797 to 1068 kg/m 3 ) was associated with the large difference in density between the materials used.

The compaction ratio was not subjected to analysis of variance. Table 5 presents the mean values of the compaction ratio (CR) and the variation coefficient (VC) for each of the four experimental conditions evaluated.

Mean values of the compaction ratio ranged from 1.49 to 1.77 for experimental conditions 1 to 3, consistent with the values established by Maloney 11 and Moslemi 25 . For experimental condition 4, the mean compaction ratio obtained (3.50) is consistent with the results of Mendes et al. 26 , who obtained values between 1.39 and 3.12. The variation coefficients (VC) obtained for the compaction ratio were lower than 8%, reflecting a small variation of this property between the panels evaluated. Panels of experimental condition 4 presented higher density than panels of experimental condition 1. This can be justified by the lower apparent density of the oat hull particles (290 kg/m 3 ), which allows greater accommodation of the material (higher compaction ratio) and, consequently, higher density in these panels.

Table 6 shows the results of the normality tests and of the equivalence between variances (ANOVA) for each property.

In Table 6, the P-values of the Anderson-Darling and Bartlett tests were both higher than the significance level, leading to the conclusion of normal distributions and equal variances between treatments for each response variable, validating the ANOVA model. By the analysis of variance, the fraction of Eucalyptus grandis particles was significant only for the modulus of rupture in static bending (P-value < 0.05), with equivalent results between treatments for the other assessed properties. Table 7 presents the results of grouping by the Tukey test for MOR. In Table 7, the highest MOR value came from the panels produced with 100% oat hulls and was equivalent to the composition of 70% Eucalyptus grandis and 30% oat hulls. The lowest values came from the condition with 85% Eucalyptus grandis particles and 15% oat hulls, and this condition was equivalent to the composition with 100% Eucalyptus grandis.
Conclusions
From the results, it can be concluded that:
• Mean values for the physical-mechanical properties evaluated met the requirements established by the ABNT NBR 14810:2006 [1] and BS EN 312:2003 [2] standards;
• The proportion of resin used (10%) proved to be sufficient, as it met the requirements of the national and international standards cited;
• The progressive insertion of oat hulls was responsible for increasing MOR, with the greatest value for the 100% oat hull composition;
• The compaction ratio ranged between 1.49 and 3.50 for the panels produced;
• By analysis of variance, the fraction of Eucalyptus grandis particles was significant only for the MOR in static bending;
• The highest MOR value was obtained from the panels produced with 100% oat hulls, and the lowest values came from the condition with 85% Eucalyptus grandis particles and 15% oat hulls.
Figure 1. (a) Particles before the milling process, (b) mill used, (c) particles after the milling process.
Figure 2. (a) Particles of both materials, (b) equipment that mixes the glue and particles, (c) pre-press, (d) hydraulic press, (e) panel after pressing, (f) panels produced.
Table 1. Factors and experimental levels.
Table 3. Mean values of particle size analysis.
Table 4. Mean values of response variables by experimental condition (EC).
Table 5. Results of compaction ratio of the panels.
Table 6. Results of Anderson-Darling, Bartlett and ANOVA.
Table 7. Results of the Tukey test.
"year": 2013,
"sha1": "3d4c25ca5ad240251fe06e8bd9908054d917c3b2",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/mr/a/4PrbFT6XmNPdqWXDYtLy5my/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3d4c25ca5ad240251fe06e8bd9908054d917c3b2",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Brown measure and asymptotic freeness of elliptic and related matrices
We show that independent elliptic matrices converge to freely independent elliptic elements. Moreover, the elliptic matrices are asymptotically free with deterministic matrices under appropriate conditions. We compute the Brown measure of the product of elliptic elements. It turns out that this Brown measure is the same as the limiting spectral distribution.
Introduction
Asymptotic *-freeness (in short, freeness) of a sequence of random matrices, as the dimension increases, was introduced by Voiculescu [21]. It is a central object of study in free probability and has since been studied extensively in the literature by [6], [12], [18], [20], [22] and others. In particular, it is known that under suitable assumptions, independent standard unitary matrices and deterministic matrices are asymptotically free, and so are independent Wigner and deterministic matrices. Further, let Y_{p×n} be a p × n random rectangular matrix whose entries are i.i.d. Gaussian random variables with mean zero and variance one. It is known (see Theorem 5.2 in [5]) that independent copies of (1/n) Y_{p×n} Y'_{n×p} are also asymptotically free. A generalisation of the Wigner matrix that has caught recent attention is the elliptic matrix, where the entries are as in a Wigner matrix except that the (i, j)th and (j, i)th entries have a correlation which is the same across all pairs. This is clearly a non-symmetric matrix. One may also allow some specific pattern of correlation instead of a constant correlation; we call the latter a generalised elliptic matrix. It is known that under suitable conditions the limit spectral distribution (LSD) of the elliptic matrix is the uniform distribution on an ellipse whose axes depend on the value of the correlation. See [8], [10], [15].
We first show that under suitable conditions, the sequence of generalised elliptic matrices converges in * -distribution (see Theorem 1). In particular any sequence of elliptic matrices converges to an elliptic element.
Next we show that under appropriate conditions, independent elliptic matrices, with possibly different correlation values, converge jointly in * -distribution and are asymptotically free. They are also asymptotically free of appropriate collection of deterministic matrices. See Theorem 2. The joint convergence should remain true for generalised elliptic matrices but freeness is not expected to remain valid. To keep things simple we decided not to pursue these ideas.
Date: October 24, 2017. The work is partially supported by National Post-Doctoral Fellowship, India, with reference no. PDF/2016/001601, and also supported by J. C. Bose National Fellowship, Department of Science and Technology, Government of India.

Now consider the empirical spectral distribution (ESD) of (1/n) Y_{p×n} Y'_{n×p} when p/n → y ≠ 0. When the entries of Y_{p×n} are i.i.d. random variables with mean zero, variance one and all moments finite, this converges to the Marčenko-Pastur law almost surely. Other variations under weaker assumptions are also known; see [1], [14], [23] and [24]. Now suppose Y_{p×n} is elliptic. We show that the expected ESD then still converges to the Marčenko-Pastur law (see Theorem 3). Again, we have not pursued the almost sure convergence of the ESD for simplicity. We also show that independent copies of (1/n) Y_{p×n} Y'_{n×p} converge jointly in *-distribution and are asymptotically free (see Theorem 4).
The Brown measure for any element of a non-commutative probability space was introduced by Brown [4]. There has been a lot of work done in the past decade to find connection between the limiting spectral distribution (LSD) of a sequence of random matrices and the Brown measure of the * -distribution limit of the sequence. Often they are not equal. A very simple example is given in [19].
However, often they are equal. For example, the i.i.d. matrix converges in *-distribution to the circular element, and its LSD is the uniform distribution on the unit disc, which is indeed the Brown measure of the circular element; see [11]. The LSD of the elliptic matrix is the uniform probability measure on an ellipse. At the same time, the Brown measure of an elliptic element is also the uniform probability measure on an ellipse (see [2], [13]). Similarly, in [9] it has been shown that the LSD of bi-unitarily invariant random matrices is actually the Brown measure of the *-distribution limit.
The LSD of product of elliptic matrices has been calculated in [17]. On the other hand, from Theorem 2, we know that this product converges in * -distribution to product of free elliptic elements. We calculate the Brown measure of such a product and show that it is the same as the LSD. See Theorem 5.
We introduce the basic definitions and facts in Section 2 and state our results in Section 3. In Sections 4 and 5 we give the proofs of Theorems 1 and 2 respectively. Proof of Theorems 3 and 4 are given in Section 6 and the proof of Theorem 5 is presented in Section 7.
Preliminaries
We first recall some basic definitions and facts from free probability theory. A self-adjoint element s in a non-commutative probability space (NCP) (A, ϕ) is said to be a (standard) semi-circular element if its odd moments vanish and its even moments are the Catalan numbers, i.e., ϕ(s^{2k}) equals the k-th Catalan number for all k ≥ 1. An element e is said to be elliptic with parameter ρ if e = √((1+ρ)/2) s_1 + i √((1−ρ)/2) s_2, where s_1 and s_2 are free standard semi-circular elements. Note that ρ = 1 and ρ = 0 yield respectively the semi-circular and the circular element.
Let (A_n, ϕ_n)_{n≥1} be a sequence of NCPs. Let (a_i^{(n)})_{i∈I} be a collection of random variables from A_n which converges in *-distribution to some (a_i)_{i∈I} in (A, ϕ). Then (a_i^{(n)})_{i∈I} are said to be asymptotically free if (a_i)_{i∈I} are free. Let A_n be the algebra of n × n random matrices whose entries have all moments finite, equipped with the tracial state ϕ_n(x) = (1/n) E Tr(x) for x ∈ A_n. Clearly, a sequence of random matrices (A_n) from A_n converges in *-distribution to some element a ∈ A if for every choice of ǫ_1, ǫ_2, . . . , ǫ_k ∈ {1, *} we have lim_{n→∞} ϕ_n(A_n^{ǫ_1} · · · A_n^{ǫ_k}) = ϕ(a^{ǫ_1} · · · a^{ǫ_k}).

Then we write A_n → a in *-distribution. If A_n is in addition hermitian, then the above condition is the same as saying that lim ϕ_n(A_n^k) = ϕ(a^k) exists for all non-negative integers k; then we say A_n converges to a in the distribution sense. The joint convergence of several sequences is expressed in an analogous manner.

There are other related notions of convergence of random matrices. Let A_n be an n × n random matrix with eigenvalues λ_1, . . . , λ_n. Then (1/n) Σ_{k=1}^{n} δ_{λ_k}, where δ_x denotes the Dirac delta measure at x, is the empirical spectral measure of A_n. Equivalently, the empirical spectral distribution (ESD) is given by F^{A_n}(x, y) = (1/n) |{k : ℜ(λ_k) ≤ x, ℑ(λ_k) ≤ y}|, where | · | denotes cardinality and ℑ and ℜ denote the imaginary and real parts. Clearly F^{A_n} is a random distribution function. If, as n → ∞, it converges (almost surely) to a non-random distribution function F_∞ weakly, then F_∞ is said to be the almost sure limiting spectral distribution (LSD) of A_n. Often it is easier to show the convergence of the non-random expected ESD E[F^{A_n}(x, y)]. This limit is also called the LSD and coincides with the earlier limit if both exist. If the sequence of matrices is hermitian, then its convergence in the distribution sense yields the candidate LSD, whose moments are ϕ(a^k). We now state a few well known facts, which will be used in the proofs of our results. We need the following notation: γπ(r) := γ(π(r)) for π ∈ P_2(2k), where π(r) = s and π(s) = r if (r, s) ∈ π. Fact 1 (Moments-free cumulants). Let a_1, a_2, . . . , a_n ∈ (A, ϕ). Then ϕ(a_1 a_2 · · · a_n) = Σ_{π∈NC(n)} Π_{V∈π} κ(V)[a_1, a_2, . . . , a_n], where NC(n) denotes the set of all non-crossing partitions of {1, . . . , n} and κ(V) denotes the usual multiplicative extension of the free cumulant function. See [16] for details on the definition of free cumulants and its multiplicative extension.
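To make the ESD concrete, the following is an illustrative Python simulation (not taken from the paper): it samples an n × n elliptic random matrix with correlation ρ between the (i, j)th and (j, i)th entries and checks that, after scaling by 1/√n, the eigenvalues fall inside the ellipse with semi-axes 1 + ρ and 1 − ρ, in line with the elliptic law cited in the Introduction.

```python
# Illustrative simulation (not from the paper): sample a real elliptic matrix
# with correlation rho between the (i, j) and (j, i) entries, scale by
# 1/sqrt(n), and inspect its empirical spectral distribution.
import numpy as np

def elliptic_matrix(n, rho, rng):
    g1 = rng.standard_normal((n, n))
    g2 = rng.standard_normal((n, n))
    upper = np.triu(g1, 1)                                  # strictly upper entries
    lower = rho * upper + np.sqrt(1 - rho**2) * np.triu(g2, 1)
    x = upper + lower.T + np.diag(rng.standard_normal(n))   # correlated transpose pair
    return x / np.sqrt(n)

rng = np.random.default_rng(0)
rho = 0.5
lam = np.linalg.eigvals(elliptic_matrix(2000, rho, rng))

# The LSD is uniform on the ellipse with semi-axes 1 + rho and 1 - rho,
# so almost all eigenvalues should satisfy the ellipse inequality below.
inside = (lam.real / (1 + rho)) ** 2 + (lam.imag / (1 - rho)) ** 2 <= 1.0
print(f"fraction of eigenvalues inside the ellipse: {inside.mean():.3f}")
```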
Main results
The following assumption is basic to us. (Note that (i) is a particular case of (ii).) Any rectangular matrix X_{p×n} is defined to be elliptic or generalised elliptic by adapting the above definition in the obvious way. In the literature, an elliptic matrix is assumed to also satisfy the condition that {a_ii : 1 ≤ i} and {(a_ij, a_ji) : 1 ≤ i < j} are collections of i.i.d. random variables. The first result is on the *-distribution limit for generalised elliptic matrices.

Here 0 ≤ f ≤ 1 is a bounded continuous function on [0, 1]. We elaborate on this in Section 4. The next result gives the asymptotic freeness of independent elliptic matrices with deterministic matrices.

Theorem 2. Let E_1, . . . , E_m be m independent elliptic random matrices (with possibly different correlations) whose entries satisfy Assumption 1(i). Then (E_1, . . . , E_m) converges jointly in *-distribution to (e_1, . . . , e_m), where e_1, . . . , e_m are free and elliptic.

In addition, suppose that A_1, A_2, . . . are constant matrices satisfying suitable conditions. Then the elliptic matrices are asymptotically free of this collection of deterministic matrices. Now we move to rectangular random matrices.

Theorem 3. Suppose X_{p×n} is an elliptic rectangular random matrix whose entries satisfy Assumption 1(ii). If p/n → y > 0 as p → ∞, then X_p = (1/n) X_{p×n} X'_{n×p} converges to a free Poisson element of rate y. Its expected ESD converges to the corresponding Marčenko-Pastur law with parameter y. We further claim that independent matrices of the form X_p are asymptotically free. We now give the Brown measure of the product of free elliptic elements.

Theorem 5. Let k ≥ 2 and let e_1, . . . , e_k be k free elliptic elements in (A, ϕ) with possibly different parameters. Then the Brown measure µ_k of e_1 · · · e_k is rotationally invariant and can be described explicitly (see Section 7). Note that this Brown measure is also the LSD of E_1 · · · E_k, where E_1, . . . , E_k are independent elliptic matrices as defined in [17]. The result does not hold for k = 1, as the Brown measure of an elliptic element is the uniform distribution on an ellipse, so the condition k ≥ 2 is crucial.
Proof of Theorem 1
We first make the following remark to clarify condition (1), which involves π ∈ NC_2(2k).

Remark 1. We show how the limit condition (1) in the statement of Theorem 1 can be verified in two cases. In the first case, (1) reduces to an expression whose last equality follows from Fact 3; hence the limit condition (1) holds.

Before proving Theorem 1, we state the following lemmas, which will be used in its proof. We give the proofs of the lemmas at the end of this section. The first lemma gives the moments of an elliptic element.

Lemma 1. Let e be an elliptic element with parameter ρ in an NCP (A, ϕ). Then, for ǫ_1, . . . , ǫ_p ∈ {1, *}, the mixed moment ϕ(e^{ǫ_1} · · · e^{ǫ_p}) is given by a sum over non-crossing pair partitions (see the display below); in particular, it vanishes for odd p.

The next lemma is key in proving Theorem 1 and will be used repeatedly; it concerns the quantities a'(r, s) and b'(r, s).

Lemma 2. Consider a'(r, s) and b'(r, s) as defined above.

Now we proceed to prove Theorem 1.

Proof of Theorem 1. Let ǫ_1, . . . , ǫ_p ∈ {1, *}, and expand ϕ_n(A_n^{ǫ_1} · · · A_n^{ǫ_p}) as a sum over index words (equation (2)). Let w = (i_1, i_2, . . . , i_p) and let supp(w) denote the support of {i_1, . . . , i_p}, the set consisting of the distinct elements of w. Let G_w be the graph with vertex set supp(w) and non-directed edge set {{i_k, i_{k+1}} : k = 1, . . . , p}, where i_{p+1} = i_1. Note that by construction the graph G_w is connected, and the corresponding closed walk starts and terminates at the same vertex. Since G_w is connected, for j < p, |{w ∈ I_p : G_w has at most j distinct edges}| = O(n^{j+1}).
By the construction of G_w, a maximum of p distinct non-directed edges is possible. Since E[a_ij] = 0, each edge has to appear at least twice to give a non-zero contribution on the right side of (2). Therefore we have at most ⌊p/2⌋ distinct edges, where ⌊x⌋ denotes the largest integer not exceeding x. In such cases supp(w) has at most ⌊p/2⌋ + 1 elements, as G_w is connected. Therefore, by (3) and the fact that the random variables a_ij have all moments finite, the limit is zero if p is odd. Saying that G_w has k distinct edges is the same as saying that there is a pair partition of the 2k edges. Therefore, by (3) and the fact that the random variables a_ij have all moments finite, for p = 2k we obtain from (2) a sum indexed by pair partitions, displayed as (4). Since A_n satisfies Assumption 1, using this and Lemma 2, and since δ_{ǫ_r ǫ_s} + ρ_{|i_r − i_{r+1}|}(1 − δ_{ǫ_r ǫ_s}) ≤ 1, we obtain a bound from (4). Observe that (1 − δ_{ǫ_r ǫ_s}) + ρ_{|i_r − i_{r+1}|} δ_{ǫ_r ǫ_s} = ρ_{|i_r − i_{r+1}|}^{δ_{ǫ_r ǫ_s}}. Since the ρ_i are bounded, using Fact 3, the right side gives a non-zero contribution only when π is a non-crossing pair matching. Therefore we obtain the limit, as i_{r+1} = i_s. The existence of the *-distribution limit then follows from condition (1). Particular case: now suppose ρ_i ≡ ρ. Then, by Remark 1(I), the limit takes the form given in Lemma 1, whence the last equality. Hence the result.
It remains to prove Lemma 1 and Lemma 2.
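Before the proofs, note that the displayed formula of Lemma 1 did not survive extraction. Based on the free cumulants computed in the proof below (κ_2(e, e) = κ_2(e*, e*) = ρ and κ_2(e, e*) = κ_2(e*, e) = 1), the missing display is presumably of the following form:

```latex
% Presumed reconstruction of the missing display in Lemma 1, based on the
% cumulants computed in its proof; not a verbatim quote of the original.
\varphi\left(e^{\epsilon_1} e^{\epsilon_2} \cdots e^{\epsilon_p}\right) =
\begin{cases}
0, & p \text{ odd},\\[4pt]
\displaystyle \sum_{\pi \in NC_2(2k)} \prod_{(r,s) \in \pi}
  \rho^{\,\delta_{\epsilon_r \epsilon_s}}, & p = 2k.
\end{cases}
```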
Proof of Lemma 1. First we calculate the free cumulants and mixed free cumulants of e and e*. We have κ_2(s_i, s_i) = 1 for i = 1, 2 and κ_2(s_1, s_2) = κ_2(s_2, s_1) = 0, as s_1 and s_2 are two free standard semi-circular elements. Therefore κ_2(e, e) = ρ. Similarly, we have κ_2(e*, e*) = ρ, κ_2(e, e*) = κ_2(e*, e) = 1, and all other free cumulants are zero. From Fact 1, we can express ϕ(e^{ǫ_1} e^{ǫ_2} · · · e^{ǫ_p}) as a sum over non-crossing partitions. Note that only pair partitions contribute, as the other free cumulants are zero. Therefore, if p is odd, the right side of the last equation is zero, as no pair partition is possible. And, for p = 2k, we get the claimed formula for ϕ(e^{ǫ_1} e^{ǫ_2} · · · e^{ǫ_{2k}}).

Proof of Lemma 2. Let W_n = (1/√n)(Y_ij) be the n × n Wigner matrix, where the Y_ij are i.i.d. N(0, 1) and Y_ij = Y_ji. It is well known (e.g., see Theorem 22.16 in [16]) that W_n converges, as n → ∞, in distribution to a semi-circular element s. In other words, lim_{n→∞} ϕ_n(W_n^p) = ϕ(s^p) for all p ∈ N. In particular, for p = 2k, we have

lim_{n→∞} ϕ_n(W_n^{2k}) = ϕ(s^{2k}) = Σ_{π∈NC_2(2k)} 1.    (5)

By the trace formula for a product of matrices, we obtain an expansion whose last equality follows from Wick's formula. Then, by expanding the product, we get (6). Again, we have (7), where the last equality is a consequence of Fact 3. The result now follows from (5), (6) and (7).
Proof of Theorem 2
We first state a fact and a lemma. The proof of the lemma is given at the end of this section.
It remains to prove Lemma 3. We first exemplify the proof in a few special cases. Let p = 4 and π = (12)(34), with self-matching among the i's and j's. The calculations show that the limit is non-zero when there is no self-matching among the i's and j's. In general, Lemma 3 says that the limit is non-zero when there is no self-matching among the i's and j's. We show that (8) holds for k + 1 if it holds for k. Let π ∈ P_2(2(k + 1)) and π = (r_1, s_1) · · · (r_{k+1}, s_{k+1}), with the convention that r_i ≤ s_i for all i. Note that there is at least one self-matching pair in {i_1, . . . , i_{2k+2}} and {j_1, . . . , j_{2k}}. Without loss of generality, we assume that the self-matching occurs in the (r_1, s_1) pair, i.e., u'_{ℓ_1} = δ_{i_{r_1} i_{s_1}} δ_{j_{r_1} j_{s_1}} = 1. Case I: Let r_1 = s_1 − 1 (similarly for r_1 = 1 and s_1 = 2k). Then the relevant matrix product is of the form D^{τ_1} · · · D^{τ_q} for some positive integer q and τ_1, . . . , τ_q ∈ {0, 1, *}. Renaming the remaining 2k variables among the i's, j's and D_i's, (13) becomes a limit of the form lim_{n→∞} n^{−(k+2)}(· · ·), as in (14). If t ≥ 1, i.e., there exists at least one self-matching within the i's and j's, then the limit (14) is zero by the induction hypothesis. If t = 0, i.e., there is no self-matching among the i's or j's, then the limit (14) is also zero, because the maximum number of blocks among the D^{(i)}'s is k + 1, using Fact 3. Case II: Let r_1 ≠ s_1 − 1 (except for r_1 = 1 and s_1 = 2k). Then the four matrices reduce to two matrices, each of the form D^{τ_1} · · · D^{τ_q} for some positive integer q and τ_1, . . . , τ_q ∈ {0, 1, *}. Again we have a reduction of the four variables i_{r_1}, i_{s_1}, j_{r_1}, j_{s_1}. Renaming the variables properly, the expression (13) reduces to the form of (14), which is zero by the previous arguments. This completes the proof by the principle of induction.
Proofs of Theorem 3 and Theorem 4
The following result will be used in the proof of Theorem 3. This result holds in more general settings; however, the restricted version is enough for our purpose. For its proof, we refer to Theorem 5.5 in [3].
This completes the proof.
"year": 2019,
"sha1": "8268450c4e5c84b64ca82ecbe1a44bc8a2fcec01",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1710.08160",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8268450c4e5c84b64ca82ecbe1a44bc8a2fcec01",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
A Ranking Method for User Recommendation Based on Fuzzy Preference Relations in the Nature Reserve of Dangshan Pear Germplasm Resources
Precision orchard management is an important avenue of investigation in agricultural technology and an urgently needed part of information development in the fruit industry. Precision management based on a precision agricultural technology system involves many factors, which can leave users unable to make accurate judgments. To improve user decision-making accuracy and the level of precision management, we used user preferences to implement a recommendation function. In this paper, a ranking method based on fuzzy preference relations for user recommendation is proposed. We selected the Nature Reserve of Dangshan Pear Germplasm Resources as the research location and invited experts and representatives of different roles (government, farmers, and tourists) to give the fuzzy preference relation coefficients. Then, an optimization model was proposed based on the fuzzy preference relations. We solved the proposed model by constructing a Lagrangian function and obtained the ranking values of the user preference recommendation function. Finally, we ranked the order of the given roles and implemented the fuzzy preference recommendation. The experimental results show that the proposed method is effective and can be conveniently applied to other problems related to user preference relations.
Introduction
With the massive changes brought about by the information era, China has been deeply affected. A series of information technologies have been applied to agriculture, resulting in the development of technologies such as precision agriculture, orchard informatization, digital orchard management, and precision orchard management. Precision orchard management is not only an important part of orchard management, but also a main research direction of precision agriculture. Because China has a large fruit industry, precision orchard management technology is needed to better handle the huge production.

Precision agriculture is a modern agricultural operation technology system that comprises a set of technologies combining sensors, information systems, enhanced machinery, and informed management to optimize production by accounting for variability and uncertainties within agricultural systems [1]. The goal of precision agriculture is to manage the inputs on site to obtain the maximum production [2]. At present, precision agriculture technology has been studied in China and other countries at multiple levels and from multiple perspectives. Related technologies can provide a technical reference for the accurate management of orchards [3]. However, fruit trees and orchards have their own characteristics, especially in the Nature Reserve of Dangshan Pear Germplasm Resources.

The Nature Reserve of Dangshan Pear Germplasm Resources is located in Dangshan County, Anhui Province, China, and includes 17 villages in Dangshan County. The protected features include not only more than 10 Dangshan Crisp Pear germplasm resources and more than 40,000 pear trees over 100 years old, but also the Yellow River wetland and its ecological system. The general situation of the Nature Reserve of Dangshan Pear Germplasm Resources is shown in Figure 1. Therefore, the management of this nature reserve differs from that of general nature reserves, which are strictly protected; it is an open protection area for the conservation of crop germplasm resources. Not only does the government manage the reserve, but local farmers also produce fruit trees there, and tourists visit the nature reserve. Therefore, different types of people have different objects of concern in the protected area. By investigating the preferences of different groups of people, the main features in the reserve can be sorted according to preference, which will give different groups a better experience of the reserve. To improve the accuracy of user decision making and the level of precision management, we applied a ranking method for user recommendation based on fuzzy preference relations. Fuzzy set theory was proposed in 1965 by Zadeh, and many scholars have since used it to solve decision making problems [4]. These problems include supplier selection [5], investment load evaluation [6], ordinal peer assessment [7], 0-1 mixed programming [8], RFID technology selection [9], etc. Fuzzy set theory is used to manage the uncertainty in decision making and optimization problems, and the ranking of fuzzy numbers is the most important part of the decision process in these problems [10]. Since then, several types of fuzzy sets have been proposed, such as intuitionistic fuzzy sets, interval-valued fuzzy sets, interval-valued intuitionistic fuzzy sets, hesitant fuzzy sets, type-2 fuzzy sets, and fuzzy multisets. These types have been used in scientific and real-life problems.

In most cases, it is very difficult for a human to make a choice if it depends on several numerical factors. Therefore, decision making is very important in real life. Group decision making is a process of obtaining the optimal solution from a set of solutions under certain criteria, where the solutions are evaluated by multiple decision makers under those criteria. The decision makers can give their suggestions in the form of preferences over solutions and also provide comparisons between one solution and another; this kind of comparison is called a preference relation. The preference relation is a great tool to model and represent decision making problems [11]. Different kinds of preference relations have been proposed, for example multiplicative fuzzy preference relations [12], incomplete fuzzy preference relations [13], interval fuzzy preference relations [14], intuitionistic fuzzy preference relations, and hesitant fuzzy preference relations [15,16]. Many researchers have applied different fuzzy preference relation methods to solve decision making problems [8,9]. These methods include the multiplicative preference relation [17], the triangular multiplicative fuzzy preference relation [18], and multiplicative consistent fuzzy preference relations [19]. As a result of a lack of knowledge, a shortage of time, and the unavailability of data resources, decision makers often provide their preferences in the form of numerical values. To handle such situations, we proposed and solved fuzzy preference relations. The fuzzy preference relation model refers to choosing the best solution from the feasible solution set X = (x_1, x_2, . . ., x_n), where n ≥ 2. A key problem in fuzzy set theory is how to rank fuzzy numbers. A ranking method for fuzzy numbers was first presented by Jain in 1976 [20]. Yao and Wu introduced a ranking method for fuzzy numbers based on the decomposition principle [21]. Later, Gu and Xuan proposed a ranking method for generalized L-R fuzzy numbers based on possibility theory [22].

From the above studies, researchers have applied fuzzy set theory or preference relation theory to the evaluation of decision-making problems. In a decision analysis process based on preference relations, the basic method is to derive priorities by integrating the preference information over the alternatives' attributes. In the decision-making process, the preference values in the preference relation must fit the logical form of human thinking. However, traditional multi-criteria decision-making methods are generally only applicable to a small number of decision makers; how to use such methods to deal with public participation in public project decision making remains an open question, and existing results in this field are relatively scarce. Some scholars first cluster the public group and then evaluate it, which is a reasonable and feasible way to handle public-group problems among the known methods. The ordering of preference items of different groups to be solved in this paper is also related to the public group, but it has a definite group classification. Therefore, this study does not need pre-decision clustering work; as long as the preference information of the different user groups is surveyed, the fuzzy preference relation ranking model for decision making can be built.

In this paper, the Nature Reserve of Dangshan Pear Germplasm Resources was selected as the research location, and experts and representatives of different roles (government, farmers, and tourists) were invited to give the fuzzy preference relation coefficients. Then, the fuzzy preference relation model was proposed and solved using a Lagrangian function to obtain the ranking values of the user preference recommendation function. Finally, the sorting results were applied to the precision management of the Nature Reserve of Dangshan Pear Germplasm Resources.

The paper is organized as follows. Section 1 presents previous work. Section 2 discusses the ranking method based on fuzzy preference relations for user recommendation. Section 3 contains the requirements analysis, data collection, experimental results, and analysis. Section 4 concludes the paper and gives future prospects.
Ranking Model for Alternatives Based on Fuzzy Preference Relations
The human decision making process is a process of interpreting people's evaluation and selection of things. The priority of the items to be sorted is difficult to distinguish with sharp boundaries. Fuzzy logic can be used to reflect the thinking process by which people analyze and deal with things. Therefore, in the process of decision analysis, the fuzzy preference information of decision makers often needs to be considered. The preference information of decision makers for attributes or alternatives is generally expressed as a preference relation matrix. Because the ranking of the main features in the reserve is a complex decision-making problem, it is often difficult for different groups to directly give a ranking of the main features in the reserve. However, it is easy for users to compare the priority of two items, so the preference relation matrix is suitable for describing the preference information of different groups for the main features in the reserve.

The approach for recommending projects to users in the reserve is to select the optimal solution from a limited set of project-sorting solutions, such as X = (x_1, x_2, . . ., x_n) (n ≥ 2). In the process of project sorting, the decision information used in the model is a kind of fuzzy preference relation that users provide about the sorting scheme. On the basis of the fuzzy preference relation and the requirements of reserve management, the preference ranking model of different user groups for the main features in the reserve was established. Users provide information in the form of fuzzy preference relations. We use a fuzzy relation matrix P to represent the fuzzy preference relations, where P ⊂ X × X. The corresponding membership function is µ_p : X × X → [0, 1], where µ_p(x_i, x_j) = P_ij, which indicates the degree to which item x_i is preferred over item x_j; each P_ij is a decimal between 0 and 1. The matrix P = [P_ij]_{n×n} is a complementary matrix, that is, ∀i, j: P_ij + P_ji = 1, P_ij ≥ 0, and P_ii = 0.5.

On the basis of the above-mentioned fuzzy preference relations, we propose a ranking model for user recommendation projects.

Suppose that the ranking result for the user recommendation projects X is the vector D. In order to obtain the value of D, an approximate relational expression between P_ij and d_i is expected: the larger the value of d_i, the larger the value of P_ij. Therefore, a constrained optimization model based on the relationship between P_ij and d_i is proposed.

For convenience of analysis, the above model can be transformed as follows. Model input: substituting Equation (4) into Equation (3) yields Equation (5), from which we can find the value of the approximate optimal solution D. The model output is the vector D (the ranking vector of recommended projects). Through the above analysis, programming languages can be used to solve the proposed model. We applied MATLAB to perform the experiments and solve the proposed models.
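The displayed equations (3)-(5) did not survive extraction, but the Lagrangian given with Figure 2 below, L = D^T Q D + 2α(e^T D − 1), has the standard closed-form minimizer D = Q^{-1}e / (e^T Q^{-1} e) under the constraint e^T D = 1. The following Python sketch illustrates that final step (the paper itself used MATLAB); how the matrix Q is built from the preference matrix P is not recoverable from the text, so build_Q below is an explicitly hypothetical placeholder.

```python
# Sketch of solving min D^T Q D subject to e^T D = 1 via the Lagrangian
# L = D^T Q D + 2*alpha*(e^T D - 1), whose stationarity condition gives
# D = Q^{-1} e / (e^T Q^{-1} e). How Q is derived from the fuzzy preference
# matrix P is not recoverable from the text; build_Q is a placeholder.
import numpy as np

def build_Q(P):
    # Hypothetical construction: any symmetric positive-definite matrix
    # derived from the complementary preference matrix P would work here.
    n = P.shape[0]
    return np.eye(n) + P @ P.T   # placeholder, NOT the paper's definition

def rank_projects(P):
    n = P.shape[0]
    assert np.allclose(P + P.T, np.ones((n, n))), "P must be complementary"
    Q = build_Q(P)
    e = np.ones(n)
    Qinv_e = np.linalg.solve(Q, e)
    return Qinv_e / (e @ Qinv_e)  # closed-form constrained minimizer

# Toy 3x3 complementary preference matrix (diagonal fixed at 0.5)
P = np.array([[0.5, 0.7, 0.8],
              [0.3, 0.5, 0.6],
              [0.2, 0.4, 0.5]])
print(rank_projects(P))
```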
Requirements Analyses and Data Collection
To test the model, we selected the Nature Reserve of Dangshan Pear Germplasm Resources as the research location. By reviewing relevant documents and with government assistance, we surveyed the local ecological environment, geography, and living conditions to determine the main user preference recommendations. Then, we consulted the relevant experts and representatives of the different roles (government, farmers, and tourists) and collected their preferences using questionnaires. Finally, we used the collected data for the project sorting experiment.
Design of the Survey Table
We referred to China's official documents on agriculture, with a special focus on forestry. These documents include the relevant regulations in the Measures for the Administration of the Protection of Old and Famous Trees in Cities, formulated and promulgated on 1 September 2000 by the Ministry of Construction of the People's Republic of China. The ordinance clearly states that "old trees" refers to trees that are over a hundred years old, and "famous trees" refers to rare plants in China and abroad, as well as trees with historical significance, memorial significance, or important scientific research value. On the basis of the implementation of protection and management of forest trees, the living conditions of residents, and tourists' sightseeing requirements, we selected three different roles: government regulation, farmers, and tourists. From these different roles, we determined different user preferences. Finally, we divided the recommended projects into seven categories: Dangshan pear germplasm resources; native variety and new variety resources of Dangshan pear; other fruit species resources; woodland; water; animal protection area; and industrial and mining enterprises. Table 1 describes these categories.
Statistical Analysis
For each role category (government, farmers, and tourists), we invited five experts or users to determine the fuzzy priority relation coefficients through a questionnaire.
Government Regulatory Role Survey Data
First, we investigated the role of government regulation. We invited experts on ancient and famous trees and from local environmental protection agencies to conduct a survey. The survey data are shown in Table 2, where the names of the five experts are replaced with A, B, C, D, and E, and the priority of project p_1 over p_2 is denoted by "p_1 vs. p_2"; moreover, (p_2 vs. p_1) = 1 − (p_1 vs. p_2).
Endangered pear varieties: distribution, quantity, growth status, protection measures, and biological characteristics.
Other fruit species resources - Apple varieties: distribution, fruit economic characters, biological characteristics, main pest and disease species, market sale, etc.
Peach varieties: distribution, fruit economic characters, biological characteristics, main pest and disease species, market sale, etc.
Woodland - Main tree species: distribution, scale and growth condition.
Forest ecosystem: the main tree species, the auxiliary tree species, the other plant species in the woodland, the growth situation, and the degree of harmony.
Growth condition: the growth of fruit tree windbreak forest, farmland windbreak forest, village forest, road trees, special timber forest, etc.
Protection condition: the protection and growth of the key trees and ancient trees.
Damage condition: deforestation, renewal, destruction and pollution.
Water - Environmental effects: the area of water, the amount of water, the pollution factors, the degree of pollution, the occupation, the species of animals and plants in the water, etc.
Animal protection area - Birds: quantity, species, migration, and influence on other resources in the protected area.
Other species of animals: amphibians, aquatic animals (fish), zooplankton, etc.
Nature reserve area occupied by enterprises: the name, scale, products and impact on protected area resources.
Pollution: the emission of pollutants and pollution, the impact on the resources of the protected area, and the protection measures.
Public infrastructure: other projects (such as road repair, river control, and large residential construction), whether they have EIA reports, their pollution level, and control.
The evaluation data from the five experts were processed using the weighted average method. The final survey results are shown in Table 3.
The results in Table 3 form the fuzzy preference relation matrix. This matrix is the input of the ranking model for the recommended projects. We applied Formula (5) to obtain the ranking results, which are shown in Table 4.

From the ranking results in Table 4, it can be seen that Dangshan pear germplasm resources is ranked first, and the animal protection area is ranked second. The ranking results reflect that government regulation mainly focuses on the ancient and famous trees, the animal protection zone, and the ecological environment.
Farmers' Role Survey Data
During the second stage of the survey, we investigated the farmers and local residents in the Nature Reserve of Dangshan Pear Germplasm Resources by inviting them to complete a questionnaire; the survey data are shown in Table 5, where the names of the five respondents are replaced with A, B, C, D, and E, and the priority of project p_1 over p_2 is denoted by "p_1 vs. p_2"; moreover, (p_2 vs. p_1) = 1 − (p_1 vs. p_2). The evaluation data from the five respondents were processed by the weighted average method. Using Formula (5), we obtained the sorting results. The final survey results are shown in Table 6. The results in Table 6 form the fuzzy preference relation matrix. This matrix is the input of the ranking model for the recommended projects. The ranking results are shown in Table 7. From Table 7, it can be seen that the Dangshan pear germplasm resources project is ranked first, and the native variety and new variety resources of Dangshan pear are ranked second. The ranking results reflect that local residents are more concerned about the output of fruit trees and the introduction of new local varieties.
Tourist Role Survey Data
At the end of the survey process, we invited visitors to the Nature Reserve of Dangshan Pear Germplasm Resources to complete a questionnaire survey. The survey data are shown in Table 8. The results in Table 9 form the fuzzy preference relation matrix. This matrix is the input of the ranking model for the recommended projects. We obtained the ranking results, which are shown in Table 10. From the ranking results in Table 10, it can be seen that Dangshan pear germplasm resources is ranked first, and the animal protection area is ranked second. The results of the rankings show that tourists are most concerned about scenic spots with higher ornamental value in the protected area. Visitors can use the user preference ordering function to choose scenic spots for themselves. Therefore, in the future, we can recommend scenic spots to tourists according to the sorting results.
Result Analysis and Discussion
The Nature Reserve of Dangshan Pear Germplasm Resources is an open area sustained for the protection of pear germplasm resources, pear trees over 100 years old, biodiversity, and the wetland ecosystem. Not only does the government manage the reserve, but local farmers also produce fruit trees, and tourists visit the nature reserve. The questionnaire data were collected and classified according to the three roles of government regulation, farmers, and tourists. The data were processed to obtain the fuzzy preference relation coefficients of the different roles, and finally, the ranking values of the recommended items for the various user preferences were obtained. We selected the first three preferred features from the above experimental results and compared them with each other. The first three preferences for each role are shown in Table 11. The top three items that the government is most concerned about are the Dangshan pear resources, the protected animal area, and industrial and mining enterprises, because the main protected features of this reserve are the Dangshan pear germplasm resources, the pear trees over 100 years old, the biodiversity, and the wetland ecosystem represented by the Dangshan crisp pear. The government is most concerned about the Dangshan pear resources because they are the most important features of this nature reserve and also the most important protected features. The protected animal area is ranked second because biodiversity and the wetland ecosystem of the old Yellow River are also core protected features of this nature reserve. Industrial and mining enterprises are ranked third because they have the biggest environmental impact on the protected area and need to be closely monitored.

The top three items that farmers are most concerned about are the Dangshan pear resources, the native variety and new variety resources of Dangshan pear, and other fruit species resources. Farmers in the reserve are mainly local farmers from the 17 villages under the jurisdiction of the reserve, who carry out daily production activities with the fruit trees in the reserve. Farmers care first about the Dangshan pear resources, because they contain Dangshan pear's precious varieties and brand effect and are the pride of the local farmers. The native variety and new variety resources of Dangshan pear rank second because the fruit trees planted by farmers are mostly of these varieties, which are directly related to local farmers' income. Other fruit species resources rank third because they are also a target of local farmers' agricultural production; Dangshan yellow peach, for example, although not comparable to Dangshan pear, is also directly related to the income of local farmers.

The top three items that tourists are most concerned about are the Dangshan pear resources, the protected animal area, and the woodland, because tourists are mainly attracted by the special resources and scenery in the nature reserve. Tourists care first about the Dangshan pear resources because they contain tens of thousands of ancient Dangshan pear trees, the most attractive feature for tourists visiting this nature reserve. The protected animal area ranks second because tourists are generally interested in wildlife, especially teenagers and their parents, who may be more interested in wild swans than in centuries-old Dangshan pear trees. The woodland ranks third because it is not only an escape from the noisy city, but the fresh air and forest scenery also greatly attract them.

To sum up, it can be seen that the ranking model in this paper, combined with the survey data, gives project ranking results that are in line with the preferences of the users in different roles. Therefore, the results are of practical value.
Conclusions and Future Prospects
With the advancements of the information era, China's agricultural information technology has been continuously improving. To achieve the goal of applying better information technology to agricultural management, we need to develop better models to improve the management systems. In order to improve the informatization level of agricultural management, a ranking method based on fuzzy preference relations for user recommendation was proposed. We selected the Nature Reserve of Dangshan Pear Germplasm Resources as the research location and invited experts and representatives of different roles (government, farmers, and tourists) to give the fuzzy preference relation coefficients of the user preference recommendation function. Then, an optimization model was proposed based on the fuzzy preference relations. The model was solved by constructing a Lagrangian function, and the ranking values of the user preference recommendation function were obtained. Finally, we ranked the order of the given roles and implemented the fuzzy preference recommendation. The experiments show that the proposed method is easy to implement, consistent, and effective, and the experimental results are in line with the actual situation.

In the future, we will expand the survey respondents, collect a wider range of data, and conduct further experiments to obtain more realistic survey results. We will aim to extend the optimization model to adapt it to other forms of preference relations, for example, those based on conditional probability or interval numbers. We will also compare this method with other existing ranking methods.
Figure 1. General situation of the Nature Reserve of Dangshan Pear Germplasm Resources. (a) Pear trees over 100 years old; (b) a pear orchard and farmland; (c) pear germplasm resource repositories; (d) the Yellow River wetland.
We established a Lagrangian function to solve the proposed optimization model. The flowchart of the model is shown in Figure 2, and the solution process is given below.
Figure 2. Flowchart of the ranking model. Set the Lagrangian function to be L = DᵀQD + 2α(eᵀD − 1), where α is the Lagrange multiplier.
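Setting the gradient of this Lagrangian with respect to D to zero gives 2QD + 2αe = 0, so D = −αQ⁻¹e; substituting the constraint eᵀD = 1 then yields the closed-form ranking vector D = Q⁻¹e / (eᵀQ⁻¹e). The sketch below (a minimal Python illustration) computes this solution numerically; it assumes Q is a symmetric positive-definite matrix already assembled from the fuzzy preference relation coefficients, and the sample values of Q are hypothetical rather than taken from the paper's survey data.

```python
import numpy as np

def rank_values(Q: np.ndarray) -> np.ndarray:
    """Closed-form minimizer of D^T Q D subject to e^T D = 1.

    From the Lagrangian L = D^T Q D + 2*alpha*(e^T D - 1):
    stationarity gives 2 Q D + 2 alpha e = 0, i.e. D = -alpha * Q^{-1} e,
    and the constraint fixes alpha, so D = Q^{-1} e / (e^T Q^{-1} e).
    """
    e = np.ones(Q.shape[0])
    x = np.linalg.solve(Q, e)  # solve Q x = e rather than inverting Q
    return x / (e @ x)

# Hypothetical 4x4 Q built from fuzzy preference relation coefficients
# (illustrative values only, not the paper's survey data).
Q = np.array([[2.0, 0.3, 0.2, 0.1],
              [0.3, 1.8, 0.4, 0.2],
              [0.2, 0.4, 2.2, 0.3],
              [0.1, 0.2, 0.3, 1.9]])
D = rank_values(Q)
print(D, D.sum())  # ranking values; they sum to 1 by construction
```

Sorting the entries of D in descending order then gives the recommended project ranking for the corresponding role.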
Table 1. Recommended project categories.
Table 2. Government regulatory role survey data.
Table 3. Government regulatory role survey results.
Table 4. Ranking results of the government regulatory role.
Table 5. Survey data on the role of farmers.
Table 6. Results of the survey of farmers' roles.
Table 7. Ranking results of farmers' roles.
Table 10. Ranking results of tourist roles.
Table 11. First three preference objects of the experiments. | 2018-12-18T14:04:28.988Z | 2018-11-19T00:00:00.000 | {
"year": 2018,
"sha1": "337f1854a1bd84a83dfb8aa28ba9ff029d0fe3e3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2078-2489/9/11/291/pdf?version=1542785626",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "337f1854a1bd84a83dfb8aa28ba9ff029d0fe3e3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
233237727 | pes2o/s2orc | v3-fos-license | Microbiome–miRNA interactions in the progress from undifferentiated arthritis to rheumatoid arthritis: evidence, hypotheses, and opportunities
The human microbiome has attracted attention for its potential utility in precision medicine. More and more researchers are recognizing that changes in the intestinal microbiome can upset the balance between pro- and anti-inflammatory factors of the host immune system, potentially contributing to arthritis immunopathogenesis. Patients whose undifferentiated arthritis develops into rheumatoid arthritis can face multiple irreversible joint lesions and even deformities. Strategies for identifying undifferentiated arthritis patients who have a tendency to develop rheumatoid arthritis, and interventions to prevent rheumatoid arthritis development, are urgently needed. Intestinal microbiome dysbiosis and shifts in the miRNA profile affect undifferentiated arthritis progression and may play an important role in the rheumatoid arthritis pathophysiologic process by stimulating inflammatory cytokines and disturbing host and microbial metabolic functions. However, a causal relationship between microbiome-miRNA interactions and rheumatoid arthritis development from undifferentiated arthritis has not yet been uncovered. Changes in the intestinal microbiome and miRNA profiles of undifferentiated arthritis patients with different disease outcomes should be studied together to uncover the role of the intestinal microbiome in rheumatoid arthritis development and to identify potential prognostic indicators of rheumatoid arthritis in undifferentiated arthritis patients. Herein, we discuss the possibility that microbiome-miRNA interactions contribute to rheumatoid arthritis development and describe the gaps in knowledge regarding their influence on undifferentiated arthritis prognosis that should be addressed by future studies.
Introduction
Rheumatoid arthritis (RA) is an autoimmune disorder in which the immune system attacks the body's own tissues and cells, particularly those of the joints, especially in the fingers and toes, and it causes significant morbidity [1]. RA is frequently progressive, and current medications can only delay its progression, not cure it. Undifferentiated arthritis (UA), defined as disease in patients who do not fulfill the 2010 ACR/EULAR RA criteria and who have no clinical diagnosis other than RA at baseline, can be self-limiting (i.e., the case can undergo spontaneous remission, self-heal, or remain undifferentiated) or develop into RA, ankylosing spondylitis, systemic lupus erythematosus, osteoarthritis (OA), or other diseases [2]. A large-scale 2-year follow-up study on the prognosis of UA patients found that only 4.4% of cases spontaneously attained complete remission, while 60.3% remained undifferentiated and 29.4% progressed to RA [3]. The progression from UA to RA is a continuous and dynamic process, and the resulting spectrum from health to illness is known as the health-disease continuum [4]. Susceptibility factors and immune monitoring are the two main research directions for RA prevention and control. Much work has focused on risk factors for RA development. On the basis of data derived from studying patients with preclinical or early-stage RA [5], researchers generally believe that there are two main types of RA susceptibility factors: (1) heritable factors, i.e., RA susceptibility genes such as HLA-DR and HLA-DQ, and (2) environmental factors, such as smoking and lifestyle.
Immune dysbiosis profiles, whether derived from early serological examination, recent clinical imaging, synovial fluid examination, or synovium biopsy, have been reported to successfully identify patients presenting with UA whose disease was likely to progress to RA [4,6,7]. When such patients are diagnosed very early in their disease course, timely interventions such as the administration of disease-modifying antirheumatic drugs (DMARDs) can improve their prognosis, shorten their disease course, and reduce their disability risk [8]. Although many studies have undeniably furthered our understanding of the molecular mechanisms behind RA development, neither the shared epitope hypothesis of RA susceptibility nor a clear connection between human gene function and RA pathogenesis [9] was confirmed by research conducted on identical twins. Several factors are involved in the induction of RA among cohorts of patients with UA [10]. Recent work has begun to focus on environmental factors and their interactions with genes, but the specific mechanisms remain unclear.
The human intestinal microbiome has attracted attention for its potential utility in precision medicine. Microbiome-host immune system interactions occur via microbial antigens and metabolites [11]; changes in these interactions can upset the balance between the microbiome and the host immune system [12], potentially contributing to RA immunopathogenesis. Recent studies have shown that intestinal microbiota dysbiosis accompanies most diseases, including chronic inflammation and tumors [12], cirrhosis/liver cancer [13][14][15], chronic kidney disease [16], lung disease [17], and arthritis [18]. More and more researchers are recognizing the critical roles played by the human microbiome (particularly the intestinal microbiome) in the progress and prognosis of RA [18].
A large body of studies has also shown that alterations in miRNA expression contribute to RA/UA susceptibility (see Table 1). However, while many related studies have focused on comparing differences between RA patients and healthy individuals [18,19], few studies have compared UA patients with different prognoses. Additionally, the host miRNA-microbiome axis is considered to play a critical role in host-microbiota interactions and is associated with susceptibility to a wide range of diseases, such as colorectal cancer [20] and Alzheimer's disease [21]. In this review, we summarize recent progress regarding microbiome-miRNA interactions and their potential associations with RA development, and we discuss future perspectives on viable biomarkers for RA prevention and targeted manipulation of UA prognosis.
The intestinal microbiome drives RA pathologic responses in genetically susceptible hosts
RA genetic research has identified over 100 RA-related gene loci, such as HLA, PTPN22, and TRAF1-C5, and determined that the main RA susceptibility gene is HLA-DRB1 in China [22], PADI4 in Japan [23], PTPN22 in northern Europeans [24], and the ACE I/D allele in Arab populations [25]. However, these loci explain only about 15% of the difference in RA susceptibility risk among individuals [26]. Pioneering studies in animal models have highlighted the importance of non-host genetic factors (the intestinal microbiota), revealing that specific microbes in the intestine drive a pathologic immune response toward RA in genetically susceptible hosts, thus providing evidence for the involvement of the intestinal microbiome in the development of inflammatory arthritis [27,28]. For example, Lactobacillus and segmented filamentous bacteria in the intestinal microbiota triggered autoimmune diseases and inflammatory arthritis in sterile healthy K/BxN mice, an RA animal model, by inducing Th17 cells [29]. It is well known that the Th17 lineage produces cytokines involved in the pathogenicity of RA, for example GM-CSF, TNF-α, IFN-γ, and most of the interleukins [30]. These cytokines, in turn, drive shifts in the composition of the intestinal microbiota and its metabolic outputs [31], and thereby play an important role in the progression of autoimmune disorders in RA patients. A high Prevotella copri abundance in the intestines of individuals who are genetically susceptible to RA can drive a pathologic response toward RA development [32]. Maeda et al. [33] colonized germ-free SKG mice (GF-SKG mice) with fecal samples from RA patients or healthy individuals and found that the SKG mice colonized with RA patient fecal samples (a P. copri-dominated microbiota; RA-SKG mice) displayed more Th17 cells in their large intestine than mice colonized with healthy control fecal samples. Furthermore, severe Th17 cell-dependent arthritis appeared in the RA-SKG mice after injection with low doses of the fungal component zymosan, whereas there were no signs of arthritis when GF-SKG mice were injected with zymosan. These results indicate that intestinal microbiota dysbiosis dominated by P. copri can lead to arthritis. Intestinal P. copri may contribute to the development of arthritis via the action of superoxide reductase and adenosine phosphosulfate reductase, the genes for which have been detected in its genome [33]. These two enzymes can enhance the active oxygen tolerance of the bacterium, produce thioredoxin, promote the proliferation and inhibit the apoptosis of fibroblast-like synoviocytes, promote pannus formation, and thereby participate in the RA pathologic process [33].
Another study found that some microbes present at low abundance in healthy controls were very abundant in untreated RA patients, such as Collinsella, whose abundance was positively correlated with serum α-aminoadipic acid and asparagine levels and related to IL-17A production [34]. Based on subsequent mouse experiments, the researchers concluded that Collinsella can change the intestinal permeability and disease severity of mice with experimental arthritis. Together, these findings confirm that certain intestinal bacteria can drive a pathologic immune response toward RA in the host and increase an individual's risk of developing RA.
Although a causal relationship between the intestinal microbiome and RA development has not yet been comprehensively depicted, it is now clear that the microbiome-metabolite-immune system axis is involved in RA immunopathogenesis. Intestinal microbes maintain homeostasis with the host immune system via their constituents and metabolites. The regulatory effect of metabolites on host immune cells is a vital component of intestinal microbiome-host immune cell interactions; these interactions can trigger chronic inflammation and autoimmunity, which are involved in RA initiation. For example, short-chain fatty acids and aromatic amines can regulate immune cells through free fatty acid receptor 2 (FFAR2), FFAR3, or other G protein-coupled receptors, and they participate in many host immune pathophysiologic processes [35]. Additionally, indole, which is produced from tryptophan by intestinal microbiota metabolism, has anti-inflammatory effects; it can inhibit proinflammatory cytokine production by macrophages by up-regulating the expression of PFKFB3 (the main regulator gene of cellular glycolysis), thus significantly reducing the severity of liver steatosis and inflammation [36]. Furthermore, a small proportion of the bile acids synthesized by the human liver enter the colon, where they are metabolically transformed by the intestinal microbiome and can act on multiple host nuclear receptors and G protein-coupled receptors, playing a key role in shaping the host innate immune response [37,38]. Bacterial bile acid metabolites can regulate the number of colonic RORγt+ regulatory T (Treg) cells via the vitamin D receptor, and knocking out the bile acid metabolic pathways of intestinal symbiotic bacteria (for example, Bacteroides fragilis) inhibited their ability to induce RORγt+ Treg cells in murine colons [39]. Additionally, the secondary bile acids (3-oxo-LCA and isoallo-LCA) produced by intestinal microbiome metabolism can regulate Th17 and Treg cell differentiation [40], and the Th17/Treg cell balance is closely related to RA development and severity [41]. However, no Th17/Treg-based therapeutic strategy has yet been accepted for treating RA in humans.
An imbalance in human intestinal microecology, along with the associated changes to the intestinal microbiota metabolic profile, such as a decrease in specific metabolites and a loss of metabolite diversity, will negatively affect the host immune response. Studies on patients with osteoarthritis found that gut microbiome dysbiosis is involved in bacterial metabolite dysbiosis and joint degeneration [42], and similar phenomena have been discovered for RA [43]. Interactions between the intestinal microbiota and immune system have been shown to promote and sustain autoimmune rheumatic diseases [44]. Alterations in the function and metabolites of the intestinal microbiome, especially those involving immune-related inflammatory complexes or miRNA metabolites, can cause local or systemic pathophysiologic responses in the host [45,46], which are thought to be associated with the onset of RA in susceptible individuals. Therefore, the intestinal microbiome may be the most influential non-heritable inducer of RA outside the joints.
Intestinal microbiome dysbiosis and abnormal miRNA profiles accompany RA development
Although many factors contribute to RA development, the intestinal microbiome has recently been identified as an important pathogenic factor in RA initiation and progression. The contribution of microbiome dysbiosis to RA immunopathogenesis was first reported comprehensively by Zhang et al. [18], who used metagenomics to analyze the structure and function of the intestinal microbiota of RA patients in comparison with that of healthy populations (including immediate relatives and unrelated individuals). They found that Haemophilus sp. was enriched in the oral and intestinal flora of healthy controls, and its abundance in the patient group was inversely proportional to the titer of RA autoimmune antibodies; that Lactobacillus salivarius was enriched in the plaque, saliva, and stool of RA patients, especially those with highly active disease; and that, compared with healthy controls, the abundances of some functional genes in the oral and intestinal microorganisms of RA patients were significantly different (including genes related to the transport and metabolism of iron, sulfur, zinc, and arginine, and to citrulline cyclization, which are associated with RA). These findings suggest that abnormalities in the abundance of these functional genes play an important role in the main pathophysiologic processes of RA. In summary, the intestinal microbiome, as well as the products of its co-metabolism with the host, can induce host autoimmune diseases and affect RA development.
Abnormal miRNA profiles play a pivotal role in the pathogenesis of many joint injury diseases [47]. Using RNA-seq technology, 63 miRNAs were found to be differentially expressed in the peripheral blood mononuclear cells of RA patients compared with healthy controls [48]. Lower miRNA-31 levels were also observed in the synovial tissues of RA patients compared with controls; synovial tissue miRNA-31 is important for RA-induced synovial cell apoptosis [49]. Decreased expression levels of several microRNAs (miR-139-3p, miR-204, miR-760, miR-524-5p, miR-136, miR-548d-3p, miR-214, miR-383, and miR-887) in T cells are also involved in RA immunopathogenesis [50]. Additionally, miR-146a is up-regulated in CD4+ T cells from RA patients. The miRNAs reported to be associated with UA/RA are shown in Table 1.
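As a schematic illustration of how such differentially expressed miRNAs are screened from expression data, the sketch below applies a fold-change plus Welch's t-test filter to log2-normalized expression matrices. This is a minimal, generic filter with hypothetical thresholds, not the pipeline used in the cited studies, which would typically also involve count-based models (e.g., DESeq2 or edgeR) and multiple-testing correction.

```python
import numpy as np
from scipy import stats

def flag_diff_expressed(expr_ra, expr_hc, fc_thresh=2.0, p_thresh=0.05):
    """Flag differentially expressed miRNAs between RA and control groups.

    expr_ra, expr_hc: arrays of shape (n_mirnas, n_samples) holding
    log2-normalized expression values. Returns a boolean mask over miRNAs.
    """
    log2_fc = expr_ra.mean(axis=1) - expr_hc.mean(axis=1)
    # Welch's t-test per miRNA row (does not assume equal variances)
    _, p = stats.ttest_ind(expr_ra, expr_hc, axis=1, equal_var=False)
    return (np.abs(log2_fc) >= np.log2(fc_thresh)) & (p < p_thresh)

# Toy data: 100 miRNAs, 12 RA samples vs. 12 controls (synthetic values).
rng = np.random.default_rng(0)
ra = rng.normal(5.0, 1.0, size=(100, 12))
hc = rng.normal(5.0, 1.0, size=(100, 12))
ra[:5] += 2.0  # spike in five "up-regulated" miRNAs
mask = flag_diff_expressed(ra, hc)
print(mask.sum(), "miRNAs flagged")
```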
Because of their stability, non-invasiveness, and sensitivity, abnormally expressed miRNAs might be useful for disease diagnosis [63][64][65]. The serum levels of miR-16-5p, miR-23-3p, miR-125b-5p, miR-126-3p, miR-146a-5p, and miR-223-3p in RA patients were identified as potential novel biomarkers for predicting and monitoring the outcomes of anti-TNFα/DMARD combination therapies [66]. The disease specificity of altered miRNA expression profiles is an advantage for their use in the early diagnosis of many diseases. For example, miRNA profiles can be used to distinguish Kashin-Beck disease from osteoarthritis and RA, diseases with clinical manifestations similar to those of Kashin-Beck disease [47]. The value of therapeutically targeting miRNA has also been demonstrated in various disease models [67]. Further studies with large samples and cell experiments are needed to confirm the therapeutic efficacy of miRNA targeting.
Studies on the role of microbiome dysbiosis in RA development have almost invariably focused on exploring the interaction network between the microbiome, its metabolites, and host immune and miRNA profiles. However, an etiopathogenic role of specific bacteria cannot be inferred by association alone. Therefore, integrating multi-omics studies on RA immunopathogenesis will be important for elucidating targetable mechanisms in cases of preclinical and established RA.
Bidirectional regulation between intestinal flora and miRNA
The intestinal microbiome composition varies widely among individuals; however, within an individual, the composition of the intestinal microbiome is relatively stable, and the structure of its core communities does not change with temporary changes in diet and lifestyle [68,69]. Human microbiome research generally focuses on the mechanisms that selectively shape the intestinal microbiota. Notably, the intestinal microbiota not only regulates the transcription of host miRNA but also affects the post-transcriptional modification of some genes [70], thus inducing host pathophysiologic responses; conversely, host miRNA can shape the composition of the intestinal microbiome and regulate the transcription and expression of intestinal microbial genes. The intestinal microbiome has been shown to affect the emotions, social abilities, and cognitive deficits of germ-free mice by changing the expression of anxiety-related miRNA in the brain; depression-related behaviors could also be induced in this manner and later resolved by intervention with bacteria [70]. Importantly, such interventions restored the miRNA expression profile to normal, suggesting that the intestinal microbiome can regulate the expression of host extraintestinal miRNA and trigger a pathophysiologic response. Tryptophan-derived metabolites produced by the intestinal microbiota can influence miRNA expression in murine white adipose tissue, which is related to the inflammatory pathology of this tissue [71]. Host miRNA regulation by the intestinal microbiome was also found to affect host growth and development [72].
Notably, miRNA can also shape the intestinal microbiota composition and regulate the activity of intestinal bacterial genes [73]. Because intestinal miRNA produced by the host plays an important role in shaping the structure and function of the intestinal microbiome and is closely related to human health, miRNA has been proposed as a key molecule with which the host regulates the intestinal microbiota [73]. Liu et al. [73] screened and identified miRNA isolated from murine and human feces using NanoString digital spatial profiling technology; their work revealed that host extracellular miRNA, secreted by small intestinal epithelial cells and Hopx-positive cells in mice and humans, could selectively enter bacteria (such as Fusobacterium nucleatum and Escherichia coli) to regulate the transcription and expression of bacterial genes, thus affecting intestinal bacterial growth and shaping the composition of the intestinal flora. When these researchers specifically knocked out Dicer, an enzyme responsible for miRNA processing, in murine small intestinal epithelial cells and Hopx-positive cells, fecal miRNA was reduced, and the mice showed symptoms of uncontrolled intestinal bacterial growth and aggravated colitis. Transplantation of intestinal miRNA from normal mice into these defective mice restored the intestinal microecological balance and improved the physical condition of the animals. Additionally, miRNA regulation of microbial gene expression and growth has also been reported in neurodegenerative diseases [74].
Host miRNA action provides an important mechanism for maintaining intestinal microbial homeostasis. In addition to the intestinal flora being related to host extraintestinal immune function, it is also capable of affecting host extraintestinal miRNA expression; this relationship is known as the "microbiome-miRNA axis" (Fig. 1). Its roles in the pathophysiology of immune health and disease were discussed by Li et al. [75], who suggested that it represents a promising new approach for developing valuable diagnostic tools for UA/RA.
Studies on the mechanisms of development of host extraintestinal diseases have found that the intestinal microbiome is related to the host miRNA regulatory network. Abnormal changes in miRNA profiles are related to a variety of diseases, and their role in arthritis development is clear. Manipulating the intestinal microbiota and miRNA may therefore improve treatment for this disease.
Predicting RA development in UA patients to prevent RA
Administering DMARDs to patients early in the course of arthritic disease is beneficial for relieving disease activity and radiographic joint damage [76]. Thus, it is vital for clinicians to identify patients with UA whose disease would evolve into RA if left untreated and to implement an appropriate treatment strategy. Such patients may exhibit a particular clinical presentation during the process of UA evolving into RA [2]. This theoretical presentation could be used to predict the evolution of RA in UA patient cohorts. However, once UA patients have obvious symptoms of joint damage visible on radiographic examination, it is too late for disease intervention. Consequently, much research has focused on attempting to predict the prognosis of UA and on identifying the early inducing factors of RA [77].
In the preclinical stages of RA, even before synovial biopsy and joint MRI show joint tissue damage, antibodies to cyclic citrullinated peptide (anti-CCP) and rheumatoid factor (RF) are already detectable in the peripheral blood [78]. Furthermore, antibody titer and epitope specificity increase, and proinflammatory cytokine levels are abnormally high, a few months before obvious disease, i.e., synovitis, appears [79]. Together, these findings indicate that patients develop autoimmune disorders before developing joint injuries. Although many studies have tried to describe the pathologic history of UA/RA, the field remains in need of non-invasive, stable, sensitive biomarkers that specifically identify the subgroup of patients with UA who will develop RA.
Multicenter studies with larger cohorts that investigate shifts in the clinical variables of patients with different UA prognoses will be vital for predicting future RA development in UA cohorts. Some clinical variables, such as anti-CCP levels, polyarthritis, symmetric arthritis, and erosions visible on radiographs, have the potential to predict future RA development in UA cohorts. For example, the Leiden prediction rule, which takes into account the tender joint count, duration of morning stiffness, and duration of arthritis, was reported to have a potential role in predicting RA development [80]. However, the presence of anti-CCP, duration of morning stiffness, number of swollen joints, radiographic progression, modified disease activity score (DAS), and percentage of RF-positive individuals were similar between RA patients who initially presented with UA and those who presented with RA directly [2]. To better prevent RA development and progression, ideal biomarkers should dynamically and specifically reflect the disease pathology [81] and be capable of clearly distinguishing among UA patients who will undergo different disease evolutions.
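To make the idea of such a clinical prediction rule concrete, the sketch below shows the generic form these rules take: a weighted sum of clinical variables compared against a decision threshold. The variables loosely follow those named above for the Leiden rule, but the weights and cutoff here are purely hypothetical placeholders; the actual coefficients and thresholds of the published Leiden rule should be taken from the original report [80], not from this illustration.

```python
from dataclasses import dataclass

@dataclass
class UAPresentation:
    tender_joint_count: int
    morning_stiffness_hours: float
    arthritis_duration_weeks: float
    anti_ccp_positive: bool

# Hypothetical weights for illustration only --
# NOT the published Leiden prediction rule coefficients.
WEIGHTS = {
    "tender_joints": 0.1,
    "stiffness": 0.5,
    "duration": 0.05,
    "anti_ccp": 2.0,
}

def ra_risk_score(p: UAPresentation) -> float:
    """Generic weighted clinical score of the kind used by prediction rules."""
    return (WEIGHTS["tender_joints"] * p.tender_joint_count
            + WEIGHTS["stiffness"] * p.morning_stiffness_hours
            + WEIGHTS["duration"] * p.arthritis_duration_weeks
            + WEIGHTS["anti_ccp"] * p.anti_ccp_positive)

patient = UAPresentation(tender_joint_count=6, morning_stiffness_hours=1.5,
                         arthritis_duration_weeks=10, anti_ccp_positive=True)
score = ra_risk_score(patient)
print("high risk" if score > 4.0 else "indeterminate")  # illustrative threshold
```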
Many investigations of RA patients have revealed microbiome dysbiosis and abnormal miRNA profiles in these individuals. Emerging evidence suggests a bidirectional regulatory mechanism between the intestinal microbiota and miRNA in patients with UA/RA during the presentation of UA disease, and the intestinal microbiome may affect an individual's UA prognosis. Additionally, the microbiome and miRNA are among the markers with the highest specificities and positive predictive values for human health and disease states [82].
Conclusion
The role of the microbiome and miRNA in the process of UA evolving into RA is an active area of RA research, and the mechanism of their interaction is still unclear. An increased understanding of how these two factors interact and of their involvement in disease progression may provide mechanistic insight into RA development and lead to improved treatments for modifying UA and preventing RA. Additionally, we speculate that the miRNA profile, as well as the microbiome composition and function, differs between the subgroup of UA patients who progress to RA and those who present directly with RA.
Although key alterations in the oral and intestinal microbiomes have been demonstrated in patients who present with RA, the natural microbiome characteristics in patients who present with UA and subsequently develop RA are unknown, as are the shifts that occur during this progression. Therefore, we recommend that additional research be conducted on the abnormal alterations (dysbiosis) in the intestinal microbiome and miRNA of individuals as their UA evolves into RA.
Conflict of interests
The authors declare that they have no conflicts of interest.
Ethics approval This article does not contain any studies with human participants or animals performed by any of the authors.
Informed consent No part of the manuscript is copied or published in whole or in part elsewhere. Its publication has been approved by all co-authors.
Fig. 1 The hypotheses of microbiome-miRNA interactions in the progress from undifferentiated arthritis to rheumatoid arthritis.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-04-15T14:06:45.235Z | 2021-04-15T00:00:00.000 | {
"year": 2021,
"sha1": "42291646bae572760937827412a623c2f0a181a9",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00296-021-04798-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "42291646bae572760937827412a623c2f0a181a9",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |