Dataset schema: id (string, 3-9 chars), source (string, 1 class), version (string, 1 class), text (string, 1.54k-298k chars), added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25), created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00), metadata (dict).

id: 245068662
source: pes2o/s2orc
version: v3-fos-license
Dynamics of Mid-Channel Bar during Different Impoundment Periods of the Three Gorges Reservoir Area in China : The dynamics of the mid-channel bars (MCBs) in the Three Gorges Reservoir (TGR) were substantially impacted by the large water-level changes due to the impoundments of the TGR. However, it is still not clear how the morphology of the MCBs changed under the influence of water level and hydrological regime changes induced by the impoundments and operation of the TGR. In this work, the MCBs in the TGR were retrieved using Landsat remote sensing images from 1989 to 2019, and the spatio-temporal variations in the number, area, morphology and location of the MCBs during different impoundment periods were investigated. The results showed that the number and area of MCBs changed dramatically with water-level changes, and the changes were dominated by MCBs with an area less than 0.03 km 2 and larger than 1 km 2 . The area of MCBs decreased progressively with the rising water level, and the number generally showed a decreasing trend, with the minimum number occurring at the third stage when the water level reached 139 m, resulting in the maximum average area at this period. The ratio of length to width of the MCBs generally decreased with the changes in hydrological and sediment regimes, leading to a shape adjustment from narrow–long to relatively short–round with the rising of the water level. The water impoundments of the TGR led to the migration of the dominant area from the upper section to the middle section of the TGR and resulted in a more even distribution of MCBs in the TGR. The results improve our understanding of the mechanisms of the development of MCBs in the TGR under the influence of water impoundment coupled with the annually cyclic hydrological regime and longer periods of inundation and exposure. Introduction The mid-channel bar (MCB) is formed by favorable hydrological conditions in the river. It is a stable island above the river's water level, formed by the gradual development and shaping of the river siltation over a long period [1,2]. The development of MCB is influenced by exogenous materials such as sediment, the transport capacity of flowing water and the sediment concentration, as well as by dam construction and reservoir regulations [3][4][5][6]. The dynamics of MCBs were significantly impacted by the water-level changes due to the impoundment of the TGR. [7,8]. On the one hand, some original MCBs in the Yangtze River were submerged, while some new MCBs were formed from the inundation of low-lying lands and point bars by the reservoir, due to the rising of the water level [9,10]. On the other hand, some MCBs were periodically exposed and submerged due to the annually cyclic hydrological regime induced by the TGR operation. Therefore, the morphology of the MCBs in the TGR changed significantly under the influence of these hydrological regime changes, which are expected to have an important impact on channel stability, water-land interactions and biological diversity [11][12][13]. Over the past century, great efforts have been made to investigate the formation and development processes of MCBs in bifurcated channel stretches, using field observations [14], remote sensing [5], theoretical generalized models and mathematical models [15][16][17]. Many experiments have also been conducted to explore the morphological dynamics of MCBs [18][19][20]. However, most of the studies are based on ideal environmental conditions, including constant flow, slope, etc. 
Furthermore, many other environmental factors affecting the development of MCBs were not considered [21,22]. In recent years, the rapid development of numerical simulation technology, remote sensing and spatial analysis in geographic information science has provided an opportunity to monitor and model the dynamics of MCBs at multi-spatial and multi-temporal scales [23,24]. Schuurman et al. [15] generated datasets of water depth, flow and sediment transport of MCBs based on physical models, and further developed a conceptual network model describing the interactions of MCBs, sub-branches and river channels. Liu et al. [25] investigated the proportion of riverine sand partitioning when MCBs reached their stable equilibrium form, using an analytical hydrodynamics method. Rasbold et al. [26] identified the development signatures of MCBs based on the theory of sedimentology. Adami et al. [22] used wavelength, migration rate and height to investigate the spatio-temporal variations of the morphological dynamics of the MCBs in the Alpine Rhine over the last 30 years. As the longest river in China, the Yangtze River has an important strategic position and a role in boosting the development of the cities along its length [27]. The morphological development of MCBs in the Yangtze River is of great significance in maintaining the stability of the river and enhancing the function of the "golden channel" [28]. However, the construction of the Three Gorges Dam (TGD) has significantly changed the hydrological and sediment regimes downstream of the TGD over the last 30 years, altering the hydrological conditions for the development of the MCBs [28][29][30]. Based on long-term observations, multi-temporal remote sensing data and model simulations, many studies have been conducted to monitor the changes in the MCBs in the middle and lower reaches of the Yangtze River [31,32]. The results showed significant morphological changes in the MCBs after the TGD operation [5,33], and revealed the process [28] and mechanism for the development of MCBs [33] downstream of the TGD. In contrast, under the influence of the annually cyclic hydrological regime and of longer inundation and exposure periods induced by the TGR operation, the morphological development process of MCBs in the TGR and their response to hydrological and sediment regime changes differs greatly from those downstream of the TGD [34]. However, due to the lack of relevant studies, it is still not clear how the morphology of the MCBs changes under the influence of water level and hydrological regime changes induced by the impoundments and operation of the TGR. Therefore, this study was carried out to fill the knowledge gap. The main objectives of this study were to: (1) retrieve the MCBs from Landsat images and construct datasets of morphological changes of MCBs in the TGR; (2) investigate the spatio-temporal variations in the numbers, areas, morphology and locations of the MCBs during different impoundment periods. The study helps to reveal the mechanisms for the development of MCBs in the TGR; it also offers a scientific basis for the planning, optimal utilization and ecological restoration of the MCBs in the TGR. Study Area The Three Gorges Reservoir (TGR) is located in the lower section of the main waterway in the upper reaches of the Yangtze River, which is a typical mountainous river. It extends from Jiangjin District in Chongqing to Yichang City in Hubei Province, from west to east ( Figure 1). 
The topography of the Three Gorges Reservoir Area (TGRA) is dominated by mountains and hills. The TGRA has a subtropical monsoon climate with an average annual temperature of 17-19 °C and annual precipitation of 1000-1800 mm [35]. After the official operation of the TGR, it formed a narrow-valley reservoir with a total length of 660 km and a surface area of 1084 km². The TGR lies between 106°16′ E and 111°28′ E (longitude) and between 28°56′ N and 31°44′ N (latitude), and includes 25 districts and counties in Hubei Province and Chongqing Municipality.

Division of Impoundment Periods

The upper reaches of the Yangtze River were successfully intercepted by the TGD in 1997, raising the reservoir water level to approximately 66 m above sea level. The TGR then began storing water in steps, from 139 m in June 2003, to 156 m in October 2006 and 175 m in November 2009. It was officially operated after one year of experimental water storage, in 2009 [36]. Thus, based on the construction phase and the changes in the water levels of the TGD (Figure 2), five stages were identified to investigate the morphological changes in the MCBs.
Data Collection

The spatial information of the MCBs was retrieved using multi-temporal Landsat images. Since the TGR had been in operation for more than twenty years from the interception of the Yangtze River, three criteria were employed to ensure data consistency regarding the spatio-temporal resolution and retrieval accuracy for the MCBs. Firstly, images with less cloud cover were used where possible [5,23]. Secondly, images acquired during the dry season (i.e., November to March) were used, to reduce the difference in water levels during the different stages. Lastly, due to the influence of the 16-day revisit cycle of the Landsat satellite, it was difficult to obtain images with the same water levels for different strips; therefore, images with similar water levels were obtained as far as possible. A total of 36 images with a spatial resolution of 30 m were collected following these criteria. These images, spanning the 39th-42nd strips of the Landsat satellite, were obtained from the Chinese Geospatial Data Cloud (http://www.gscloud.cn/). The data preprocessing, including single-band extraction, false color synthesis and geometric correction, was carried out using ENVI 5.3 software. Many previous studies have proved that these images can be used to monitor landscape dynamics with reasonable accuracy [5,23,24]. Detailed information on the collected image data is shown in Table 1. The data on the water level, sediment and siltation of the TGR were obtained from the Yangtze River Sediment Bulletin (http://www.cjw.gov.cn/zwzc/bmgb/) and the Yangtze River Three Gorges Group (https://www.ctg.com./sxjt/sqqk/index.html).

Retrieval of MCBs from Landsat Images

MCBs were retrieved from Landsat images using auto-classification coupled with manual inspection and digitization. They were initially auto-retrieved using the modified normalized difference water index (MNDWI) developed by Xu [37]. This index was derived from the normalized difference water index (NDWI), which highlights the water information in the image by normalizing the spectral difference between the green band and the mid-infrared band [38]. The MNDWI has been proved to be an effective method for retrieval of the MCBs with reasonable accuracy [5,23]. The MNDWI was calculated as:

MNDWI = (ρ_Green − ρ_MIR) / (ρ_Green + ρ_MIR)

where ρ_Green and ρ_MIR are the reflectances of the green band and mid-infrared band [37], respectively. Due to differences in the spectral features among different images, some MCBs were misclassified as other landscapes, while some other landscapes were misclassified as MCBs since they had similar spectral features. Thus, the automatically retrieved vector data of MCBs needed to be manually modified and verified by combining them with observed data such as water levels, hydrological data and land use data. Such modification can significantly improve the retrieval accuracy of MCBs from Landsat images.
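To make the retrieval step concrete, the following is a minimal Python sketch (not the authors' actual workflow) of MNDWI-based water masking and candidate MCB extraction; the file names, band assignment, the zero threshold for water, and the connected-component step are illustrative assumptions.

```python
import numpy as np
import rasterio
from scipy import ndimage

def mndwi(green, mir):
    """Modified normalized difference water index: (green - MIR) / (green + MIR)."""
    green = green.astype("float32")
    mir = mir.astype("float32")
    return (green - mir) / (green + mir + 1e-6)  # small constant avoids division by zero

# Hypothetical single-band GeoTIFFs exported after preprocessing in ENVI.
with rasterio.open("landsat_green.tif") as g, rasterio.open("landsat_mir.tif") as m:
    index = mndwi(g.read(1), m.read(1))

water = index > 0   # assumed threshold: positive MNDWI treated as water
land = ~water       # mid-channel bars are land pixels surrounded by water

# Candidate MCBs = connected land regions; as described in the text, these would
# still need manual inspection against water-level, hydrological and land-use data.
labels, n = ndimage.label(land)
sizes_km2 = ndimage.sum(land, labels, range(1, n + 1)) * (30 * 30) / 1e6  # 30 m pixels
```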
Classification of MCBs

A field survey showed that the areas of MCBs varied greatly in the TGR, and previous studies found that MCBs of different sizes responded differently to changes in the hydrological and sediment regimes [5]; thus, all the MCBs were reclassified into four types based on area: small MCBs with an area less than 0.03 km² (SMB), medium MCBs with an area of 0.03-0.1 km² (MMB), medium-large MCBs with an area of 0.1-1 km² (LMMB) and large MCBs with an area greater than 1 km² (LMB).

Index of Area and Shape

The area and perimeter of a single MCB can be directly calculated from the vector data of MCBs using ArcGIS software. The length and width changes of MCBs can reflect their adjustment to the changes in hydrological and sediment regimes [39]. The ratio of length to width (LWR) was used as a comprehensive index to investigate the morphological characteristics of, and changes in, MCBs [5,23]. It was calculated as:

LWR = L / W

where L and W are the length and width of the MCB, respectively.

The Coefficient of Variation (CV)

The CV was used to measure the spatial variability of the morphological characteristics of MCBs. It has been proved to be a useful indicator for investigating the variations in spatial features and is widely used in landscape ecology [5]. It was calculated as:

CV = (1 / x̄) × sqrt( Σ_{i=1..n} (x_i − x̄)² / n )

where x_i, x̄ and n are the LWR of each MCB, the average LWR and the number of MCBs, respectively.

The Gravity Center Shifting Model

The gravity center shifting model was used for investigating the spatial change trends of MCBs during different impoundment stages [40]:

X_s = Σ_{i=1..n} (A_si × x_i) / Σ_{i=1..n} A_si,   Y_s = Σ_{i=1..n} (A_si × y_i) / Σ_{i=1..n} A_si

where X_s and Y_s are the latitude and longitude of the gravity center of all the MCBs at stage s, respectively, A_si is the area of the ith MCB at stage s, x_i and y_i are the latitude and longitude of the geometric center of the ith MCB, respectively, and n is the total number of MCBs. The following equation was used for calculating the shifting distance of the gravity center:

D_{s′−s} = sqrt( (X_s′ − X_s)² + (Y_s′ − Y_s)² )

where D_{s′−s} is the shifting distance of the gravity center, X_s′ and Y_s′ are the latitude and longitude of the gravity center of all the MCBs at stage s′, respectively, and X_s and Y_s are the latitude and longitude of the gravity center of all the MCBs at stage s, respectively.
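As an illustration of how these indices could be computed from the digitized MCB polygons, here is a short Python sketch; the `stages` records and the degree-to-kilometre conversion factor (~111 km per degree) are assumptions made for the example, not values taken from the paper.

```python
import math

# Hypothetical records: one dict per MCB with area (km^2), length/width (km),
# and the lat/lon of its geometric center, grouped by impoundment stage.
stages = {
    3: [{"area": 0.45, "length": 1.2, "width": 0.5, "lat": 30.8, "lon": 108.4}],
    4: [{"area": 0.40, "length": 1.0, "width": 0.5, "lat": 30.9, "lon": 108.6}],
}

def lwr(mcb):
    # Length-to-width ratio of a single bar.
    return mcb["length"] / mcb["width"]

def cv_of_lwr(mcbs):
    # Coefficient of variation of the LWR: standard deviation divided by the mean.
    vals = [lwr(m) for m in mcbs]
    mean = sum(vals) / len(vals)
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals)) / mean

def gravity_center(mcbs):
    # Area-weighted centroid of all bars at one stage.
    total = sum(m["area"] for m in mcbs)
    lat = sum(m["area"] * m["lat"] for m in mcbs) / total
    lon = sum(m["area"] * m["lon"] for m in mcbs) / total
    return lat, lon

def shift_km(c1, c2, km_per_degree=111.0):
    # Euclidean distance between two gravity centers, converted to kilometres
    # (assumed conversion for small distances at mid-latitudes).
    return km_per_degree * math.hypot(c2[0] - c1[0], c2[1] - c1[1])

d = shift_km(gravity_center(stages[3]), gravity_center(stages[4]))
```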
Variations in Area and Number of MCBs

Retrieved results from Landsat images showed significant variations in the area and number of MCBs in the TGR (Figure 3). The area and number presented different trends with respect to the changes in water level during different impoundment periods. The number of MCBs ranged between 89 and 150, with an average of 113 in the TGR; 90 MCBs were located in the main stream and accounted for 79.7% of the total number, and the remaining MCBs were located in tributaries (Figure 3a). The number and area of MCBs changed dramatically with water-level changes (Figure 3). The maximum number of MCBs occurred at stage 1 under the natural hydrological regime. The number decreased sharply to 89 at stage 3 when the water level reached 139 m, while it increased to 103 at stage 4 when the water level rose to 156 m and slightly declined to 99 at stage 5 when the water level increased to 175 m. The number trend for MCBs in the main stream differed greatly from that in tributaries. In the main stream it decreased progressively from 139 at stage 1 to 54 at stage 5. In tributaries the number slightly increased from stage 1 to stage 2 and then slightly decreased at stage 3, before increasing sharply from a minimum of 11 at stage 3 to a maximum of 45 at stage 5. The area of MCBs in the TGR varied greatly from 0.2 to 1134 (×10⁻² km²), with the average area ranging between 31.69 (×10⁻² km²) at stage 4 and 44.42 (×10⁻² km²) at stage 3. The average area in the main stream was much higher than that in tributaries, with maximum values of 54.19 (×10⁻² km²) and 6.4 (×10⁻² km²), respectively (Figure 3c). The total area decreased progressively from a maximum of 4910.56 (×10⁻² km²) at stage 1 to a minimum of 3214.6 (×10⁻² km²) at stage 5 with the rising of the water level, and the most obvious changes occurred between stage 2 and stage 4 (Figure 3b). The total area in the main stream presented a similar trend to that in the whole reservoir. The total area in tributaries changed slightly from stage 1 to stage 4 and then increased sharply to a maximum of 288.07 (×10⁻² km²) at stage 5.

The area and number changes in MCBs for the different classes are presented in Figure 4. In terms of number, the MCBs in the TGR were dominated by SMBs followed by MMBs, accounting for 55.6% and 20.9% of the total number on average, respectively. In contrast, in terms of area the MCBs were dominated by LMBs followed by LMMBs, accounting for 80.2% and 14.8% of the total area on average, respectively. The number and area variations in the MCBs differed greatly among sizes, with the most obvious changes in number and area appearing in SMBs (Figure 4a) and LMBs (Figure 4b), respectively. The number of SMBs dramatically decreased from stage 1 to stage 3, then increased sharply to stage 5, which was the main reason for the sharp change in the total number in tributaries from stage 4 to stage 5. The numbers of MMBs, LMMBs and LMBs generally showed a similar decreasing trend as the water level rose. The trend for area changes in MCBs in the TGR was generally determined by that of LMBs, due to the dominant role of LMBs in the total area. MMBs presented a similar trend to LMMBs regarding area, with an increase from stage 1 to stage 2 and a significantly decreasing trend from stage 2 to stage 5.
Morphological Changes of MCBs

The temporal variation of the LWR is presented in Figure 5. The LWR of MCBs in the TGR ranged between 2.09 and 3.05, with an average of 2.56. The changing LWR generally indicates a morphological adjustment of MCBs following the changes in hydrological and sediment regimes induced by the operation of the TGR. The LWR increased from stage 1 to a peak at stage 2, and then decreased from stage 2 to a minimum at stage 5.
This suggests that the morphology of MCBs tended to change from a narrow-long shape to a short-round shape with the rising of the water level. On average, LMMBs had the highest LWR followed by MMBs, while the lowest LWR was observed for SMBs, generally indicating that the LMMBs and MMBs tended to be narrow-long in shape, while the SMBs tended to be short-round. The LWR variation of MCBs differed greatly among the different classes, with the most obvious change occurring in LMBs, suggesting that the effect of impoundment on the morphology of LMBs was more pronounced than the effect on SMBs, MMBs and LMMBs. The effect for LMBs was relatively small from stage 1 to stage 2, resulting in only slight changes in LWR, while a significant effect appeared at stage 3 when the water level rose to 139 m. MMBs showed a similar trend to LMMBs regarding the LWR, with an increase from stage 1 to stage 2 and a decreasing trend from stage 3 to stage 5. Compared to LMBs, MMBs and LMMBs, SMBs had a more stable LWR with lower fluctuations, probably because SMBs often had a shorter development time and a short-round morphology.

The LWR stability of the MCBs is shown in Figure 6. Generally, the CV decreased with an increase in the area of the MCB, indicating that larger MCBs tended to have a more stable morphology, and vice versa. The MCBs in the river, with natural hydrological and sediment regimes, often showed large morphological changes due to the large variations in erosion and siltation, resulting in a higher CV at stage 1. However, the conversion from natural river to man-made reservoir resulted in a rise in the water level, which significantly weakened the hydrodynamic conditions, leading to the CV changing from being scattered to being more clustered as the water level rose from stage 1 to stage 5, as can be seen in Figure 6. This change was more evident for the LMBs.

Spatial Distribution of MCBs in TGR

The spatial distributions of the number and area of MCBs in the TGR are shown in Figures 7 and 8, respectively.
It can be seen that the MCBs were unevenly distributed spatially. They were mostly distributed in the upper section of the TGR, at 300 km from the TGD, accounting for 96.7% and 99.4% of the total number (Figure 7a) and area (Figure 8a) at stage 1. The distributions of the MCBs at the second stage (Figures 7b and 8b) were generally similar to those at stage 1. The number and area of MCBs in the upper section of the TGR were reduced mainly due to the influence of human activities such as sand mining, which contributed mostly to the decrease in the number of MCBs from stage 1 to stage 2. A total of 41 MCBs with a total area of 1068.03 (×10⁻² km²) disappeared in the section of the TGR from 250 to 500 km from the TGD (Figure 7c), due to the inundation occurring when the water level rose to 139 m at stage 3. The number of MCBs changed dramatically in the whole reservoir as the water level increased from 139 m to 156 m at stage 4 (Figure 7d). In the section from the dam to 400 km from the dam, 40 new MCBs formed following the inundation of low-lying mountain tops by the reservoir, while the area changed slightly (Figure 8d); meanwhile, 26 MCBs disappeared in the section from 400 to 660 km from the dam due to the inundation when the water level rose, leading to a drastic decline (Figure 7e) and resulting in the number proportion increasing from 6.3% at stage 3 to 48.5% at stage 5, and the area proportion increasing to 44.1% (Figure 8e). However, 21 MCBs with a total area of up to 1287.11 (×10⁻² km²) disappeared in the section from 500 to 660 km from the TGD due to the inundation when the water level rose to 175 m. The spatial variation of MCBs during the different stages was caused mainly by the rising water level and the expansion of the inundation area created by the water impoundments of the TGR. The results suggest that the water impoundments of the TGR had led to the migration of the dominant area from the upper to the middle section of the TGR, resulting in a more even distribution of MCBs in the TGR.

Gravity Center Migration of MCBs

The area-weighted gravity center of the MCBs was compared during the different impoundment periods, and the migration routes are presented in Figure 9. Generally, the spatial location of the gravity center of the MCBs varied obviously with the water-level changes during the different water impoundment periods. The gravity center migrated 2.04 km south-westwards after the interception of the Yangtze River at stage 2. It continued to move south-westwards with the rising of the water level, with migration distances of 25.68 km and 9.39 km when the water level reached 139 m at stage 3 and 156 m at stage 4, respectively. The migration direction of MCBs was consistent with the tail direction of the TGR, which was significantly associated with the expansion of the inundation area towards the tail of the TGR induced by the rising water level. However, many new MCBs formed in the middle section of the TGR, owing to the inundation of low-lying mountain tops by the reservoir, induced by the water level rising to 175 m at stage 5. This resulted in a notable migration of the gravity center to the middle section of the TGR. As can be seen from Figure 9, the gravity center moved 70.63 km north-eastwards from stage 4 to stage 5, which was opposite to the migrations from stage 1 to stage 4.
Discussion

The Yangtze River had a natural hydrological and sediment regime before the construction of the TGD, where the MCBs developed with a relatively stable balance between erosion and siltation [37]. However, the construction of the TGD has significantly changed the hydrological and sediment regimes of the Yangtze River over the past decades, which has seriously disrupted this balance.
Many previous observations and studies have found tremendous riverbed erosion in the middle and lower reaches of the Yangtze River, mainly due to the interception of sediment and the discharge of clear water since the initial impoundment of the TGD [30,32,41]. The degree of erosion became weaker as the distance from the TGD increased [5,23,28]. The MCBs in the TGR varied dramatically under the influence of notable changes in the hydrological and sediment regimes of the TGR induced by the weakened hydrodynamic conditions and rising water levels from stage 1 to stage 5 [36,42]. The surface area of the TGR has expanded as the water level rose since the initial impoundment of the TGD [42,43]. This resulted in a large portion of the previous MCBs being submerged, contributing to a subsequent reduction in the areas of the exposed MCBs. Meanwhile, many new MCBs formed from the inundation of point bars and low-lying mountain tops by the reservoir (Figure 10). A large number of new MCBs appeared, especially in the area of the TGR with the largest fluctuations in water level, in Kaizhou county (Figure 11).

Human activities such as sand mining caused an unsaturated sediment transport capacity of the flow in the TGR (Figure 12) before stage 3, which led to poor stability of the riverbed and MCBs in the TGR. In addition, the conversion from river to reservoir weakened the hydrodynamic conditions when the water level reached 139 m at stage 3. The flow rate slowed, a great deal of sediment was trapped in the reservoir, and the water level became the dominant influence on the MCBs [43]. The morphology of the naturally developed MCBs often took typical forms such as oval, bamboo-leaf and sickle-shaped forms. However, the morphology of MCBs changed greatly as the water level rose, regulated by the man-made dam [6,7].
Due to the coupled effect of natural hydrological and sediment regimes and water-level changes regulated by the TGD, the MCBs in the fluctuating backwater zone of the TGR were more affected by siltation than those in the perennial backwater zones. Thus, more attention and protection should be paid to MCBs in fluctuating backwater zones.

However, limitations and uncertainties associated with the remote sensing image data may exist and need to be addressed in future work. In the data acquisition stage, we did our best to collect Landsat images with similar acquisition times and water levels to reduce the potential effect on the morphology, numbers and areas of the retrieved MCBs. However, it is very difficult to obtain ideal image data, mainly due to the influence of cloud cover, the differences in spectral features and the revisit cycle of the Landsat satellite, leading to a certain degree of uncertainty in the analysis and results. Thus, more remote sensing images need to be collected from other platforms to reduce the influence of acquisition times, spectral differences and water levels. In the context of the "storing clean water and discharging sandy water" operation schedule [7,8], and under the influence of the annually cyclic hydrological regime and the longer inundation and exposure induced by the TGR operation, the water level of the TGR fluctuated dramatically between 145 and 175 m, with rising periods from September to January of the following year and falling periods from January to September. The present study only investigated the dynamics of MCBs from October to January. It will be necessary to further investigate the changes in MCBs throughout the year. Due to the coupled effect of natural hydrological and sediment regimes and water-level changes regulated by the TGD, the development mechanism of MCBs in the fluctuating backwater zones differed from that in the perennial backwater zones, being mainly influenced by the water-level changes regulated by the TGD. This was not explored in this study.
Nevertheless, the scientific findings help to reveal the mechanisms of the development of MCBs in the TGR and can also offer a scientific basis for planning, optimal utilization and ecological restoration of the MCBs in the TGR. Conclusions This work investigated the spatio-temporal variations in the number, area, morphology and location of MCBs in the TGR during different impoundment periods, using Landsat images. The results showed that the number of MCBs ranged between 89 and 150 with an average of 113, and the area varied greatly from 0.2 to 1134 (×10 −2 km 2 ) with an average of 31.69. The number and area of MCBs changed dramatically with the water-level changes induced by the impoundments and operation of the TGR. The total area of MCBs decreased progressively from stage 1 to stage 5, with the most significant changes occurring between stage 2 and stage 4. Although the number showed a decreasing trend, the minimum number appeared at stage 3, which was dominated by the change in the number of SMBs. The number and area variations of MCBs differed greatly among MCBs with different sizes, with the most obvious changes appearing for SMBs and LMBs, respectively. The LWR of MCBs in the TGR ranged between 2.09 and 3.05 with an average of 2.56. It generally decreased as the water level rose, suggesting that the morphology of MCBs tended to change from a narrow-long shape to a short-round shape. The LWR variation of MCBs differed greatly among different sizes, with the most obvious changes occurring in SMBs, suggesting that the effect of impoundment on the morphology of SMBs was more pronounced than the effect on MMBs, LMMBs and LMBs. The MCBs were unevenly distributed spatially. They were mostly distributed in the upper section of the TGR at stage 1, under a natural hydrological regime. The water impoundments of the TGR led to the migration of the dominant area from the upper to the middle section of the TGR, resulting in a more even distribution of MCBs in the TGR and the migration of the gravity center of MCBs from the upper to the middle section of the TGR. This study showed the enormous impacts of the operation of the TGD on the morphological dynamics of MCBs. While the mechanisms of the development of MCBs in the TGR are complex, it will be necessary to investigate these changes further in the future. Data Availability Statement: Landsat TM, Landsat OLI and the water level, sediment and siltation data for the TGR used in this paper can be downloaded from the Chinese Geospatial Data Cloud, Yangtze River Sediment Bulletin and the Yangtze River Three Gorges Group website.
added: 2021-12-12T17:49:43.404Z
created: 2021-12-03T00:00:00.000
metadata: { "year": 2021, "sha1": "8ba512509dfe691a331440941e61c715bfe42538", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4441/13/23/3427/pdf?version=1638533017", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7aa583bc21b3e1401b6377686dd3701a17a6aebe", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }

id: 235784434
source: pes2o/s2orc
version: v3-fos-license
Monocular Visual Field Defect on Humphrey 24-2 SITA-Fast Testing Later Identified as a Highly Incongruous Homonymous Defect on Humphrey 30-2 SITA-Fast Testing Monocular visual field defects generally localize at or anterior to the optic chiasm, while homonymous hemianopias localize to the retrochiasmal visual pathway. Highly incongruous visual field defects may be difficult to identify on 24-2 Humphrey visual field testing, and this case demonstrates the value of optical coherence tomography (OCT) ganglion cell-inner plexiform layer (GCIPL) in rapidly localizing the lesion. A 54-year-old woman was found on routine examination to have an isolated superonasal quadrant visual field defect respecting the vertical meridian in the left eye only on Humphrey 24-2 SITA-Fast testing. She had a remote history of significant head trauma. Visual acuity, anterior segment, and fundus examination were normal. OCT revealed a bow-tie atrophy of the retinal nerve fiber layer in the right eye (OD), and binocular homonymous hemi-macular atrophy of OCT GCIPL, confirming the localization was the left retrochiasmal visual pathway. A repeat Humphrey 30-2 SITA-Fast visual field demonstrated that the visual field defect was also present in the OD in a highly incongruous manner. Magnetic resonance imaging of the brain with contrast showed mild atrophy of the left optic tract. This case demonstrates that highly incongruous visual field defects may be difficult to identify on Humphrey 24-2 SITA-Fast visual fields, and OCT GCIPL serves as a rapid way to localize the lesion. More detailed visual field testing including 30-2 programs should be considered in these cases. Introduction Homonymous hemianopias are caused by lesions in the contralateral retrochiasmal visual pathways, which most commonly involve the occipital lobe, optic radiations, or optic tract [1]. The most common etiologies are stroke, trauma, and brain tumors [1]. They are often associated with homonymous hemi-macular atrophy of the ganglion cell-inner plexiform layer (GCIPL) on optical coherence tomography (OCT). A homonymous visual field defect is considered congruous when it appears similar in both eyes and incongruous when it differs between eyes. The "rule of congruity" states that the more posterior a lesion is within the retrochiasmal visual pathways, the more congruous the defect [2]. We report a case of binocular homonymous hemi-macular atrophy of the GCIPL with only a repeatable monocular visual field defect on Humphrey 24-2 SITA-Fast but a highly incongruous visual field defect on Humphrey 30-2 SITA-Fast visual fields. Case Presentation A 54-year-old asymptomatic woman was noted to have a visual field defect respecting the vertical meridian in the left eye (OS) on a routine optometry examination. She had no known medical conditions but reported a history of a motor vehicle accident resulting in significant head trauma requiring hospitalization at age 5. There was an associated right zygomatic bone fracture that was conservatively managed. No further details regarding the admission were available. Neuro-ophthalmic examination revealed a visual acuity of 20/20 in both eyes, no relative afferent pupillary defect, normal color vision, and normal-appearing optic nerves without obvious pallor (online suppl. Figure 1; for all online suppl. material, see www.karger.com/doi/10.1159/000516663). Confrontation visual fields revealed a superonasal visual field defect in the OS and full visual fields in all other quadrants in both eyes. 
Humphrey 24-2 SITA-Fast visual fields showed a visual field defect only in the superonasal quadrant of the OS, similar to what was seen on the optometry exam (Fig. 1a). OCT retinal nerve fiber layer (RNFL) showed bow-tie atrophy in the right eye (OD) and diffuse thinning in the OS with an average RNFL thickness of 67 and 64 µm, respectively (Fig. 2a). OCT GCIPL showed inferior homonymous hemi-macular atrophy of the nasal retina in the OD and the temporal retina in the OS, with an average thickness of 71 and 67 µm, respectively (Fig. 2b). Magnetic resonance imaging (MRI) of the brain and orbits with contrast showed subtle atrophy of the left optic tract (Fig. 3). Follow-up examination including a repeat visual field (24-2 SITA-Fast) at 6 months was unchanged. However, an additional Humphrey 30-2 SITA-Fast visual field was performed twice at the follow-up visit and revealed a highly incongruous visual field defect as there was small superotemporal defect in the OD (Fig. 1b). Discussion Monocular visual field defects localize to the pre-chiasmatic optic nerve or globe, and investigations are guided by the clinical history and examination. In this case, a monocular defect seen on Humphrey 24-2 visual field testing in the right nasal hemifield of the OS was the result of a previous insult to the left retrochiasmal visual pathways. OCT RNFL and GCIPL were instrumental in localizing the lesion and prompted more detailed visual field testing with the Humphrey 30-2 program. On OCT, there was right bow-tie atrophy of the RNFL and homonymous hemi-macular atrophy of the GCIPL in both eyes. There was a functionalanatomical correlation as the left optic tract appeared atrophied on MRI. There were no other radiological findings in the left retrogeniculate visual pathways. The incongruous nature of the visual field and OCT findings suggests this was a result of primary involvement of the left optic tract, but it may also have been a result of involvement of the left retrogeniculate pathways and secondary retrograde transsynaptic degeneration [3]. Although rare, monocular visual field defects have been reported in the setting of retrochiasmal pathology. A 65-year-old woman was found to have a right temporal visual field defect and corresponding nasal hemi-macular OCT GCIPL atrophy due to a left carotid aneurysm compressing the left optic tract [4]. This case utilized Humphrey 24-2 testing, although the unilateral nature of OCT GCIPL defect argues that this may have been truly unilateral. However, the timing of the visual field and OCT in relation to the insult were not reported. With longer term follow-up, it may be possible that the OS manifests OCT GCIPL or 30-2 visual field changes. Lee et al. [5] reported 3 individuals who suffered cerebral stroke but had detectable visual field defects by automated perimetry in only one eye. No OCT data was available in these cases. Only 24-2 programs were utilized in these cases, and it is possible that a more detailed visual field program may show highly incongruous defects. Our case adds to the literature on monocular defects detected on 24-2 visual field programs related to retrochiasmal pathology; uniquely, the defect was later revealed to be highly incongruous on Humphrey 30-2 testing. OCT GCIPL was instrumental in this case as it was able to rapidly demonstrate that a monocular defect was the result of retrochiasmal pathology. 
There was excellent anatomical-functional correlation between the visual fields and the OCT GCIPL as the inferior quadrants of the macula were most affected and this correlated to the superior visual field, which was affected. The OS also had more significant atrophy which correlated with its more dense visual field defect. The differences between 24-2 and 30-2 Humphrey visual fields have been studied previously for neuro-ophthalmology patients. Khoury et al. [6] analyzed 187 Humphrey visual fields from neuro-ophthalmology patients with nonglaucomatous optic neuropathies and 206 Humphrey visual fields from patients with glaucoma. An occluder device was designed to cover the additional outer 22 points tested in the 30-2 strategy. Ninetyfive percent of the fields in neuro-ophthalmology patients were read similarly with the 24-2 and 30-2 strategies. In the few cases where there was discrepancy, appropriate clinical management would not have been compromised by using the 24-2 strategy. The authors concluded that 24-2 testing strategy provides information comparable to that provided by the 30-2 strategy. The 24-2 strategy has the advantage of reduced testing time. In our patient, the duration of the Humphrey 24-2 test was 5 min and 34 s, whereas the duration for the 30-2 test was 7 min and 44 s. This has implications for busy clinics where visual field testing can be in short supply. The additional use of OCT, which captures an image in seconds, is an important adjunct as retrochiasmal pathology often manifests as homonymous hemi-macular atrophy. The 30-2 testing program can be reserved for cases such as ours where a homonymous defect is suspected, but only manifests in 1 eye. The importance of OCT in localizing retrochiasmal visual field defects has been previously reported as the 24-2 visual field may be normal in both eyes. Lukewich et al. [7] described 7 patients with homonymous OCT GCIPL hemi-macular atrophy related to demyelination or traumatic brain injury. Other pathologies may also produce such a change. Momen et al. [8] reported a patient with an asymptomatic optic tract glioma from neurofibromatosis type 1 who was found to have homonymous hemi-macular thinning on OCT GCIPL. More recently, Zaslavsky et al. [9] reported a case of congenital porencephalic cyst resulting in an OCT GCIPL homonymous quadrantic defect without detectable visual field change. Traumatic brain injury has been reported to result in homonymous visual field defects, and it may be seen on OCT in the absence of any significant visual field defect [6]. Decramer et al. [10] described the use of diffuse tensor imaging to provide a morphometric analysis of the primary visual cortex and emphasized the importance of early imaging with this modality as the process of Wallerian degeneration could affect the results. Our patient did have subtle thinning of the left optic tract noted, and it was possible that there was primary traumatic damage to this area or trauma to the retrogeniculate pathway with secondary transsynaptic degeneration. Bow-tie atrophy on OCT RNFL is classically seen in contralateral optic tract lesions but can also be the result of contralateral retrogeniculate lesions after retrograde transsynaptic degeneration [11]. The highly incongruous nature of the visual field defect was likely a result of involvement of the left optic tract from the patient's remote traumatic brain injury. 
This is because fibers just posterior to the chiasm (crossed and uncrossed fibers) remain spatially distant, which is the basis for the "rule of congruity" [2]. These fibers then run in closer proximity in the occipital lobe, which is why lesions in this area are highly congruent. The incongruency of the OCT GCIPL lesion also supports this notion. It is also possible that this patient had a more congruent defect with asymmetric recovery. The OCT findings argue against 2 distinct lesions in each eye, and the OCT findings argue against individual optic neuropathies. In summary, this is a rare case of a patient who had a monocular visual field defect on 24-2 Humphrey SITA-Fast visual fields, right bow-tie atrophy on OCT RNFL, and right homonymous hemi-macular atrophy on GCIPL with a normal MRI brain due to presumed remote traumatic brain injury. This case highlights that OCT is extremely helpful in localizing the lesion to the retrochiasmal visual pathways and that monocular defects on 24-2 SITA-FAST visual fields may be highly incongruous defects on 30-2 programs. Statement of Ethics This study was carried out in accordance with the World Medical Association Declaration of Helsinki, and the need for approval was waived as per University of Toronto Research Ethics Board guidelines. Written informed consent was obtained from the patient for publication of this case report and any accompanying images.
added: 2021-07-11T05:28:07.821Z
created: 2021-06-11T00:00:00.000
metadata: { "year": 2021, "sha1": "6ba67051db5dffde71e9f66a519a5e6fa2230ae3", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/516663", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a97f5f90ac295fa8a7e45d8d48a428c0ff96d98a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }

id: 120745704
source: pes2o/s2orc
version: v3-fos-license
Study of storm weather situations in observation and ECHAM3/T42 model simulation

In this paper, we provide an estimation of the seasonal distribution of strong wind and storm weather situations in an ECHAM3/T42 climate model simulation in relation to observed conditions. Observational data of the German Bight and the southern Baltic Sea are taken to compare observations and climate model simulation. The results of the study show significant differences in the seasonal frequency of occurrence for strong wind and storm weather situations between simulation and observations. A new objective classification routine for detecting single strong wind and storm weather situations (Bft 7 and more) in coarse resolution models is used to validate the large-scale parameters of those events in the climate simulation. The objective classification routine is able to detect strong wind and storm weather situations of two flow regimes in the German Bight in winter. The routine is applied to the ECMWF Re-Analysis (T42 resolution) and to a climate simulation of the northern Hemisphere, which was performed with the ECHAM3/T42. It is shown that the large-scale parameters of single strong wind and storm weather situations are simulated quite realistically in the ECHAM3/T42.

1. Introduction

In the current climate discussion, the frequency and intensity of strong wind and storm weather situations are becoming more and more relevant (Bengtsson et al., 1996; König et al., 1993). Regarding investigations in the field of coastal morphodynamics and coast protection (Von Storch and Reichardt, 1997), there is considerable interest as to whether the seasonal distribution of strong wind and storm weather situations changes in an altered climate or not. In order to examine this, it is convenient to use global climate models like the Hamburg ECHAM3/T42 (DKRZ, 1994). First of all, we need to discuss the issue of whether a general circulation model (GCM) like the ECHAM3/T42 is able to reproduce the large-scale parameters which are necessary for strong wind or storm weather situations in the selected area. The large-scale parameters of single strong wind or storm weather situations in the climate model simulation are compared with observations in this study. The aim of the paper is to examine whether the seasonal distribution of strong wind and storm weather situations is the same as in observations for the region of the German Bight and the southern Baltic Sea. In order to achieve this, the ECHAM3/T42 control run (CTRL), the ECMWF Re-Analysis, surface and upper level weather maps of the area of the North Atlantic and Europe as well as observations in the German Bight and the southern Baltic Sea are considered. If the seasonal distribution of strong wind and storm weather situations in the CTRL is not the same as in observations, this has to be taken into consideration in regional climate studies of this problem. A possible change in the seasonal distribution of strong wind and storm weather situations in a double or threefold CO2 scenario in relation to the CTRL has to be discussed additionally, in view of the knowledge that there are differences between observations and climate model simulation.

The present paper contains two methods of comparing model output and observations. First, the seasonal distribution of strong wind and storm weather situations in simulation and observations is compared. To test the seasonal distribution of these events, cases with wind speeds between 10.5 m s−1 and 14 m s−1 (Bft 6) and with more than 14 m s−1 (Bft 7 and more) at a height of 10 m are selected. For the calculation of the confidence level of the seasonal distribution, a method devised by Dukes and Palutikof (1995) is used. Second, a new objective classification routine for detecting single strong wind and storm weather situations (Bft 7 and more) in coarse resolution models is used (Zielke et al., 1997). The routine is able to detect strong wind and storm weather situations of two flow regimes. The criteria of this classification routine are laid down using the evaluation of storm weather situations in observations of the period 1949 to 1996. In order to achieve this, surface and upper level weather maps of the area of the North Atlantic and Europe as well as observations and measurements of several weather stations are considered.
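The event-selection criterion described above can be sketched in a few lines of Python (Bft 6 corresponds to 10.5–14 m s−1 and "Bft 7 and more" to wind speeds above 14 m s−1 at 10 m height); the function name and the example series are illustrative, not part of the paper.

```python
def classify_event(wind_speed_10m):
    """Return the event class used in the study for a 10 m wind speed in m/s."""
    if wind_speed_10m > 14.0:
        return "Bft 7 and more"    # storm weather situation
    if wind_speed_10m >= 10.5:
        return "Bft 6"             # strong wind situation
    return None                    # below the strong-wind threshold, not counted

# Example: count events per class in a (hypothetical) series of 12 UTC wind speeds.
series = [8.2, 11.7, 15.3, 13.9, 16.8]
counts = {}
for v in series:
    c = classify_event(v)
    if c:
        counts[c] = counts.get(c, 0) + 1
```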
resolution (Gaussian grid with 2.8°), even though The present paper contains two methods of the model uses resolutions in the range of T21 comparing model output and observations. First, (Gaussian grid with 5.6°) to T106 (Gaussian grid the seasonal distribution of strong wind and storm with 1.1°) (DKRZ, 1994). Prognostic variables are weather situations in simulation and observations vorticity, divergence, temperature, surface presis compared. To test the seasonal distribution of sure, water vapour and cloud water content. The these events, cases with wind speeds between atmosphere is vertically represented by a 19-layer 10.5 m s−1 and 14 m s−1 (Bft 6) and with more hybrid coordinate system with second-order finite than 14 m s−1 (Bft 7 and more) at a height of differences. The roughness length over sea is calcu-10 m are selected. For the calculation of the conlated by the Charnock formula in accordance with fidence level of the seasonal distribution a method Miller (1992). More detailed information in pardevised by Dukes and Palutikof (1995) is used. ticular with regard to the physical parameteriz-Second, a new objective classification routine ation of the radiation, clouds and precipitation, for detecting single strong wind and storm weather convection and vertical and horizontal diffusion situations (Bft 7 and more) in coarse resolution is given by Roeckner et al. (1992). models is used (Zielke et al., 1997). The routine is The model run which was selected for the able to detect strong wind and storm weather present investigation is the so-called control run situations of two flow regimes. The criteria of this (CTRL) in T42 resolution. At present, this model classification routine are laid down using the run has provided the highest spatial resolution evaluation of storm weather situations in observathere is for a permanent 30-year time series. The tions of the period 1949 to 1996. In order to CTRL used a climatological sea surface temperachieve this, surface and upper level weather maps ature (SST). Therefore each model year uses the of the area of the North Atlantic and Europe as identical annual cycle of the mean climatological well as observations and measurements of several SST (Sausen et al., 1995 Storch, 1995). The available data for the study are the calculated values of the wind The model data were generated by the Hamburg spectral atmospheric general circulation model speed at 00 and 12 UTC, which were stored by Tellus 50A (1998), 4 the MPI-Hamburg during the simulation of the reference height of 10 m. The average error for the transformed wind speed is ±5%. ECHAM3/T42 CTRL for every model year from 11 to 40. However, uninterrupted long-term-series Fig. 1 shows the raw surface wind speed and the homogenized 10 m wind speed at Norderney of measurements in the area of the German Bight were only available at 12 UTC. That is why only and Boltenhagen. The raw surface wind speed of Norderney shows the change in the measurement this time was taken into account for the investigations in this study. height in 1978 and 1981. In 1978 the measurement height changed from 32 m to 21 m agl and in 1981 In contrast to observations, time series of climate model simulations last 30 days each month, the change was from 21 m to 12 m agl. The German Weather Service (DWD) carried out a i.e., every model year runs 360 days. 
Therefore, a small difference in the period between observa-field experiment with a period of 4 years with the additional aid of numerical model simulations to tional and modelled data is obtained. The smallest difference is obtained in winter (6-7 days for calculate transfer coefficients of the wind speed as a function of the wind direction and the measure-30 years) and the largest one for spring and summer (60 days for 30 years). Considering the ment heights of Norderney station (Schmidt and Pätsch, 1992). The results are transfer coefficients fact that the reference level for observational measured wind speed is 10 m the same height is chosen which give the 10 m wind speed over sea. The for the climate model. The 10 m wind speed in the climate model simulation is calculated using the logarithmic wind profile. The shearing stress speed u * is calculated with the lowest model level wind speed (#40 m) multiplied by a drag coefficient C D . This coefficient depends on the criteria of stability and roughness length (DKRZ, 1994). Observational data Observational data were taken from the weather stations Norderney and Boltenhagen (German Weather Service, DWD) and from the ECMWF Re-Analysis data of the period 1979 to 1993. Tenminute-average wind speeds of the period 1949 to 1996 are available for the Norderney station and of the period 1973 to 1993 for the Boltenhagen station. At Norderney there are continous measurements only at 06, 09, 12 and 15 UTC. The time periods chosen for this study relate to Norderney from 1966 to 1995 and in the case of Boltenhagen from 1973 to 1993. The Norderney and Boltenhagen observational data show inhomogeneities caused by different measurement heights and positions. Another important aspect is the surface roughness in the area adjacent to the wind measuring tower. This has to be considered if observational data are compared with results from model simulations. In this work results of a study from the DWD Fig. 1. The yearly average raw surface wind speed (grey (Schmidt and Pätsch, 1992) were used to homolines) and the homogenized 10 m wind speed (black genize the measured wind speeds. With this lines), 12 UTC. The solid lines are the 50% percentiles, method it is possible to transform the wind speed the dashed lines the 10% and the dotted lines the 1% The investigation into strong wind and storm weather situations is related to weather situations with a wind force of Bft 6 and those with Bft 7 and more. Table 1 shows the number of events for Table 3. One-step transition probability matrix Norderney, Boltenhagen and the corresponding (T PM) model grid points at 12 UTC in the periods considered. It can be seen that there are only noticable differences between observations and the climate model simulation for Boltenhagen and the corresponding model grid point for Bft 7 and more. To calculate the statistical significance of the seasonal distribution for these events, a technique developed by Kirchhoff and Kaminsky (Kirchhoff et al., 1989;Kaminsky et al., 1991) was used. This technique generates time series using in state i at date n changing to any state j at date the one-step Markov chain model. n+1, given k wind speed intervals. An example The procedure for generating wind speeds with of a TPM is shown in Table 4. A major advantage this model is based on a transitional probability of the procedure is its simple application to the matrix (TPM) for wind speed states. Initially, the generation of wind speed time series. 
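The TPM-based generation of wind speed time series described here lends itself to a compact sketch. The snippet below (Python, using numpy) is only an illustration of the procedure: the state boundaries, the starting state and the seed handling are placeholder choices of ours, not the wind speed states of Table 2 or the transition probabilities of Tables 3 and 4.

```python
import numpy as np

# Placeholder upper bounds (m/s) of the wind speed states; the states actually
# used in the study are defined in its Table 2 and are not reproduced here.
STATE_UPPER_BOUNDS = np.array([2.0, 5.0, 10.5, 14.0, np.inf])

def to_states(wind_speed):
    """Convert a wind speed time series into a series of state indices."""
    return np.searchsorted(STATE_UPPER_BOUNDS, wind_speed, side="left")

def transition_matrix(states, n_states):
    """One-step transition probability matrix: p_ij is the probability of moving
    from state i at date n to state j at date n+1, estimated by counting."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(states[:-1], states[1:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

def generate_states(tpm, n_steps, start_state=0, seed=None):
    """Generate a synthetic state series with the one-step Markov chain:
    accumulate the probabilities along the row of the current state and pick
    the first state whose cumulative probability exceeds a uniform random number."""
    rng = np.random.default_rng(seed)
    series = [start_state]
    for _ in range(n_steps - 1):
        cumulative = np.cumsum(tpm[series[-1]])
        next_state = int(np.searchsorted(cumulative, rng.uniform()))
        series.append(min(next_state, tpm.shape[1] - 1))
    return np.array(series)
```

A state series generated this way can then be tallied per month to compare the seasonal frequency of Bft 6 and Bft 7-and-more cases between simulation and observations.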
The procedwind speed time series is converted to a time series ure is summarized as follows. A starting wind of wind speed states. The wind speed states which speed state is selected, for example state 1. Using were used in this study are presented in Table 2. the first row of the appropriate TPM, the next Once a wind speed state time series is produced, wind speed state is generated in a random manner a TPM can be calculated. The TPM matrix according to the probabilities of that row. This (Table 3) shows the probability p ij of a wind speed can be most easily achieved by accumulating the probabilities along each row of the TPM. Using a uniform random number generator that pro- Table 1. Number of events with a wind force of Bft 6 duces a value between 0 and 1, the next wind and Bft 7 and more for Norderney, Boltenhagen and speed state can be generated. The generated state the corresponding model grid points at 12 UT C; the is the one where the random number is greater period is 30 years for the simulation and Norderney than the cumulative probability of the previous and 21 years for Boltenhagen state but less than or equal to the cumulative 3. 2. An objective classification routine for detecting flow regime of the westerly type shows a high pressure area over the the central East Atlantic strong wind and storm weather situations in coarse resolution models and the location of the steering centres are over Scandinavia, the Norwegian or the North Sea. The objective classification routine (Zielke et al., The flow regime of the northwesterly type shows 1997) is able to detect single strong wind or storm an Azores High northeastward shifted up to the weather situations (Bft 7 and more) of two flow westerly Biscay and the location of the steering regimes for the area of the German Bight in centres are over Fennoscandinavia, the Norwegian winter. The criteria of this objective classification or the Baltic Sea. routine were laid down using the evaluation of To make a decision as to whether a weather storm weather situations in observations of the situation is one which causes a strong wind or period 1949 to 1996. In order to achieve this, storm weather situation in the German Bight, surface and upper level weather maps of the area several meteorological large-scale parameters are of the North Atlantic and Europe as well as considered taking into account the temporary observations and measurements of several weather shifting between the single criteria. This study has stations in the German Bight were considered. only been done for the German Bight, but the The routine is applied to the ECMWF Re-Analysis correlation for strong wind and storm weather (T42 resolution) and to a climate simulation of situations between the German Bight and the the northern Hemisphere, which was performed southern Baltic Sea is 93% for the westerly and with the atmosphere model ECHAM3/T42. 88% for the northwesterly type. That means for We classify the flow regimes of the strong wind example, that if there is a strong wind or storm and storm weather situations as a function of the weather situation of the westerly type in the accompanying European large-scale weather patsouthern Baltic Sea, there is one in the German terns (''Die Großwetterlagen Europas'') by Hess Bight with 93% probability. and Brezowsky (Gerstengarbe et al., 1993) into two groups, the westerly and the northwesterly type. These groups contains the following 3.2.1. 
T he criteria of the classification routine Flow regime westerly type: Storm weather situ-European large-scale weather patterns: (a) westerly type: WZ, WA, WS, WW, SWZ and SWA; ations within a circulation pattern of the westerly type shows a high pressure area over the central (b) northwesterly type: NZ, NA, NWZ, NWA and TRM. East Atlantic. The location of the steering centre is over Scandinavia, the Norwegian or the North In Section7, a description of the European largescale weather patterns used is given. Sea. In our classification there has to be a difference of 45 gpdam between the low and the high In Fig. 2b, d, the average geopotential heights of all the storm weather situations of the westerly pressure area. The way to detect the difference is to take 4 connected grid points with the lowest and northwesterly types for the period 1979 to 1993 are shown. This is the available period for geopotential height of the area of the steering centre and 12 connected grid points of the high the ECMWF Re-Analysis data at present. The Tellus 50A (1998), 4 .   . pressure area in 700 hPa. Then the average of the the dotted area in order to take the second positive decision on this criterion. 4 and the 12 grid points is calculated. If the difference is larger than 45 gpdam the first condition of this criterion is fulfilled. Fig. 2a shows the Flow regime northwesterly type: Storm weather situations within a circulation pattern of the north-areas of our mask where we look for the steering centre and the accompanying high pressure area. westerly type show an Azores High northeastward shifted up to the westerly Biscay. The steering Within the grey or the dash-shaded area, we look for the 4 connected grid points with the lowest centre is over Fennoscandinavia, the Norwegian or the Baltic Sea. The way to detect the differences geopotential heights and within the dotted area we look for the 12 connected grid points with the in the geopotential heights between the steering centre and the high pressure area is the same as highest geopotential heights. The average geopotential height of the shaded area has to be 30 for the westerly type. Only the value of the difference is 55 gpdam. Within the dash-shaded or gpdam below the average geopotential height of Tellus 50A (1998), 4 shaded areas (Fig. 2c), we detect the 4 connected type and −1.2 g kg−1 for the northwesterly type in 12-24 hours. grid points with the lowest geopotential heights, and within the dash-dot-shaded area, we look for the 12 connected grid points with the highest 3.2.2. T he choice and the verification of the classification routine. The above-mentioned criteria in geopotential heights. The difference between the average geopotential height of the grey area and themselves yield a variety of positive decisions. In our classification routine we need the positive the average geopotential height of the dotted area has to be below 15 gpdam in order to make a decision of all the single criteria. We start our coupling with the condition flow regime, than we positive decision. look for the criteria baroclinicity, relative topography and mixing ratio taking into account the Baroclinicity: For both groups of storm weather temporary shifting between the single criteria. situations the threshold value for the baroclinicity From these three intersections the overall intersecis 0.018 K km−1 in a meridional direction. The tion is calculated (Fig. 3). 
area where we look for this threshold value is the The verification of this classification routine North Atlantic at a height of 700 hPa. For the with the ECMWF Re-Analysis and the homogenwesterly type the area of interest is between ized time series of the wind speed of Norderney 45°N-60°N and for the northwesterly type shows that 85% of all observed storm weather between 55°N-70°N within 40°W-0°. situations of the westerly and the northwesterly type are detected. Strong wind situations are also Relative topography 500/1000 hPa: What is typical detected with a wind speed of approximate of storm weather situations is a high relative 16 m s−1 (Bft 7). The reason for is that it is topography of the warm sector. One way of checkimpossible to give such sharp definitions of the ing this is the relative topography 500/1000 hPa. large-scale parameters and the result is that not The threshold value for the westerly type is only weather situations with a wind speed of more 534 gpdam and for the northwesterly type than 17.2 m s−1 (storm weather situations, Bft 8) 528 gpdam in order to make a positive decision. are detected. The average 10 m wind speed of all For the westerly type the area of interest is between the weather situations which are detected with our 52°N-62°N within 5°W-10°E. and for the northclassification routine is 17.8 m s−1 for the westerly westerly type between 55°N-65°N within and 17.5 m s−1 for the northwesterly type. The 10°W-20°E. Whenever the average relative topo-15% of the observed storm weather situations graphy of 6 connected grid points within this area which are not detected are those without a distinct is larger than the threshold value, the first condihigh pressure area. In view of the separation of tion is fulfilled. Second, the longitudinal difference the westerly and the northwesterly types those of the relative topography from boxes (3×2 grid events are not chosen. The application of the points) within the defined areas to boxes with a routine also shows that all the detected situations distance of 8.4°, were calculated. The threshhold in the ECMWF Re-Analysis are strong wind or value is 6 gpdam in order to take the second storm weather situations. positive decision of this criterion. Mixing ratio: What is also typical of storm weather 4. Results situations is heavy precipitation; a way of checking this is the temporary change in the mixing ratio The results of the seasonal distribution of strong wind and storm weather situations are shown in in a defined area. The area where we look for the temporary change in the mixing ratio is the same Fig. 4. For the German Bight and the southern Baltic, one model grid point is selected for this as for the relative topography. The height is the 700 hPa level. If the average temporary change in presentation, because the seasonal distribution of strong wind and storm weather situations of the the mixing ratio of 6 connected grid points within this area is larger than the threshold value, this other model grid points shows the same results. It can be seen that in the model simulation the criterion of the classification is fulfilled. The threshold values are −1.8 g kg−1 for the westerly winter season is the most favoured for these Tellus 50A (1998), 4 .   . weather situations. Events with a wind force of can be seen that there are only noticable differences between observations and climate model Bft 6 have a frequency of occurrence of more than 40% (Fig. 
4a, c) and events of Bft 7 and more simulation for Boltenhagen and the corresponding model grid point for Bft 7 and more. To compare have a frequency of occurrence of more than 50% (Fig. 4b, d) in the simulation in winter. In observa-the quantitative distribution of events with a wind force of Bft 6 and more in simulation and observa-tions this season is favoured as well, but with a lower percentage. The seasonal distribution of tions we chose Norderney station and the corresponding model grid point. Table 5 shows the strong wind and storm weather situations in winter and autumn is more similar in observations number of events with a wind force of Bft 6 and more. The climate model yields approximately than in simulation. In summer, strong wind and storm weather situations in the German Bight 35% more events of Bft 6 and more than observed in DJF. It is to consider that a GCM with a often result from developments on the regional scale, whereas climate models like the Gaussian grid of 2.8°is not able to reproduce the exact wind climate of observation. But if there is ECHAM3/T42 are not able to resolve them. Storm weather situations like those which occurred in an overestimation of events with a wind force of Bft 6 and more in DJF (30 year period) of 35% August 1990 are rare but they are not negligible. On the 20/21 August 1990 a storm cyclone resulted the reason could not be explain by the horizontal resolution of the climate model. in westerly winds with a wind force of Bft 9 to 10 in the German Bight. The 30-year monthly average wind speed for the homogenized time series of Norderney and Table 1 shows the number of strong wind and storm weather situations for Norderney, the selected grid points are presented in Fig. 5. It can be seen that the wind speed in summer is Boltenhagen and the corresponding model grid points at 12 UTC in the periods considered. It under-and in winter is overestimated in the Tellus 50A (1998), 4    simulation. The underestimation of the 10 m wind speed in summer could be explained by regional scale effects which could not be resolved by the Table 5. Number of events with a wind force of Bft 6 climate model. These are for example land-and and more for Norderney and the corresponding sea-breeze effects, developments of cyclones on a model grid point at 12 UT C; the period is 30 years regional scale, especially in coastal regions, and effects which are caused by thermals and turbu-Norderney GP II lence in the boundary layer (Hasse, 1974;Luthardt and Hasse, 1981). In winter the ECHAM3 shows continent (Kaurola, 1997). This results in an Tellus 50A (1998), 4 Downloaded by [Technische Informationsbibliothek (TIB)] at 05:46 09 October 2017 20% for the North Sea and Scandinavia. In winter the temperature difference is in the range of 1.5 K overestimated warm air advection and windspeed (Wild et al., 1996;Marinucci et al., 1995). and the mixing ratio difference is approximately 35%. The objective classification routine for detecting strong wind and storm weather situations was Strong wind and storm weather situations of the westerly type strongly depend on an above-used to validate the large-scale parameters of single strong wind and storm weather situations average warm warm-sector with high humidity. The climate model simulation yields an overes-in the climate model simulation. Only if all the criteria laid down in observations (see Section 3) timation of the humidity and the temperature in winter. 
This could be a possible reason for the are fulfilled is the decision positive. The strong wind and storm weather situations in the CTRL overestimation of those events. Furthermore, European large-scale weather patterns of the west-have the same flow regime as in observations. All the further criteria considered (relative topo-erly type are generally overestimated in the climate model simulation (Enke and Spekat, 1997). As a graphy, mixing ratio and baroclinicity) which are typical of strong wind and storm weather situ-consequence of this there are more cyclones of the westerly type which are able to become a storm ations in observations are simulated quite realistically by the climate model. cyclone than of other European large-scale weather patterns. The application of the objective classification routine to the ECMWF Re-Analysis and the ECHAM3/T42 CTRL shows that the frequency of strong wind and storm weather situations of 5. Conclusions the westerly type is overestimated in the CTRL (Table 6) in winter. The frequency of the north- The study shows that the number of strong wind and storm weather situations is approxi-westerly type is approximately the same in observations and CTRL. Possible reasons for this are mately the same in climate model simulation and observations for the area of the German Bight differences in the mixing ratio and temperature between CTRL and observations. Fig. 6 shows the and the southern Baltic. Differences exist for the seasonal distribution of these events. In the differences in the average mixing ratio and the temperature in 700 hPa for autumn and winter. ECHAM3/T42 CTRL the winter is the most favoured season for strong wind and storm Autumn shows the best agreement of strong wind and storm weather situations in observations and weather situations in the German Bight and the southern Baltic. However occurrences of these simulation. In this season, the differences in the temperature between ECMWF Re-Analysis and weather situations in winter are significantly overestimated in the climate model simulation. In CTRL for Europe are approximately zero. The difference in the mixing ratio is approximately contrast to winter, autumn shows considerable Tellus 50A (1998), 4 Fig. 6. Differences in the temperature in K and the mixing ratio in % between ECHAM3/T42 CTRL and ECMWF Re-Analysis in 700 hPa. (a) DJF temperature, ( b) SON temperature, (c) DJF mixing ratio and (d) SON mixing ratio. agreement in this investigation between observa-observations are simulated quite realistically by the climate model. tion and simulation. A comparision between ECHAM3/T42 CTRL and ECMWF Re-Analysis with the objective classification routine presented shows an overestima-6. Acknowledgements tion of strong wind and storm weather situations in the simulation in winter as well. This overes- The results of the 30-year period of the ECHAM3/T42 CTRL were kindly provided by timation results from situations of the westerly type. The single strong wind and storm weather the Max-Planck-Institut fü r Meteorologie, Hamburg. In particular we would like to thank situations in the ECHAM3/T42 CTRL have the same flow regime as in observations. All the further Arno Hellbach, who gave us assistance in copying the data from the archives of the Deutsches criteria considered (relative topography, mixing ratio and baroclinicity) which are typical of single Klimarechenzentrum (DKRZ) in Hamburg. 
For data transfer we used a software tool developed strong wind and storm weather situations in Tellus 50A (1998) T he European large-scale weather patterns SWA: Southwest anticyclonic weather condition. Mainly anticyclonic conditions in central Europe. In this appendix, a description of the European The frontal zone is located between an extensive large-scale weather patterns used from the catahigh pressure area over southern Europe and the logue by Hess and Brezowsky (Gerstengarbe et al., Mediterranean Sea and an extensive low pressure 1993) is given. A large-scale weather pattern area over the middle North Atlantic. It is orient-(''Großwetterlage'') is defined as a tropospheric ated from the Irish to the Baltic Sea. pressure and/or current pattern, which is unchanged in its essential features for a minimum (b) Northwesterly type of 3 days, especially in the location of the steering NW Z: Cyclonic northwest weather condition. centres and frontal zones (Brezowsky et al., 1951). Mainly cyclonic conditions in central Europe. We classify the European large-scale weather pat-There is a strong frontal zone between a nonterns used as follows: blocking subtropical high pressure area, northeastward shifted up to the westerly Biscay, and an (a) Westerly type extensive low pressure area over Scotland, the W Z: Cyclonic west weather condition. Mainly cyc-Norwegian Sea and/or Scandinavia. lonic conditions in central Europe. The Azores NWA: Anticyclonic northwest weather condition. high is in a normal position with a possible Mainly anticyclonic conditions in central Europe. extension to south France. Over the North There is frontal zone with a weak anticyclonal Atlantic and the Norwegian Sea is a low pressure curvature between a non-blocking subtropical area. The frontal zone lies in its normal position high pressure area, northeastward shifted up to between 50°N and 60°N. west Europe, and a low pressure area over Fennoscandinavia and the Norwegian Sea. WA: Anticyclonic west weather condition. Mainly anticyclonic conditions in central Europe. The NZ: Cyclonic north weather condition. Mainly Azores high may extend to South Germany and cyclonic conditions in central Europe. There is a the centre of the low pressure area is most cases blocking high pressure area over the easterly north 65°N. The frontal zone is shifted northward North Atlantic or a high pressure bridge reaching to approximately 60°N. from the Iberian Peninsula to Polar areas. Over Fennoscandinavia is an extensive low pressure W S: Southerly west weather condition. The Azores area. The frontal zone is orientated northeastward high extends to North Africa and the centre of to Iceland. the low pressure area is south of 60°N. The frontal zone is orientated from the Irish Sea to east NA: Anticyclonic north weather condition. Mainly anticyclonic conditions in Central Europe. A high Europe. pressure area is located over the British Isles, the T RM: T rough central Europe. A trough over north and central Europe is located on the edge of a Norwegian and the North Sea. In some cases there is a high pressure bridge reaching from the high pressure area over the easterly North Atlantic. The frontal zone reaches from northwest Iberian Peninsula to Polar areas. An extensive low pressure area (a trough is also possible) is to south Europe and turns back to northeast Europe further on. located over east Europe.
2019-04-19T13:07:02.157Z
1998-08-01T00:00:00.000
{ "year": 1998, "sha1": "3a680cca5a8f20f7614ed041186c253fcb169da4", "oa_license": "CCBYNC", "oa_url": "https://www.repo.uni-hannover.de/bitstream/123456789/2046/1/Study%20of%20storm%20weather%20situations%20in%20observation%20and%20ECHAM3%20T42%20model%20simulation.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "ad9e908536754988c9713eb23c361691997d15f8", "s2fieldsofstudy": [ "Environmental Science", "Physics" ], "extfieldsofstudy": [ "Environmental Science" ] }
259288654
pes2o/s2orc
v3-fos-license
Creating realistic nerve agent victim profiles for computer simulation of medical CBRN disaster response In the last decades, Chemical, Biological, Radiological and Nuclear (CBRN) threats have become serious risks prompting countries to prioritize preparedness for such incidents. As CBRN scenarios are very difficult and expensive to recreate in real life, computer simulation is particularly suited for assessing the effectiveness of contingency plans and identifying areas of improvement. These computer simulation exercises require realistic and dynamic victim profiles, which are unavailable in a civilian context. In this paper we present a set of civilian nerve agent injury profiles consisting of clinical parameters and their evolution, as well as the methodology used to create them. These injury profiles are based on military injury profiles and adapted to the civilian population, using sarin for the purpose of illustration. They include commonly measured parameters in the prehospital setting. We demonstrate that information found in military sources can easily be adjusted for a civilian population using a few simple assumptions and validated methods. This methodology can easily be expanded to other chemical warfare agents as well as different ways of exposure. The resulting injury profiles are generic so they can also be used in tabletop and live simulation exercises. Modeling and simulation, if used correctly and in conjunction with empirical data gathered from lessons learned, can assist in providing the evidence practices for effective and efficient response decisions and interventions, considering the contextual factors of the affected area and the specific disaster scenario. studies to assess exposure to all hazards (1). Managing mass casualties in a disaster situation or humanitarian crisis can no longer rely on goodwill and good intentions. An evidence-based practice is needed to achieve the objectives of the health disaster management due to the immediate impact on the community and especially on the healthcare system, the number and variety of injured or ill victims, an initial phase of disorder, the temporary lack of resources and limited output of medical teams directly after the disaster, the necessity to operate in multidisciplinary and complementary teams, and the multiplicity of tasks (2). Just as evidence-based decision making has gained momentum in medical research, simulation has emerged over the last decades as a useful tool in the study of disaster preparedness. Traditional analytic methods cannot fully capture the flow of disaster victims through a complex health disaster response system (3). In contrast to discussion-based, tabletop or live exercises, computer modeling, and simulation can enhance disaster preparedness by studying all operational assumptions and testing contingency plans in a virtual but controlled experimental environment. Simulation allows the integration of stochastic and dynamic aspects inherent to the health disaster response and to study possible relationships among any or all variables included in the scenario (4,5). Computer-generated victims can be created with vital parameters that are realistic for their injuries and with a randomly allocated severity of injuries based on probabilities for the different triage classes (6,7). The health status of the victims can be adapted to time and treatment or lack of treatment (8). 
In the last decades, Chemical, Biological, Radiological and Nuclear (CBRN) threats have become serious risks prompting countries to prioritize preparedness for such incidents (9). Numerous studies have highlighted worldwide serious gaps in the preparedness of healthcare providers to cope with CBRN incidents (9-11). Managing CBRN emergencies is substantially different to routine interventions of healthcare workers. Moreover, performance can seriously be reduced by stress and urgency generated by CBRN incidents (7). The expertise of CBRN responders has therefore an important impact on the mortality and morbidity rates of victims (12). However, rapid interventions can reduce the harmful effects of CBRN emergencies, requiring a training in the health aspects of CBRN agents (9). As CBRN scenarios are very difficult and expensive to recreate in real life, computer simulation is particularly suited for assessing the effectiveness of contingency plans and identifying gaps and areas of improvement (5,13). Simulation techniques allow healthcare providers to investigate the impact of different options and thus provide evidence that decision makers need in order to establish robust response strategies in the CBRN area. Sensitivity analysis may determine the extent of resources to meet the needs for managing CBRN emergencies (8). Victim profiles We set out to create a set of victim profiles based on the nerve agent sarin (GB) for use in computer simulation. We specifically chose the nerve agent sarin due to historical significance as well as the wide body of available research in the literature (14)(15)(16)(17). A literature study failed to identify victim profiles suitable for use in a civilian population. However, we identified a set of similar victim profiles in the -now superseded -North Atlantic Treaty Organization (NATO) Standardization Agreement (STANAG) 2,553 covering the Allied Medical Publication 8(C), the NATO Planning Guide for the Estimation of CBRN Casualties (AMedP-8(C)). This AMedP-8(C) describes a set of 6 victim profiles of increasing severity in symptomatology, each linked to a specific inhalation exposure interval. These military injury profiles specifically exclude the effects of treatment or medical countermeasures such as pre-exposure prophylaxis. They were created by NATO Subject Matter Experts (SMEs) incorporating available evidence from human and animal exposures. While the profiles were originally with the intended purpose of casualty estimation, our goal is to develop victim profiles that can also be used to estimate victim severity distributions, resource requirements and to train healthcare workers. Every AMedP-8(C) victim profile consists of 6 separate symptom severity evolutions. Every category is assigned a numerical value representing severity of compromise, ranging from 0 representing no symptoms up to 4 representing severe (life-threatening) compromise. Every category has a detailed description assigned to each severity value used. The categories described in the AMedP-8(C) publication are ocular (0-3), upper-gastrointestinal (0-2), lower-gastrointestinal (0-3), respiratory (0-4), muscular (0-4) and neurological (0-4) symptoms. Clinical evolution over time is described by the timing of the improvement of these categories (18). Injury profile 1 (IP-1) represents a very mildly intoxicated victim, only showing ocular symptoms, and spontaneously recovering after 6 h (or more quickly with adequate treatment). 
Injury profile 2 (IP-2) also represents a mildly intoxicated victim, but one more severely affected than IP-1. The IP-2 victim exhibits respiratory symptoms (dyspnea and wheezing) for 2.5 h and spontaneously recovering ocular symptoms that persist for over a week. Injury profile 3 (IP-3) represents a moderately intoxicated victim with a combination of mild gastro-intestinal and mild respiratory symptoms, which improve progressively after 1 to 2 days. Injury profile 4 (IP-4) represents a moderately intoxicated victim with severe respiratory distress and bronchorrhea. Victims belonging to IP-4 are expected to manifest severe respiratory insufficiency within the first 5 min after exposure. This is due to an irregular respiratory rate in combination with severe muscular fatigue. Both improve progressively after 6 to 16 h. Injury profile 5 (IP-5) represents a severely intoxicated victim who sustains a very severe intoxication with brief self-limiting seizures (and secondary respiratory arrest). Their condition is expected to spontaneously improve within 10-15 min after exposure. The IP-5 clinical condition improves progressively over the next 6 to 16 h, taking well over a week to return to baseline. Injury profile 6 (IP-6) represents the most severely intoxicated victims. They quickly show the most severe symptoms because of respiratory insufficiency of combined neurological and muscular origin. In AMedP-7.5, victims conforming to this profile are assumed to perish after 15 min when left untreated. However, when receiving adequate antidote treatment and respiratory support, they are expected to survive. Table 1 contains a brief description of the injury profiles adapted from AMedP-8(C).
Assignment of clinical parameters
Clinical decision making, and therefore resource estimation, is based on the patient's general condition and physiological parameters. Concise data on human clinical parameters of nerve agent exposures are very rarely reported in the detail required. Usually, these parameters are reported in general terms without clarification, such as bradycardia, tachycardia, confusion and decreased consciousness. The clinical parameter assignment algorithm is based on subject matter experts and on victim profiles developed for use in military and civilian training exercises (19,20). We attempted to validate the model by linking the reported parameters to injury profiles and time of exposure, but it should be noted that these simulation victims are also mainly based on expert opinion, are inspired by organophosphate poisoning, and do not quite correspond to victim descriptions from the 1995 Tokyo subway attack. For instance, bradycardia and hypotension are only described in a preterminal phase, and respiratory hypersecretions are less common (21)(22)(23). Using this information, a set of conversion rules is devised to convert the severity categories to clinical parameters. To create clinically relevant profiles, our choice of clinical parameters is limited to those measurable and treatable in a prehospital setting. The selected parameters are airway status, respiratory rate, respiration depth, oxygen saturation, heart rate, blood pressure, Glasgow Coma Scale, and pupillary status. Unfortunately, these parameters are at most slightly hinted at in the description of the Injury Profiles in AMedP-8(C). We therefore created a set of simple algorithms to uniformly convert the severity categories and their interactions to clinical parameter ranges.
The specific algorithms used are described in appendix 1. Airway status is based on the presence of severe respiratory and motor incapacitation, and ranges from normal, through snoring, to obstruction. Obstruction of the airway is assumed to be present when either the neurological or the motor incapacitation reaches the worst severity. Mild respiratory involvement is described as having only mild shortness of breath, chest tightness, coughing and a runny nose. The patient becomes dyspneic when the respiratory incapacitation becomes moderate or severe, due to the increase in secretions. Very severe respiratory incapacitation leads to shallow respirations due to hypoxia secondary to secretions in combination with flaccid paralysis. We estimate that the respiratory rate will first rise to 20-30 respirations per minute when the mild respiratory impairment is modeled, increasing to 30+ with moderately affected respiration. Severe respiratory incapacitation will lead first to an increased respiratory rate and then to a decrease due to respiratory fatigue and intermittent apnea. Finally, the very severe respiratory impairment results in flaccid paralysis and secondary apnea. Oxygen saturation modeling is based on the categories defined by Raux et al.: no respiratory incapacitation corresponds to an oxygen saturation of 95%-100%. We assume a mild impairment results in an oxygen saturation of 90%-95%. Moderate impairment is assumed to result in a saturation of 85%-90%. Severe respiratory insufficiency corresponds to a saturation of 80%-85%, and very severe respiratory impairment corresponds to the tipping point where the patient can no longer sustain their own oxygenation, resulting in desaturation below 80% (24).
Injury profile / Description: IP-1 • Brief episode of ocular symptoms (pain and miosis) only. • Respiratory symptoms improve after 1.5 h, and ocular symptoms improve after ±16 h but linger for weeks. IP-3 • Moderate intoxication with mild GI and respiratory symptoms. • These symptoms last about a week. Ocular symptoms persist longer. • Improvement after 60-90 min, but mild ocular, respiratory and GI symptoms persist for days to weeks. IP-5 • Severe intoxication with respiratory insufficiency (central, muscular, and due to secretions), seizures and severe ocular and GI symptoms. • Brief seizures/coma (±15 min) but severe respiratory, muscular, and neurological symptoms persist for 1-2 h, slowly improving over days to weeks.
There is no mention of heart rate or blood pressure in the AMedP-8(C) injury profiles. The goal of these profiles is to provide realistic profiles for the general population, which carries a large degree of uncertainty. We therefore chose not to report exact numbers and opted for relative values instead. Sarin and other G-agents have been described as having very strong nicotinergic effects on the cardiovascular system, leading to hypertension and tachycardia even in severe intoxications (25). Bradycardia and hypotension are therefore only expected as very late signs of the most severe intoxication due to a terminal hypoxic state, so they are assumed only to be present when respiratory and neurological impairment is so overwhelming that it causes death. In practice, this means that they are only assumed to be present in IP-6. There is not enough information available in the description of the neurological category of the Injury Profiles to correlate the full Glasgow Coma Scale (GCS) based on the description alone.
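The respiratory-related rules above amount to a small severity-to-parameter lookup. A minimal sketch of how such rules might be coded is given below, assuming severity values of 0 (none) to 4 (very severe); the function names are ours, the snoring threshold and the ranges used for respiratory severities 0 and 3 are our own assumptions, and the remaining thresholds are simply the ones quoted in the text.

```python
def airway_status(neurological: int, motor: int) -> str:
    """Airway is assumed obstructed when either the neurological or the motor
    category reaches the worst severity (4); the snoring threshold is an assumption."""
    if neurological >= 4 or motor >= 4:
        return "obstruction"
    if neurological == 3 or motor == 3:
        return "snoring"
    return "normal"

def respiratory_rate_range(respiratory: int) -> tuple:
    """Respiratory severity -> assumed breaths-per-minute range."""
    ranges = {
        0: (12, 20),   # no involvement (normal range, our assumption)
        1: (20, 30),   # mild impairment
        2: (30, 40),   # moderate impairment ("30+")
        3: (8, 40),    # severe: tachypnoea, then fatigue and intermittent apnea (assumed span)
        4: (0, 0),     # very severe: flaccid paralysis and secondary apnea
    }
    return ranges[respiratory]

def spo2_range(respiratory: int) -> tuple:
    """Oxygen saturation bands (%) following the categories of Raux et al."""
    bands = {0: (95, 100), 1: (90, 95), 2: (85, 90), 3: (80, 85), 4: (0, 80)}
    return bands[respiratory]
```

In a simulation, such a lookup could be evaluated at each time point of a profile's severity evolution to obtain the corresponding clinical parameter ranges.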
Toxicological Physiologically-based Pharmacokinetic/Pharmacodynamic (PBPK/PD) modeling has proven useful in modeling symptoms and NA countermeasures (26). Rodriguez and McClellan developed such a model for inhalational sarin battlefield exposures, using data from human and animal exposures found in the literature. It incorporates probability of mortality and the effect of atropine and/or oxime administration and bioscavengers on mortality, based on a modeling of the whole-body stimulated acetylcholinesterase receptor fraction by the sarin (27). This model has been extensively validated by comparison to existing models and physiological constants, and its predictions of untreated progressions correspond very well to the neurological categories predicted by the AMedP-8(C) after a 1-min inhalational exposure. While the level of detail in the injury profile is scarce, certain relationships can be inferred. For instance, a neurological category of 4 corresponds to E1V1M1. Severe neurological compromise corresponds to V4 or lower. Mild and moderate neurological compromise cannot be linked to a single verbal score. For the motor component of the GCS there are no relationships described in the injury profile description, but we can assume that a flaccid paralysis will lead to a value of 1. For the eye-opening component of the GCS, we assume that severe ocular involvement will prevent eye opening due to antalgic blepharo- and ciliary spasms, corresponding to an E value of 2. Miosis is described starting at mild ocular involvement. GCS values that cannot be reliably linked to the level of neurological compromise are represented by an interval.
Extrapolation of exposure levels to the general population
To adapt the exposure intervals of AMedP-8(C) to a probit model, we referenced NATO AMedP-7.5, Edition A, Version 1, NATO Planning Guide for the Estimation of CBRN Casualties (October 2017), which supersedes the aforementioned AMedP-8(C). It describes a methodology to estimate the proportion of a population affected by chemical and biological threats, for the purposes of casualty estimation. In this publication the authors present a probit model, as well as a methodology to apply it to the AMedP-8(C) injury profiles (28). Equation 1 is used to estimate the probability of an individual conforming to an injury profile, assuming a normal healthy 70 kg victim with a minute ventilation of 15 liters per minute, and takes the form
P(IP) = Φ(PS × log10(Exposure / ECt50)),   (Equation 1)
where Φ denotes the standard normal cumulative distribution function. In this equation, the ECt50 represents the exposed concentration time (measured in mg.min.m−3) for which half the population will be affected in the way described by the injury profile (or worse). We assume this to be the mathematical average of the exposure range interval. The lack of an upper bound on the dosage range for IP-6 creates a problem for this assumption. In the original AMedP-8(C) publication it is reported that the LD50 of sarin is located within the IP-6 range, implying that not all victims exposed to this range will be lethally injured. Since the development of AMedP-7.5 the LD50 value was decreased to 33 mg.min.m−3. The probit model is centered around the ECt50 value, which, together with the assumption that IP-6 is a lethal profile, means the LD50 value can be used as the ECt50. These ECt50 values assume a body weight of 70 kg and a minute ventilation of 15 liters per minute. The probit slope (PS) is a parameter that represents the genetic and physiological variability in the response to the toxic agent. The lower the PS, the more varied the effects of an exposure.
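Read this way, Equation 1 together with the differencing step described just below can be turned into a short calculation. In the sketch that follows, only the 33 mg.min.m−3 value for IP-6 and approximate probit slopes of 2.5 (mild) and 6.5 (non-mild) are taken from the text; the remaining ECt50 values are purely illustrative placeholders and do not correspond to Table 2.

```python
from math import log10
from statistics import NormalDist

def p_at_least(exposure_ct: float, ect50: float, probit_slope: float) -> float:
    """Equation 1: probability of presenting at least as severely as the profile
    with the given ECt50 (exposure and ECt50 in mg.min.m-3)."""
    return NormalDist().cdf(probit_slope * log10(exposure_ct / ect50))

# (name, ECt50, probit slope), ordered from mildest (IP-1) to most severe (IP-6).
# Only the 33 mg.min.m-3 for IP-6 and the approximate slopes come from the text;
# the remaining ECt50 values are placeholders for illustration.
PROFILES = [("IP-1", 0.5, 2.5), ("IP-2", 2.0, 2.5), ("IP-3", 6.0, 6.5),
            ("IP-4", 12.0, 6.5), ("IP-5", 20.0, 6.5), ("IP-6", 33.0, 6.5)]

def profile_probabilities(exposure_ct: float) -> dict:
    """Profile-specific probabilities, obtained by subtracting the probability
    of conforming to the next, more severe profile."""
    cumulative = [p_at_least(exposure_ct, ect50, ps) for _, ect50, ps in PROFILES]
    result = {"unaffected": 1.0 - cumulative[0]}
    for i, (name, _, _) in enumerate(PROFILES):
        worse = cumulative[i + 1] if i + 1 < len(PROFILES) else 0.0
        result[name] = cumulative[i] - worse
    return result
```

For a given exposure, profile_probabilities returns one probability per profile plus the share of individuals expected to remain unaffected.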
The total exposure is calculated as an aggregate concentration-time product (ECt) and is also expressed in mg.min.m −3 . Equation 1 returns a probability of an injury level being at least as severe as the IP. One can calculate the IP specific probability by subtracting the probability that a victim conforms to a worse IP. For example: if a victim is calculated to have a probability of 70% of presenting as IP-5 and a 40% probability of presenting as IP-6, then the final probability of the victim presenting as IP-5 is 30% and as IP-6 is 40%. To convert the military exposure ranges from the injury profiles to a general population, we used the extrapolation method described by Crosier and Somerville, assuming a normal population distribution for both military and civilian population (29). This method assumes that the military population consists of the healthiest 25% of the general population. They estimated an uncertainty factor based on either the top 25% of the bell curve (tail model), or a bell curve within a bell curve (bell model). The uncertainty factor for conversion of mean toxicity values between military and general population is estimated between 1.4 and 1.57 for a bell model and tail model, respectively. Similarly, the probit slope of 12 is widened to 5.9 and 7.2 for a bell model and tail model, respectively, for non-mild inhalational G-agent exposure. In the case of mild inhalation exposure, the resultant probit slope is 2.22 and 2.71, respectively, for a bell and tail model, resulting in an average probit slope of 2.5. Because neither bell nor tail model offers a theoretical benefit and the results for both models are comparable, we chose to use the average of both models for our final exposure range calculations. The resultant ECt 50 exposure intervals and PS parameters are presented in Table 2. A graphic depiction of the resultant distributions can be found in Figure 1. Table 3 displays the resulting clinical parameters and progression over time associated with every injury profile, using the algorithms described above. Discussion This methodology uses the most recent evidence available from respected and publicly available sources to create a set of injury profiles for simulation. It accounts for the genetic and physiological variation in response to OP exposure through stochastic variation using values compatible with the general population derived from solid assumptions. The described approach is applied to inhalational sarin exposure but can be extrapolated to other chemical warfare agents (CWAs) and other exposure routes by modifying the agent specific parameters and the timelines accordingly. We chose to neglect cutaneous exposure in the modeling of these specific injury profiles because they are several orders of magnitude greater than the inhalational exposure required to produce a similar effect. One of the prerequisites for assigning the victim's injury profile is an estimation of its exposed concentration time integral. These estimation methods are available in the AMedP-7.5 (30). The values calculated above using the presented method conform to the data reported in the literature. For example, Bide et al. used a bivariate surface model to estimate human toxicity from animal exposures, for which they report a lethal concentration for 50% of the population of 36 for an exposure of 2 min, as well as a slightly wider probit slope of 4.5 (31). 
However, they also state that their methodology might not be optimally suited for probit slope calculations due to selection bias and factors inherent to the animals used. We intend to use these injury profiles as part of a discrete event simulator called SIMEDIS, used to analyze the disaster medical response chain (8,32). These injury profiles are included in the building of a continuous victim model, where the evolution of a victim's health state is modeled over time as a categorical sum of the heart rate, GCS, respiratory rate, systolic blood pressure and oxygen saturation (33).
Modeling assumptions
The values presented in Table 2 are valid for inhalation exposure only. A similar methodology can be applied to the PS and ECt50 values of cutaneous exposure, as well as to different timings due to the delayed absorption. (Figure 1 shows the distribution of injury profile probability by concentration time, with the exposure in mg min m−3 on the x-axis and the cumulative probability on the y-axis.) Despite using available evidence, all injury profiles were initially created by SMEs and therefore are susceptible to bias. These injury profiles do not include delayed or chronic effects such as OP-induced peripheral neuropathy or the OP intermediate syndrome. The victim profiles do not include treatment effects, nor the timing of treatment administration with regard to the aging effect. There is no modeling of the effect of decontamination. If one were to model the effects of decontamination, one would have to include the method of exposure, the absorption mechanics of the agent and the effectiveness of decontamination in removing the agent. To our knowledge there are no published reports on any of these properties for sarin. The exposure intervals are valid for very short exposures, assume no respiratory or pharmacological protection and are valid for individuals weighing 70 kg. To our knowledge there are no data available on differences between sexes. The values also assume the applicability of Haber's law. Haber's law is a toxicological concept that states that the concentration-time product required for a set of symptoms is a constant. This means that a long exposure to low concentrations and a short exposure to high concentrations should theoretically result in similar symptoms (34). Because Haber's law has been repeatedly demonstrated not to apply to CWAs, a toxic load exponent model is proposed for longer exposures, to compensate for metabolization of organophosphates by increasing the dose required to attain the intoxication level described by the injury profiles as a function of time. Bide et al. calculated this toxic load exponent to be 1.4, based on extrapolation from multiple animal exposure experiments, for exposures up to 30 min (31). The AMedP-8(C) injury profiles start after victims have received their full exposure. These injury profiles assume that the exposure happens over a relatively short (<1 min) timeframe and that the exposure ends before the victim's clinical evolution starts. Discussions with SMEs and data from Rodriguez and McClellan's PBPK/PD model suggest that there will be a time delay of 1-2 min per injury profile before the sarin has reached adequate distribution and receptor binding for victims to exhibit their maximum symptoms. Methods of dispersion and release are outside the scope of this publication. However, slower exposures will result in an even more pronounced progressive degradation of the clinical condition.
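A worked illustration of the toxic load idea may help. Under the usual reading of the model, the effect is driven by C^n × t rather than C × t, so the concentration-time product needed for a given intoxication level grows with exposure duration. In the sketch below, only the exponent of 1.4 comes from Bide et al.; the 1-minute reference exposure and the 33 mg.min.m−3 reference dose are our own illustrative choices.

```python
def equivalent_ct(ct_ref: float, t_ref_min: float, t_min: float, n: float = 1.4) -> float:
    """Concentration-time product (mg.min.m-3) producing the same toxic load
    C**n * t as a reference dose ct_ref delivered over t_ref_min minutes,
    when the exposure is instead spread over t_min minutes."""
    return ct_ref * (t_min / t_ref_min) ** (1.0 - 1.0 / n)

# With n = 1.4, stretching a 1-minute exposure to 30 minutes raises the Ct
# needed for the same effect by a factor of about 2.6 (here roughly 87 mg.min.m-3).
print(equivalent_ct(ct_ref=33.0, t_ref_min=1.0, t_min=30.0))
```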
We would also like to point out that the estimation of exposure is considered outside the scope of this work due to additional complexities. There are several methods available, from simple gaussian puff models to the computationally intensive but current gold standard of computational fluid dynamics (35)(36)(37). We refer the reader to the AMedP-7.5 publication for more details on specific exposure settings, such as the effects of physical exercise and respiratory protection (30). Special considerations Treatment of these injuries is outside the scope of this publication, but discussed here Nerve agent treatment classically consists of the antimuscarinic agent atropine to reduce secretions and an oxime to reactivate the acetylcholinesterase (38). Incidental seizures are treated with benzodiazepines which are GABA-A receptor agonists and NMDA receptor antagonists such as ketamine when they become ineffective due to GABA-A receptor downregulation or when sedation is required (39)(40)(41). In the case of sarin, the oxime of choice is pralidoxime (42,43). Oximes are classically thought to be best administered within 1 h of exposure, however continued administration starting after 1 h has shown to still have significant beneficial effects (44). The modeling of the effect of these agents is beyond the scope of this work but reports from the 1995 Tokyo attack show a variable response to atropinization: both an increase and a paradoxical decrease of heart rate and systolic blood pressure have been reported. Seizures were successfully treated using diazepam titration. Oxime administration and an increase of plasma cholinesterase levels were associated with improvements in respiratory conditions and normalization of miosis (21)(22)(23). Reports and theoretical models show that exposures of up to threefold the LD50 values are considered survivable with rapid and adequate treatment and frequent re-administration of antidotes (27,45). Nerve agent exposure is also described to leave lasting effects such as OP-ester induced delayed neurotoxicity. These effects should be considered in victims of nerve agent exposure but are not modeled here as they do not alter the modeled parameters in a significant fashion. It is however assumed that early oxime administration lowers the probability of the development of long-term neurological side effects (41). Decontamination and secondary contamination are outside the scope of this work due to the lack of available data. Retrospective reports of the Japanese sarin attacks report moderate secondary contamination of health care workers in closed rooms (46). When designing a simulation exercise, close attention should be paid to the scenario to assess the effects of decontamination on delayed absorption. The exposure route as well as dispersion method can lead to significant differences in the symptomatology and onset thereof, as well as secondary contamination of health care providers (47). Possible applications of these injury profiles are serving as a basis to create scenarios of a sarin (or other nerve agent) mass casualty incident, as well as for creating realistic victims or determining the injury severity distributions of victims for an assumed exposure. This methodology has already been successfully implemented in a computer simulation model of an urban subway station chemical attack, as well as a live decontamination and victim reception exercise in a tertiary care hospital (48,49). 
Conclusion
Modeling and simulation, if used correctly and in conjunction with empirical data gathered from lessons learned, can assist in providing the evidence-based practices needed for effective and efficient response decisions and interventions, considering the contextual factors of the affected area and the specific disaster scenario. In this paper a methodology is presented to create a set of realistic injury profiles and severity distributions for organophosphate chemical warfare agent victims for known exposures. Not included in this work are the effects of decontamination and secondary contamination, the progression of treated victims and long-term effects.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
2023-06-30T13:21:05.009Z
2023-06-29T00:00:00.000
{ "year": 2023, "sha1": "b85ec5826e25e7d10b0160a8668d93987b2d55cf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "b85ec5826e25e7d10b0160a8668d93987b2d55cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237308723
pes2o/s2orc
v3-fos-license
Case Report: The Formation of a Truncated PAX5 Transcript in a Case of Ph-Positive Mixed Phenotype Acute Leukemia With dic(7;9)(p11-p13;p13)

PAX5 plays a critical role in B-cell precursor development and is involved in various chromosomal translocations that involve the fusion of a portion of PAX5 to at least 49 different partners reported to date. Here, we identified a novel PAX5 fusion transcript in a Ph-positive mixed phenotype acute leukemia case with dic(7;9)(p13;p13), in which a translocation juxtaposes the 5' region of PAX5 and the ubiquitin-conjugating enzyme E2D4 (UBE2D4) to generate a PAX5-UBE2D4 fusion gene. To further explore the general characteristics and function of PAX5-UBE2D4, we cloned the full-length cDNA, which was amplified from the bone marrow of the patient. Interestingly, the fusion was located in the nucleus and negatively affected PAX5 transcription activity. Importantly, the fusion promoted tumor growth in nude mice and the proliferation of NIH3T3 cells in vitro. In conclusion, the fusion resulted in partial oncogenic activity, in contrast to the tumor suppressor activity of wild-type PAX5.

INTRODUCTION

The transcription factor PAX5 plays a critical role in B-cell development and differentiation and has been considered to function as a tumor suppressor in B cell precursor acute lymphoblastic leukemia (BCP-ALL). PAX5 alterations, including deletions, mutations, and rearrangements, occur in approximately 30% of BCP-ALL cases. Chromosomal rearrangements account for 2-3% of cases (1)(2)(3). It has been well reported that a number of PAX5 rearrangements give rise to in-frame fusion transcripts that encode chimeric proteins that consistently retain the PAX5 DNA binding domain at the N terminus, but the C-terminal regions are derived from various partners, including transcription factors, kinases and structural proteins (4)(5)(6)(7)(8). To date, at least 58 fusions have been identified, and most of them have been found in association with BCP-ALL (9). Only a limited number of the reported fusions were recurrent, such as PAX5-ETV6, PAX5-ELN, and PAX5-PML, while most have been found in single cases, such as PAX5/ASXL1 and PAX5/FOXP1 (9). In addition, half of the rearrangements have resulted in PAX5 fusions to genes in the opposite orientation, out-of-frame fusions or the expression of truncated isoforms (6). Here, we first identified a novel chromosomal dic(7;9)(p13;p13) translocation in a Ph-positive mixed phenotype acute leukemia (MPAL) patient, resulting in a PAX5 out-of-frame fusion with the ubiquitin-conjugating enzyme E2D4 (UBE2D4), which functions as a truncated PAX5. In addition, the fusion showed partial oncogenic activity, which was in contrast with the tumor suppressor ability of wild-type (WT) PAX5.

CASE DESCRIPTION

A 16-year-old boy was referred to our hospital in January 2010 with a one-month history of recurrent fever and weakness. Physical examination indicated axillary lymphadenopathy and hepatosplenomegaly without anemic conjunctiva. The peripheral blood counts at diagnosis revealed multilineage cytopenia: hemoglobin 12 g/dL, white blood cells (WBCs) 12.87 × 10⁹/L, and platelets 31 × 10⁹/L. Bone marrow (BM) aspiration showed hypercellularity with 89.2% blasts and lymphatic changes. Flow cytometric analysis revealed that 23.4% of the BM blast cells were positive for HLA-DR, CD10, CD20, CD19, CD13, CD33, CD34, MPO and CD79a but negative for CD117, CD14, CD15, CD2, CD3, and CD7 (Supplementary Figure 1).
Then, the patient was diagnosed with MPAL with co-expression of myeloid and B lymphoid lineage antigens according to the 2016 WHO classification. The karyotype of the bone marrow cells was 45,XY,dic(7;9)(p11-13;p13),t(9;22)(q34;q11)[8]/46,XY[9]. The BCR/ABL (p190) fusion gene was detected by multiplex reverse transcription-polymerase chain reaction (RT-PCR), thereby confirming the diagnosis of Ph-positive mixed phenotype acute leukemia. The patient accepted tyrosine kinase inhibitor therapy and achieved remission, which was followed by 2 DVP chemotherapy sessions (with 70 mg daunorubicin, 4 mg vincristine and 20 mg dexamethasone). Unfortunately, the patient finally had a cytological relapse in the bone marrow and died 5 months after the initial diagnosis.

DISCUSSION AND CONCLUSION

Based on the karyotype of the patient, array comparative genomic hybridization (array-CGH) analysis was performed, and the results indicated that the breakpoints were located in the PAX5 and UBE2D4 genes and revealed the deletion of large parts of 9p and 7p (Figure 1A). When using the FISH (fluorescence in situ hybridization) probes RP11-652D9 and RP11-344B23, corresponding to the 5' and 3' sequences of the PAX5 gene, respectively, we observed a red signal and a yellow signal, which was consistent with the results of the array-CGH analysis (Figure 1B and Supplementary Figure 2). Then, RT-PCR amplification revealed the presence of PAX5-UBE2D4 fusion transcripts (Supplementary Figure 3). Sanger sequencing confirmed the out-of-frame fusion of PAX5 exon 7 (NM_016734) with UBE2D4 exon 2 (NM_015983.4), resulting in an analogous truncated PAX5 protein with the DNA binding (PBD) domain, OCT domain and homeodomain (HD) of PAX5 and an additional 19-amino acid tail, which does not correspond to any predicted functional domain (Figure 1C). To investigate the function of the fusion, we amplified the full-length cDNA sequence of PAX5 and UBE2D4 that was retained in the fusion found in the patient, cloned it into a lentiviral vector (LV5, GenePharm Inc., Shanghai) and the pcDNA3.1 vector, and fused it with a 3×FLAG-tag. As Figure 1D shows, we observed nuclear localization of the fusion, which was expected since the fusion retained the nuclear localization signal of PAX5 (Figure 1D and Supplementary Figure 4). Furthermore, we co-transfected 293T cells with the CD19 promoter-LUC construct (PGL3), pcDNA-PAX5 and increasing amounts of the pcDNA-PAX5-UBE2D4 (PU) construct. The transcription of the luciferase reporter gene was significantly downregulated in the presence of the expression of PU alone compared with that observed in the presence of wt-PAX5 (Figure 1E). In addition, after concomitant transfection of wt-PAX5 and PU, PAX5-driven reporter gene transcription was downregulated (Figure 1E), indicating the dominant-negative activity of PU. To investigate the function of PU, HEL cells were transfected with PU (HEL-PU) and the vector (HEL-LV5). Then, the cells were subcutaneously injected into 6- to 8-week-old female nude mice (n=6-11). A total of 45.5% (5/11) of the mice engrafted with HEL-PU cells developed tumors, a proportion clearly higher than that in the control (HEL-LV5, 33.3%, 2/6) group (Figures 2A, B). The mean volume of the tumors in the PU cohort was much larger than that in the control cohort (Figure 2C). In addition, the mean weight of the tumors in the HEL-PU group was greater than that in the control group (Figure 2D).
In contrast to the tumor suppressor activity of WT PAX5, the PU fusion showed at least partial oncogenic activity. Furthermore, NIH-3T3 cells expressing the PU fusion grew significantly faster than the control cells over 72 h and showed an increase in the number of colony-forming units compared with the vector control-expressing cells (Figures 2E, F). Dicentric (7;9)(p11-p13;p11-p13) is a very rare but recurrent abnormality in BCP-ALL patients, as well as in a limited number of cases involving PAX5 rearrangement. Indeed, we identified only 7 cases of dic(7;9) from among the thousands of cases with karyotypic data (Supplementary Table 1). Most cases with the translocation, dicentric abnormality or derivatives of chromosomes 7 and 9 involving PAX5 rearrangement mainly presented PAX5-LOC392027, PAX5-POM121, PAX5-ELN, and PAX5-AUTS2 (4, 8-14). Some aberrant PAX5 transcripts have also been reported, such as a case of MPAL that harbored der(9)t(7;9)(q11.2;p13) (10). To our knowledge, this is the first case of PAX5 rearrangement in a Ph-positive MPAL patient with dic(7;9). Previous studies showed that most malignant cells carrying PAX5 fusions displayed a simple karyotype (6). Coexistence of the t(9;22)(q34;q11) translocation, which resulted in the formation of the BCR-ABL1 p190 fusion in this study, might contribute to the cytogenetic complexity and suggest a poor prognosis. The partner genes involved in PAX5 fusions are heterogeneous, but this is, to our knowledge, the first report of a ubiquitin-related gene as a PAX5 fusion partner. Previous reports indicated that half of the PAX5 fusion genes gave rise to truncated PAX5 proteins, including those involving out-of-frame fusions (6). Consistently, the PAX5-UBE2D4 fusion showed competitive inhibition of wt-PAX5 transactivating activity, similar to truncated PAX5. Furthermore, the PAX5-UBE2D4 fusion presented oncogenic activity in a nude mouse model. In contrast, WT PAX5 showed tumor suppressive ability both in vivo and in vitro.

PATIENT PERSPECTIVE

From the time of diagnosis, the patient was informed of and understood the cause of his illness and the possible cause of premature death. Ultimately, he hoped to receive appropriate treatment.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.

ETHICS STATEMENT

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee(s) and with the Helsinki Declaration (as revised in 2013). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
2021-08-27T13:12:02.005Z
2021-08-26T00:00:00.000
{ "year": 2021, "sha1": "63dc1a086558bd2e916ffd7d8cd312ba9645c68a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.703612/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "63dc1a086558bd2e916ffd7d8cd312ba9645c68a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
707
pes2o/s2orc
v3-fos-license
Design of Block Transceivers with Decision Feedback Detection

This paper presents a method for jointly designing the transmitter-receiver pair in a block-by-block communication system that employs (intra-block) decision feedback detection. We provide closed-form expressions for transmitter-receiver pairs that simultaneously minimize the arithmetic mean squared error (MSE) at the decision point (assuming perfect feedback), the geometric MSE, and the bit error rate of a uniformly bit-loaded system at moderate-to-high signal-to-noise ratios. Separate expressions apply for the "zero-forcing" and "minimum MSE" (MMSE) decision feedback structures. In the MMSE case, the proposed design also maximizes the Gaussian mutual information and suggests that one can approach the capacity of the block transmission system using (independent instances of) the same (Gaussian) code for each element of the block. Our simulation studies indicate that the proposed transceivers perform significantly better than standard transceivers, and that they retain their performance advantages in the presence of error propagation.

I. INTRODUCTION

Block-by-block communication is an effective scheme for the transmission of data over dispersive media; e.g., [28]-[30], [41], [42]. In such "vector" communication schemes, blocks of data are transmitted in a manner that avoids interference between the received blocks, and hence the detector need only operate on a block-by-block basis. Two popular examples of block-by-block communication schemes are orthogonal frequency division multiplexing (OFDM) [5] and discrete multi-tone modulation (DMT) [8]. In addition, certain multiple antenna systems operate in a block-by-block fashion (e.g., [18], [20], [26], [36], [43], [45]), and block-by-block detection schemes appear in some multiuser detectors for synchronous CDMA systems [14], [15], [47]. In general, an optimal detector for a block transmission system must make a decision on the received data block as a whole, although in certain cases, such as OFDM and DMT, the elements of that block can be decoupled and simpler detection schemes obtained. Unfortunately, maximum likelihood detection of the transmitted vector can be rather computationally expensive, and simpler detectors based on linear equalization and (disjoint) symbol-by-symbol detection may incur a significant performance penalty. (This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. The work of the second author is also supported by the Canada Research Chairs program.)

The goal of the present paper is to jointly design the linear transmitter matrix and the receiver feedforward and feedback matrices so as to optimize the performance of a block-by-block communication system with an intra-block decision feedback detector (BDFD). The design is based on knowledge of the channel, and hence is an appropriate choice for systems in which there is timely, reliable feedback from the receiver to the transmitter. The proposed approach provides closed-form expressions for transceivers that minimize the arithmetic mean (over the block) of the expected squared errors (MSE) at the input to the (scalar) decision device that is implicit in the BDFD, under the standard assumption [3], [9], [10], [17], [40], [52] that the previous decisions were correct. The expressions depend on the nature of the BDFD, and separate expressions are provided for the zero-forcing (ZF) and minimum mean square error (MMSE) BDFDs.
In order to help distinguish our designs from previous work, we point out that if one is given a transmitter matrix, the design of the feedforward and feedback matrices of a ZF or MMSE-BDFD that minimize the MSE is well known; e.g., [2], [4], [9], [10], [17], [20], [40]. However, the joint minimum MSE design of the transmitter and receiver matrices has previously been deemed to be difficult (e.g., [52, p. 1338]), and hence several authors have suggested minimizing a particular lower bound on the MSE, namely the geometric mean of the expected squared errors; e.g., [9], [10], [52]. We will minimize the geometric MSE as the first step in our approach, but we will also show how the unitary matrix that parameterizes the set of transceivers which minimize the geometric MSE can be chosen so that the (arithmetic) MSE attains its minimized lower bound. Transceivers designed in the manner we propose have several additional desirable properties. In particular, the inputs to the (scalar) decision device are uncorrelated and have equal signal-to-interference-and-noise ratios (SINRs). In fact, the minimum SINR over the elements of the block is maximized. As a result, the average bit error rate (BER) is (essentially) minimized. More precisely, for systems with a ZF-BDFD our design minimizes the average BER for (uncoded) uniform QPSK signalling at moderate-to-high signal-to-noise ratios (SNRs), and also minimizes the dominant components of the BER for uniform M-ary QAM signalling. 1 For systems with an MMSE-BDFD, our design minimizes the average BER under an assumption that the residual intra-block interference is Gaussian. For the MMSE-BDFD, it is reasonably well known [9], [10], [17], [40], [52] that any transmitter that minimizes the geometric MSE (including the proposed design) also maximizes the mutual information between the transmitter and receiver for Gaussian signals. However, the standard choice from the set of transmitters that minimize the geometric MSE does not minimize the (arithmetic) MSE and produces inputs to the decision device that have potentially different SINRs for each element of the block. Therefore, in order to achieve reliable communication at rates which approach the capacity of the block transmission system, different codes (and constellations) may need to be applied for each element of the block [10]. An advantage of the proposed design is that from within the set of transmitters that minimize the geometric MSE (and maximize the Gaussian mutual information), we obtain a transceiver that also minimizes the arithmetic MSE, minimizes the BER, and provides uncorrelated inputs to the decision device that have identical (and maximized) SINRs. Since the MMSE-BDFD is a "canonical" receiver [9], [10], [23], this suggests that by using the proposed design, reliable communication at rates approaching the capacity of the block transmission system can be achieved by using independent instances of the same (Gaussian) code in each element of block. As mentioned earlier, our designs are based on the standard assumption [3], [9], [10], [17], [40], [52] that the previous symbols were correctly detected. However, error propagation is not catastrophic in block-by-block communication schemes because errors can only propagate within a single block (e.g., [10] and Section II). 
Bounds for the conventional symbol-by-symbol decision feedback equalizer (DFE) [1], [16] also suggest that good performance should be maintained in the presence of error propagation, and our simulations confirm this prediction. Furthermore, our simulation studies indicate that the proposed transceivers perform significantly better than standard transceivers, and that they retain their performance advantages in the presence of error propagation. Notation: The notation adopted in this paper is fairly standard. We conform to the following conventions: scalars are denoted by lower case letters; vectors by bold lower case letters; and matrices by bold upper case letters. The symbol I N denotes the identity matrix of size N , and 0 N ×M denotes the N × M matrix of zeros. The symbol |A| denotes the determinant of a matrix A, and tr(A) denotes its trace. The symbol E[·] denotes the expectation operator; (·) H the complex-conjugate transpose operation; (·) T the transpose operation; and [·] ij denotes the element at the intersection of the ith row and jth column of a matrix. II. BLOCK-BY-BLOCK TRANSMISSION We consider the generic block-by-block transmission system with intra-block decision feedback detection illustrated in Fig. 1. In this system, a block of M data symbols, s, is linearly precoded to construct a block of K ≥ M channel symbols, u = Fs, which is transmitted over the channel. The receiver independently processes a block of P ≥ M received samples in order to detect the data vector s. The received block, y, can be written as where the P ×K matrix H captures the effects of the channel, and v is a length P vector of additive noise samples. We will assume that the noise is circularly symmetric [37] (or, proper [35]) and Gaussian, with zero mean and positive definite correlation matrix E[vv H ] = R vv . We will also assume that the data symbols have zero mean and are white, 2 of unit energy, and not correlated with the noise, (i.e., E[ss H ] = I and E[sv H ] = 0). The model in (1) is applicable in many applications, including zero-padded or cyclic-prefixed block transmission over a scalar finite impulse response channel that is constant over the duration of the block; e.g., [6], [12], [28]- [30], [41], [42], [44]. In the zero-padded case H is a tall, lower triangular, full column rank Toeplitz matrix whose columns contain the impulse response of the channel, and in the cyclicprefixed case H is a square circulant matrix whose columns contain the channel impulse response. The model in (1) is also applicable in: vector transmission over a narrowband multiple antenna channel (e.g., [18], [20]), in which case H has no deterministic structure; in space-time block transmission over a (quasi-static) narrowband multiple antenna channel (e.g., [26], [45]), in which case H has a block diagonal structure; and in block transmission over a (quasi-static) frequency-selective multiple antenna channel (e.g., [36], [43]), in which case H is either block Toeplitz or block circulant. The intra-block decision feedback detector first preprocesses the received block y with an M × P feedforward matrix W to form z = Wy. The states of that filter,s ℓ , are the previously detected symbols in the block and the filter coefficients are different for each element of the block (indexed by m). Once a given block has been detected, the states of the feedback filter are reset to zero. That is, the symbols are detected on a block-by-block basis and hence error propagation between blocks is avoided. 
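As a purely illustrative complement to this description (not code from the paper), the following NumPy sketch simulates one block through the model y = HFs + v and runs the intra-block successive detection just described. The channel is an unstructured random matrix, the precoder simply has scaled orthonormal columns, and the feedforward/feedback pair is a zero-forcing construction obtained from a QR factorization of R_vv^{-1/2}HF; this is one standard way to satisfy the zero-forcing constraint discussed in Section III, not the optimized design of Proposition 1, and all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, P, p0, sigma2 = 8, 8, 12, 8.0, 0.05   # block sizes, power budget, noise variance (placeholders)

# Unstructured channel (e.g., the narrowband multi-antenna case) and white noise, R_vv = sigma2*I.
H = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)

# Precoder with orthonormal columns, scaled so that tr(F F^H) = p0 (unit-energy symbols).
Q0, _ = np.linalg.qr(rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M)))
F = np.sqrt(p0 / M) * Q0

# Zero-forcing BDFD from the QR factorization R_vv^{-1/2} H F = Q R:
#   B + I = D^{-1} R (unit-diagonal, upper triangular),  W = D^{-1} Q^H R_vv^{-1/2},
# so that W H F = B + I and the error covariance D^{-1} D^{-H} is diagonal.
Htilde = H / np.sqrt(sigma2)
Q, R = np.linalg.qr(Htilde @ F)
D = np.diag(np.diag(R))
U = np.linalg.solve(D, R)                     # B + I
B = U - np.eye(M)
W = np.linalg.solve(D, Q.conj().T) / np.sqrt(sigma2)

# One transmitted block of unit-energy QPSK symbols and the received block y = H F s + v.
s = (rng.choice([-1, 1], M) + 1j * rng.choice([-1, 1], M)) / np.sqrt(2)
v = np.sqrt(sigma2 / 2) * (rng.standard_normal(P) + 1j * rng.standard_normal(P))
y = H @ F @ s + v

# Successive detection starting from the Mth element of the block;
# only decisions made within the current block are fed back.
z = W @ y
s_hat = np.zeros(M, dtype=complex)
for m in range(M - 1, -1, -1):
    u_m = z[m] - B[m, m + 1:] @ s_hat[m + 1:]
    s_hat[m] = (np.sign(u_m.real) + 1j * np.sign(u_m.imag)) / np.sqrt(2)

print("symbol errors in this block:", np.sum(s_hat != s))
```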
the operation of the block transceiver in Fig. 1 is equivalent to successively making decisions on the elements of starting from the M th row. That interpretation leads to the convenient conceptual model in Fig. 2. We observe that when B = 0, the system in Fig. 2 reduces to a block transmission system with linear equalization and disjoint detection; e.g., [6], [12], [36], [41]- [43]. In fact, many of the results for the linear case can be obtained by setting B = 0 in the expressions we will derive herein. If we denote the error between the input to the detector and the transmitted data symbols by e =ŝ − s, then Under the assumption of correct past decisions (i.e., when deciding s m ,s ℓ = s ℓ for all m + 1 ≤ ℓ ≤ M ), e simplifies to The covariance of this error will play a key role in our designs. Under our statistical models for s and v, the covariance matrix of the error is The (arithmetic) MSE of the detector input is III. MINIMUM MSE TRANSCEIVERS In this section, our goal is to jointly design the transceiver elements F, B, and W so that the (arithmetic) MSE is minimized, subject to a bound, p 0 , on the average transmitted power, and constraints which ensure that the receiver performs either ZF or MMSE decision-feedback detection. The average transmitted power is given by E tr Fs(Fs) H = tr(FF H ), and hence the design problem can be stated as a functional relationship between F, B and W. The functional relationship between F, B and W determines whether the BDFD is of the ZF type or the MMSE type. This optimization problem is rather difficult to solve directly because it is not convex, and hence is subject to the standard difficulties associated with the potential for multiple local minima. However, we will use the following stages to find a solution (F, B, W) whose performance is optimal: 1) Obtain a (tight) lower bound on the MSE, and minimize that lower bound, subject to the constraint on transmission power. 2) Derive a triple (F, B, W) whose performance achieves the minimized lower bound. In the following subsections, we will perform the above stages to obtain the minimized lower bounds on the MSE and optimal transceivers for the ZF and MMSE BDFDs, respectively. The matrix H H R −1 vv H will play a key role in our designs. For later convenience we let represent the eigenvalue decomposition of H H R −1 vv H, with eigenvalues λ i arranged in non-increasing order along the diagonal of Λ. For an integer 1 ≤ k ≤ K, we also definẽ V k to be the first k columns of V andΛ k to be the upper left k × k block of Λ. In the development of our designs, we will find it convenient to parameterize the K × M precoder matrix F of rank q in terms of its singular value decomposition, where Θ contains q columns of a K × K unitary matrix, Φ is a diagonal positive definite q × q matrix, and Ψ is an M × M unitary matrix. A. Zero-forcing BDFD The zero-forcing criterion imposes the following relationship between W, F and B (see (4)): Given a P × K matrix H and an integer M ≤ min{P, K}, there exists a K × M matrix F, an M × P matrix W and an M × M strictly upper triangular matrix B such that (9) is satisfied if and only if rank(H) ≥ M , and we will make the assumption that this condition holds. 3 In order to satisfy (9), F must be chosen so that it has rank M and that rank(HF) = M . 
By substituting (9) into (4) and (5), the covariance matrix of the error can be written as If we defineW = WR 1/2 vv , then the design problem (6) can be re-written as (11) it is clear that for a given F for which there exists a solution to (11c) and a given B, the optimalW isW = (B + I)(HF) + , where (·) + denotes the (minimum-norm) Moore-Penrose pseudo-inverse. Therefore, the optimal receiver feedforward matrix can be written as SinceHF has at least as many rows as it has columns and has full column rank, If we let U = B + I, the design problem in (11) has been reduced to The first stage in our solution of (14) is to derive and minimize a lower bound on the objective function (14a). The lower bound that we will use is a simple consequence of the arithmetic-geometric mean inequality [27, p. 535]. In particular, for an M × M positive semidefinite matrix X, with equality holding if and only if X = αI for some α ≥ 0. For convenience, we will refer to (15) as the trace-determinant inequality. Applying (15) to (14a), a lower bound on the mean-square error is where we have used the fact that U is a unit-diagonal uppertriangular matrix and thus |U| = 1, and the expression for (HF) + in (13). Observe that (16b) depends only on the transmitter F and is independent of U = B + I. It is also of interest to point out that the bound in (16a) is equivalent to stating that the arithmetic MSE is bounded below by the geometric MSE; i.e. tr(R ee,ZF )/M ≥ |R ee,ZF | 1/M . Therefore, the problem of minimizing the lower bound in (16a) corresponds to minimizing the geometric MSE. The lower bound in (16) can be minimized simply by maximizing |F HHHH F|; i.e., by solving Using the ordered eigen-decomposition of H H R −1 vv H in (7), and applying the trace-determinant inequality (15), we have that Therefore, for any ZF-BDFD system, the (arithmetic) MSE is bounded below byē This bound depends only on the parameters M and p 0 , and the M largest eigenvalues of H H R −1 vv H. The second stage of the derivation of the proposed design is to determine matrices F and B so that the minimized lower bound on the arithmetic MSE in (19) is achieved, To do so, we point out that according to the trace-determinant inequality (15) and the eigenvalue decomposition of H H R −1 vv H in (7), the bound in (18b) holds with equality if and only if Φ = αI for some α > 0 and Θ =Ṽ M P, whereṼ M was defined after (7) and P is an arbitrary permutation matrix. According to the power constraint in (17b), the bound in (18c) is achieved if and only if α = p 0 /M . Therefore, precoders of the form F = p 0 /MṼ M Ψ, where Ψ is an arbitrary M × M unitary matrix, minimize the geometric MSE of a ZF-BDFD system. The remaining task is to determine matrices Ψ such that the bound in (16a) holds with equality. To do so, we observe that the trace-determinant inequality (15) holds with equality if and only if X = αI for some α ≥ 0. Therefore, (16a) holds with equality if and only if we can choose Ψ such that R ee,ZF = σ 2 e I, where σ 2 whereΛ M was defined after (7). By taking the Cholesky factor, solving (20) is equivalent to solving where Q is an M × M unitary matrix. That is, we can reduce the search for a pair (F, B) such that the minimized lower bound on the MSE is achieved to the search for a unit-diagonal upper-triangular matrix U, and unitary matrices Ψ and Q that satisfy (21). Substituting σ e into (21), we get: The following result, which is a special case of a more general result in [56], [57], indicates that a solution to (22) exists. 
Lemma 1: Let Γ be a diagonal non-singular M ×M matrix. There exists a unitary matrix S such that ΓS has an equaldiagonal 'R-factor' in its (standard) QR decomposition; i.e. ∃S such that ΓS = QR, where Q is an M ×M unitary matrix and R is an upper-triangular matrix with equal diagonal elements 2 The matrix S in Lemma 1 can be obtained by suitably modifying Algorithm 5 in [57]. The modified algorithm is provided in Appendix I. Using that algorithm, we can obtain Ψ in (22). By performing the QR decomposition ofΛ 1/2 M Ψ, we obtain an upper triangular matrixŪ whose diagonal elements are all equal to . Thus, we have established the following proposition: Proposition 1: The (arithmetic) mean-square error tr(R ee )/M of a block-by-block transceiver with a ZF-BDFD achieves its minimized lower bound of where Ψ ZF is obtained by applying the algorithm in Appendix I toΛ obtained from the QR decomposition in (22). Substituting such F and B into (12) yields the feedforward matrix W. 2 From the above derivation it is apparent that the precoder in Proposition 1, which minimizes the arithmetic MSE, also minimizes the geometric MSE. However, a precoder that minimizes the geometric MSE does not necessarily minimize the arithmetic MSE. B. MMSE-BDFD In this subsection, we consider joint transmitter-receiver design for a system based on the MMSE-BDFD. The approach is similar to that for the ZF-BDFD in the previous subsection, but the details are substantially different. Recall from Section II and Fig. 2 that the received vector is y = HFs + v. Hence, the error betweenŝ and s is e = Wy − (B + I)s. The covariance matrix of y is R yy = (HF)(HF) H + R vv , and cross-correlation matrix of s and y is R sy = (HF) H = R H ys . In order to determine the minimum MSE feedforward matrix, W MMSE , we exploit the standard first-order necessary condition for optimality known as the orthogonality principle [39], namely E[ey H ] = WR yy −(B+ I)R sy = 0. Therefore, Substituting (23) into (5), and invoking the Matrix Inver- [32], the covariance matrix of the error can be written as Our goal is to design the F and B to minimize the MSE subject to the power constraint. Letting U = B+ I, the design problem (6) can be rewritten as subject to tr(FF H ) ≤ p 0 , and (25b) U being a unit-diagonal upper-triangular matrix. (25c) Following the first stage outlined at the beginning of Section III, we now obtain and minimize a lower bound on the MSE. According to the trace-determinant inequality (15), we have that Therefore, the lower bound on the MSE can be minimized by solving: As in the ZF case, the problem of minimizing the lower bound depends only on the transmitter. We point out that the objective in (27a) is equivalent to minimizing the geometric MSE implicit in (26). Furthermore, the logarithm of the objective in (27a) is the mutual information between the transmitter and receiver for Gaussian signals. (An analogous observation has been made in several similar contexts [9], [10], [17], [40], [52].) Hence, minimizing the lower bound on the arithmetic MSE in (26) is equivalent to maximizing the Gaussian mutual information. Given that the problem in (27) is equivalent to maximizing the mutual information for Gaussian signals, the solution involves a "waterfilling" power allocation over the eigenvectors of H H R −1 vv H, [50]. More formally, the solution depends on a parameter r ≤ K which is the largest integer satisfying 1/λ r < p 0 + r j=1 λ −1 j /r. 
If we define q = min{r, M }, then the following set of precoders 4 minimize the lower bound [50], and Ψ is an arbitrary M × M unitary matrix. 5 In that case, the minimal value of the lower bound on the MSE generated by (26) and (27) is which is independent of our design parameters F and B. Moving to the second stage of our general approach, we now determine a transceiver that achieves the minimized lower bound in (29). For ease of exposition, we defineΦ = (24) and (25a), the arithmetic MSE is tr(R ee,MMSE )/M , where Using the trace-determinant inequality (15), for the MSE to achieve its minimized lower bound, we must choose U and Ψ so that R ee,MMSE =σ 2 e I, whereσ 2 e = q q/M p 0 + q j=1 λ −1 That is, a system of the form in (28) achieves the minimized lower bound on the MSE in (30) if and only if we can findǓ = (1/σ e )U and unitary matrices Ψ and Q so that According to Lemma 1, there exists a unitary matrix Ψ such that the QR decomposition of (I M +Φ TΛ qΦ ) 1/2 Ψ has an upper triangular "R-factor" with diagonal elements all equal to |(I M +Φ TΛ qΦ ) 1/2 Ψ| 1/(2M) . This unitary matrix can be obtained by applying the algorithm in Appendix I to (I M +Φ TΛ qΦ ) 1/2 . We summarize this result in the following proposition. Proposition 2: The mean-square error tr(R ee )/M for a block-by-block transceiver with an MMSE-BDFD achieves its minimized lower bound (29) (28), and Ψ MMSE is obtained by applying the algorithm in Appendix I to (I M + Φ TΛ qΦ ) 1/2 . The corresponding feedback matrix B = U − I, where U is the unit-diagonal upper-triangular matrix U = σ eǓ andǓ is obtained from the QR decomposition in (31). Substituting such F and B into (23) yields the feedforward matrix W. 2 As was the case for the ZF-BDFD in Section III-A, the precoder in Proposition 2, which minimizes the arithmetic MSE, lies within the set of precoders that minimize the geometric MSE, but a precoder chosen arbitrarily from the set of precoders that minimize the geometric MSE does not necessarily minimize the arithmetic MSE. This observation provides a connection between the proposed design and an earlier design for a more general overlapping block transmission system in which the transmitter was designed to minimize the geometric MSE [52]. In the context of the block-by-block transmission schemes that we have considered, the design in [52] corresponds to choosing Ψ = I M , rather than choice of Ψ = Ψ MMSE in Proposition 2. While the choice of Ψ = I M results in a system that minimizes the geometric MSE, it does not minimize the arithmetic MSE in the general case. In addition, the SINR for each element of the block may be different. In contrast, the choice of Ψ = Ψ MMSE minimizes the geometric MSE and the arithmetic MSE, and provides an equal SINR for each element of the block. The choice of Ψ also has an impact on the nature of coding strategies for approaching the capacity of the block-by-block transmission system. From the discussion following (27) it is evident that the Gaussian mutual information is maximized by choosing M = r and employing a transmitter matrix of the form F =Ṽ r ΦΨ, where Φ satisfies (28) and Ψ is an arbitrary r × r unitary matrix. Since the MMSE-BDFD is a "canonical" receiver 6 for Gaussian signals [9], [10], [23], this suggests that by using sufficiently powerful codes, reliable communication at rates approaching the capacity of the block transmission system can be achieved by employing any F of this form and the MMSE-BDFD [9], [10], [23]. 
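To make the power allocation behind the MMSE precoder concrete, the sketch below implements a water-filling of the form phi_i^2 = mu - 1/lambda_i, with the water level mu fixed by the power constraint and r chosen as the largest integer satisfying the condition stated before (28). This standard form is an assumption about the normalization in (28), which is not reproduced here, and the rotation Psi_MMSE obtained from the Appendix I algorithm is omitted; the eigenvalues and power budget in the example are placeholders.

```python
import numpy as np

def mmse_waterfilling(lam, p0, M=None):
    """Water-filling over the eigenvalues lam of H^H R_vv^{-1} H (sketch only).

    Returns the number of active eigenmodes q, the water level mu, and the
    per-mode powers phi^2, assuming the standard form phi_i^2 = mu - 1/lam_i."""
    lam = np.sort(np.asarray(lam, dtype=float))[::-1]        # non-increasing order
    # r = largest integer with 1/lam_r < (p0 + sum_{j<=r} 1/lam_j) / r
    r = 1
    for k in range(1, len(lam) + 1):
        if 1.0 / lam[k - 1] < (p0 + np.sum(1.0 / lam[:k])) / k:
            r = k
    q = r if M is None else min(r, M)
    mu = (p0 + np.sum(1.0 / lam[:q])) / q                    # water level
    phi_sq = np.maximum(mu - 1.0 / lam[:q], 0.0)             # allocated powers (sum to p0)
    return q, mu, phi_sq

# Example with strong and weak eigenmodes; weak modes may receive no power at all,
# which contrasts with the uniform allocation of the optimized ZF design.
q, mu, phi_sq = mmse_waterfilling([4.0, 2.0, 0.5, 0.01], p0=2.0)
print(q, mu, phi_sq, phi_sq.sum())
```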
The choice Ψ = I r results in a "vector coding" scheme [10], [29], [30], [36], [41] in which the feedback component of the MMSE-BDFD is inactive; i.e., B = 0. Vector coding induces an equivalent system with r parallel Gaussian subchannels, each with a possibly different SNR ρ i . (Standard discrete multitone (DMT) modulation schemes [5], [8] are a class of vector coding schemes.) Therefore, one can approach the capacity of the block transmission scheme by choosing the code for the ith element of the block to be one that approximates the ideal Gaussian code of rate b i = log 2 (1 + ρ i ) bits per channel use. (Such approximations will often involve the selection of a constellation for each element of the block.) The choice Ψ = Ψ MMSE results in a system in which the feedback component of the MMSE-BDFD is active, and the inputs to the decision device are uncorrelated and have identical SINRs ρ. Since the MMSE-BDFD is a canonical receiver, this suggests that one can also approach the capacity of the block transmission system by employing an independent instance of the same approximation of the ideal Gaussian code of rate b = log 2 (1 + ρ) for each element of the block. The MMSE-BDFD used when Ψ = Ψ MMSE is more complicated 6 The term "canonical" is used to denote the fact that in the absence of error propagation, employing an MMSE-BDFD in place of the optimal detector does not reduce the achievable data rate [9], [10]. Methods for exploiting this property of the MMSE-BDFD were described in [24], [48]. to implement than the linear detector of the vector coding scheme because of the need to compute the feedback signal. However, the vector coding approach requires the design (and implementation) of (up to) r codes, one for each element of the block, whereas the proposed design requires the design of only one code. IV. BIT ERROR RATE PERFORMANCE In this section, we show that the (F, B) pairs designed in Section III to minimize the arithmetic MSE also minimize the (dominant components of the uncoded) bit error rate (BER) of a block transmission system with uniform bit loading at moderate-to-high block SNRs. We define the average BER of the detected signal to be the average of the probability of error of each element of the block; i.e., where P e,i denotes the BER of the ith symbol s i . For ease of exposition, we will deal with the ZF and MMSE-BDFDs separately. We will begin with the case of the ZF-BDFD. A. ZF-BDFD For the ZF-BDFD 7 and for square 8 QAM signalling with 2b i bits per symbol, if all the previous decisions are correct P e,i is closely approximated 9 by [7] P e,i ≈P e,i = α i erfc where erfc(x) = (2/ √ π) ∞ x e −z 2 dz is the error function complement, ρ i,ZF is the decision point SNR for the ith symbol in the block, α i = Under the assumption that all the previous symbols were correctly detected, we have that Therefore, the average BER can be closely approximated by 7 We implicitly assume that rank(H) ≥ M so that the ZF-BDFD exists. 8 For notational simplicity we have restricted our attention to square QAM constellations. The extension to rectangular QAM constellations can be derived in a straightforward manner using the BER expressions in [7], [53]. 9 In the case of QPSK signalling, the expression in (33), in which ζ i = 0, is exact. Since our precoders generate equal decision point SNRs for each element of the block, we will assume uniform bit-loading in the remainder of this section, and therefore we will drop the element index, i, in α i , β i and ζ i . 
When [R ee,ZF ] ii < 2β/3, which corresponds to moderate-to-high SNRs,P e is a convex function of [R ee ] ii , [12], [13], [36]. By applying Jensen's inequality [11] to (36), we obtain the following lower bound on the average BER P e ≥ α erfc βM/ tr(R ee,ZF ) + ζ erfc 3 βM/ tr(R ee,ZF ) . (37) Equality in (37) holds if and only if the diagonal elements of R ee,ZF are equal. Equation (37) exposes an intriguing relationship between the (arithmetic) MSE and the BER. Since minimizing tr(R ee,ZF ) simultaneously minimizes both terms in the summation on the right hand side of (37), minimizing the lower bound onP e in (37) is equivalent to minimizing the MSE; i.e., it is equivalent to minimizing tr(R ee,ZF ). Therefore, the lower bound onP e achieves its minimum value if the MSE is minimal. However, for the actualP e to achieve its lower bound (i.e., for (37) to hold with equality), the diagonal elements of R ee,ZF must be identical. 10 Fortunately, the design proposed in Proposition 1 results in R ee,ZF = σ 2 e I, and hence the proposed design, which minimizes the (arithmetic) MSE of a ZF-BDFD, also minimizes the BER of the ZF-BDFD at moderate-to-high SNRs, in the sense that it minimizesP e in (36). B. MMSE-BDFD The analysis of the previous section can be extended to the case of the MMSE-BDFD if the residual intra-block interference on each element of the block is approximated by a Gaussian random variable. For large block sizes, this approximation is (almost surely) sufficiently accurate for all but the last few elements of the block (c.f., [25], [38], [54]), and hence it is appropriate for our analysis. In order to account for the bias in the MMSE-BDFD (e.g., [9]), we can express the BER as a function of the decision point SINR of the ith element of the block [9], [10], [36], As was the case for the ZF-BDFD, this function is convex in [R ee,MMSE ] ii when [R ee,MMSE ] ii is below a (reasonably large) threshold [6], [36], and hence for a system in which uniform bit loading is applied, Jensen's inequality can be used to show thatP e ≥ α erfc β M/ tr(R ee,MMSE ) − 1 + ζ erfc 3 β M/ tr(R ee,MMSE ) − 1 , (40) with equality holding when the diagonal elements of R ee,MMSE are equal. Hence, using similar arguments to those used in the case of the ZF-BDFD, the design proposed in Proposition 2, which minimizes the arithmetic MSE of the MMSE-BDFD and results in R ee,MMSE =σ 2 e I, also minimizes the BER of the MMSE-BDFD at moderate-to-high SNRs, in the sense that it minimizesP e in (39). 11 V. PERFORMANCE ANALYSIS In Section IV it was shown that the precoders that we designed in Section III (essentially) minimize the BER of the BDFD, under the assumption that the decisions that are fed back in the receiver are correct. It can also be shown (see Appendix II) that under the same assumption the optimized system for an MMSE-BDFD provides a lower BER than the optimized system for a ZF-BDFD, and that each optimized BDFD system provides a lower BER than the optimized system for the corresponding linear detector; c.f., [6], [12], [36]. That said, an incorrect decision in a BDFD can make it more likely that subsequent errors will occur by feeding back incorrect decisions. This may lead to error propagation across the block. (Recall that error propagation between blocks is explicitly avoided in block-by-block communication systems.) 
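The following toy calculation illustrates the Jensen-inequality argument of Section IV: for two error covariance matrices with the same trace, the one with equal diagonal entries yields the lower average BER. It assumes Gray-coded QPSK with unit-energy symbols, Gaussian error at the decision point, and the ZF-BDFD relation rho_i = 1/[R_ee]_ii, so that the per-symbol BER is 0.5*erfc(sqrt(rho_i/2)); the MSE values themselves are placeholders chosen to lie in the convex (moderate-to-high SNR) region.

```python
import numpy as np
from scipy.special import erfc

def avg_qpsk_ber(mse_diag):
    """Average QPSK BER over a block, given the per-symbol MSEs [R_ee]_ii (ZF case, Gaussian error)."""
    rho = 1.0 / np.asarray(mse_diag, dtype=float)   # decision-point SNR per symbol
    return np.mean(0.5 * erfc(np.sqrt(rho / 2.0)))

M, total_mse = 16, 0.8                               # same trace tr(R_ee) in both cases
equal = np.full(M, total_mse / M)                    # equal diagonal (the proposed designs)
spread = np.linspace(0.5, 1.5, M)
unequal = total_mse * spread / spread.sum()          # same trace, unequal per-symbol MSEs

# In the convex region, the equal-diagonal case attains the Jensen lower bound.
print(avg_qpsk_ber(equal), "<=", avg_qpsk_ber(unequal))
```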
A standard bound on the probability of error of a conventional decision feedback equalizer in the presence of error propagation is a simple multiple of the probability of error in the absence of error propagation [16]. This suggests that the systems designed in Section III should perform well in the presence of error propagation. (A bound that is sometimes tighter [1] generates similar insight.) In this section, we seek to verify these suggestions by analyzing, via simulation, the (uncoded) BER performance of the system when error propagation may occur. We will consider two communication scenarios: zeropadded block transmission [41], [42], [44] through a (quasistatic) scalar finite impulse response (FIR) frequency-selective fading channel that is constant over the length of the block; and transmission through a narrowband (i.e., frequency-flat) multiple antenna fading channel with at least as many receive antennas as transmit antennas [18]. In the first scenario, the channel matrix H is a tall, lower triangular, Toeplitz matrix, but in the second scenario H does not possess any deterministic structure. We will evaluate the average BER performance of various transceivers for these channels in the presence of additive white Gaussian noise at the receiver; i.e., R vv = σ 2 I. We will plot the BER performance curves as a function of the (system) SNR, which we define as being the ratio of the transmitted energy per symbol to the noise variance; i.e., (p 0 /M )/σ 2 . In addition to the transceivers we designed for the ZF-BDFD and MMSE-BDFD in Section III, for which the precoders are denoted by F OPT-ZF-BDFD and F OPT-MMSE-BDFD , respectively, when M = K we will also consider the direct transmission scheme, for which the precoder is and the discrete Fourier transform (DFT) precoded scheme, for which the precoder is where D is the normalized M × M DFT matrix. For the precoders in (41) and (42), the receiver matrices B and W are chosen according to the (separate) design procedures for the ZF-BDFD and MMSE-BDFD in [44]. (Note that the precoders in the direct and DFT schemes are channel independent.) For all these precoders, we provide BER curves for the idealized detector, in which the decisions that are fed back are correct, and for the practical detector, in which the actual decisions are fed back (and hence error propagation may occur). In order to assess the extent of the performance gains (derived in Appendix II) of the optimized BDFD systems over the optimized system for the corresponding linear detector, we will include the performance of systems with linear ZF and MMSE detection and precoders designed so that the BER at moderate-to-high block SNRs is minimized [6], [12], [36]. Using the notational conventions in Sections II and III, in particular the ordered eigen decomposition H H R −1 vv H = VΛV H , a minimum BER precoder for the linear ZF detector is [12] F OPT-ZF-L = p 0 / tr(Λ and one for the linear MMSE detector is [6], [36] where the integer k = min{ℓ, M }, where ℓ is the largest integer such that and Υ is a k × k diagonal matrix with diagonal elements satisfying A. Scalar frequency-selective fading channel In this section we consider the case of zero-padded block transmission through a (quasi-static) scalar FIR frequencyselective fading channel. In this case, the direct transmission scheme in (41) is sometimes referred to as the "single-carrier zero-padded" (SCZP) scheme [49], and the DFT precoded scheme is sometimes called the "zero-padded OFDM" (ZP-OFDM) scheme [34]. 
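For the zero-padded scalar-channel scenario, the tall lower-triangular Toeplitz channel matrix and the two channel-independent precoders can be built as in the sketch below (illustrative only; whether the DFT-precoded scheme uses D or its conjugate transpose depends on the convention behind (42), and either choice meets the power constraint).

```python
import numpy as np

def zero_padded_channel(h, K):
    """(K+L) x K lower-triangular Toeplitz matrix whose columns carry the impulse response h (length L+1)."""
    L = len(h) - 1
    H = np.zeros((K + L, K), dtype=complex)
    for k in range(K):
        H[k:k + L + 1, k] = h
    return H

rng = np.random.default_rng(1)
M = K = 16                                  # block length (values used in the example of Section V-A)
L = 4                                       # channel order, i.e. L+1 = 5 taps
p0 = float(M)                               # power budget (placeholder)

# i.i.d. zero-mean circular Gaussian taps, normalized to a unit-energy impulse response.
h = (rng.standard_normal(L + 1) + 1j * rng.standard_normal(L + 1)) / np.sqrt(2)
h /= np.linalg.norm(h)

H = zero_padded_channel(h, K)               # P x K with P = K + L = 20

# Channel-independent precoders, scaled so that tr(F F^H) = p0:
F_sczp = np.sqrt(p0 / M) * np.eye(M)                  # direct transmission ("single-carrier zero-padded")
D = np.fft.fft(np.eye(M), norm="ortho")               # normalized M x M DFT matrix
F_zpofdm = np.sqrt(p0 / M) * D.conj().T               # DFT precoding ("zero-padded OFDM")
```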
We consider a scenario in which the channel is of length L + 1 = 5 and L zeros are appended to each block of channel symbols u. The symbol block s is of length M = 16, and we consider square precoders F. (Hence, K = 16 and P = K + L = 20.) Each element of s is an independently selected symbol from the 4-QAM constellation, with each constellation point being equally likely. In Fig. 3 we plot the BER for the ZF-BDFD transceivers, averaged over ten thousand channel realizations. (In the optimized designs, the transceiver was re-designed for each channel realization.) For each channel realization the tap coefficients were generated independently from a zero-mean circular complex Gaussian distribution and then normalized so that the impulse response had unit energy. It is clear from the solid curves in Fig. 3 that in the absence of error propagation, the design proposed in Proposition 1 performs better than all the other transmission schemes, 12 although the SNR gain over the direct transmission (SCZP) scheme is rather small (around 0.5 dB at a BER of 10 −4 ). Furthermore, the dashed curves demonstrate that this performance advantage is maintained in the presence of error propagation. In particular, the performance of the proposed scheme in the presence of error propagation is as good as the performance of the SCZP scheme in the absence of error propagation. The combination of the DFT transmitter (ZP-OFDM) and the ZF-BDFD performs poorly at moderate-to-high block SNRs. In fact, it is apparent from Fig. 3 that the linear ZF detection scheme with its minimum BER precoder [12] performs better than the combination of the DFT transmitter and the ZF-BDFD. However, as predicted by the analysis in Appendix II, the optimal precoder for the ZF-BDFD provides substantially better performance than the combination of the linear ZF detector and its minimum BER precoder. The corresponding results for the MMSE-BDFD are provided in Fig. 4. The same trends are observed and the SNR gains are at least as large. Furthermore, the improved BER performance of the optimized MMSE-BDFD system over the optimized ZF-BDFD system predicted by the analysis in Appendix II can be clearly observed. In both Figs 3 and 4, the performance of the optimized scheme in the absence of error propagation is indistinguishable from the corresponding bound onP e in Section IV; c.f., (37) and (40), respectively. An interesting by-product of the above performance evaluation is the good performance provided by the (channel independent) direct transmission scheme (SCZP). In fact, the SCZP scheme is an optimal channel independent transmission scheme for systems that employ linear [31] or maximum likelihood [49], [55] detection, and it approaches the diversitymultiplexing trade-off for a standard class of FIR channels as the block length grows [21]. These desirable characteristics are due, in part, to the fact that the SCZP scheme preserves the good conditioning properties implicit in the tall lowertriangular Toeplitz structure of the channel matrix. 12 As predicted by the derivation in Section IV-A, the proposed precoder performs better than all other transmission schemes for each realization of the channel. B. Multiple antenna systems In this example, we consider the case of narrowband transmission over a multiple antenna channel with at least as many receiver antennas as transmitter antennas. 
In this scenario, the combination of the direct transmission scheme and a BDFD is sometimes referred to as (uncoded) V-BLAST with a (fixedorder) "nulling and cancelling" receiver [4], [18], [20]. We consider a standard Rayleigh model for the channel in which the paths between antennas are modelled as independent zeromean circular Gaussian random variables of unit variance. We will focus on scenarios with K = 3 transmitter antennas and P = 3 or 4 receiver antennas in which M = K = 3 symbols are transmitted per channel use. Each element of s is an independent and equally-likely 4-QAM symbol. Therefore, the bit rate of each scheme is 6 bits-per-channeluse (bpcu). In Figs 5 and 6, we plot the average BER performance over ten thousand channel realizations of the various transmission schemes with the ZF receivers, and in Figs 7 and 8 we plot the corresponding curves for the MMSE receivers. While most of the basic trends from the case of the scalar frequency-selective channels are maintained in the multiple antenna scenario, the performance advantages of the precoders designed in Section III are much greater. (The SNR gains are of the order of 6-8 dB at a BER of 10 −4 .) This can be attributed to the fact that the channel matrix H does not possess any deterministic structure. In particular, the probability of encountering a channel matrix that does not have M substantial singular values is not negligible. Since the proposed designs provide significantly better performance in those cases, the average performance is also substantially improved. As expected, the performance of the optimized ZF-BDFD scheme in the absence of error propagation in Figs 5 and 6 is equal to the lower bound onP e in (37). (Recall that we are using 4-QAM signalling.) However, in the MMSE-BDFD case, the lower bound onP e in (40) is distinguishable from the simulated BER in the absence of error propagation. This is due to the fact that the block size (M = 3) is small enough for the inaccuracy of the Gaussian approximation of the residual interference to result in a discernible difference between the BER andP e . That said, even for this small block size,P e is an accurate approximation of the BER in the absence of error propagation. A few other features of Figs 5-8 are worthy of note. First, the average performance of the direct and DFT transmission schemes are essentially the same. This is to be expected because the statistics of H are unitarily invariant. Second, the increase in the diversity provided by the channel when using P = 4 receiver antennas rather than P = 3 is clear from the different slopes of the BER curves at high SNR. Finally, the performance advantage of the optimized MMSE-BDFD scheme over the optimized ZF-BDFD scheme is significant in the case of P = 4 receiver antennas and is substantial in the case of P = 3. The performance advantage of the optimized MMSE-BDFD scheme is due, in part, to the fact the power allocated to the first M eigenmodes of H H R −1 vv H depends on the corresponding eigenvalues. In particular, weak eigenmodes might not be allocated any power at all. In contrast, the optimized ZF-BDFD scheme allocates power uniformly over these eigenmodes. The larger performance advantage of the optimized MMSE-BDFD scheme in the case of P = 3 is due to the larger probability of encountering a channel matrix such that H H R −1 vv H does not have M = 3 significant eigenvalues. For reference, we have included the performance of a standard orthogonal space-time block coding (OSTBC) scheme in Figs 5-8. 
(Like the direct and DFT transmission schemes, OS-TBC schemes were designed to be applied without knowledge of the channel at the transmitter.) We have used the (symbol) rate 3/4 code in [19] (which is a simplified version of that in [45]), and hence in order to achieve a bit rate of 6 bpcu, a natural choice for the underlying constellation is 256-QAM. (We assume that the channel is constant for the four channel uses that are required to transmit the codewords.) As expected, at high SNR, the OSTBC scheme provides better BER performance than that direct transmission (V-BLAST) scheme. However, the proposed precoder (which exploits knowledge of the channel) provides substantially better performance when P = 4 receiver antennas are employed, and when P = 3 and the MMSE-BDFD receiver is used. When P = 3 receiver antennas are employed and the ZF-BDFD is used, the OSTBC scheme performs better than the optimized scheme at high SNRs. This does not contradict the optimality of the proposed transceiver design, because the values of M , K and P , and the structure of the channel matrix, are different for the OSTBC scheme. 13 The good performance of the OSTBC scheme at high SNRs is simply a manifestation of the trade-off between error rate (achievable diversity) and symbol rate in multiple antenna fading channels without outer codes [46]. (That trade-off is related to the fundamental diversity-multiplexing trade-off [58].) The symbol rate of the OSTBC scheme is significantly lower than that of the proposed scheme. 14 Hence, in the range of SNRs in which noise dominates the error performance, the proposed scheme provides better performance than the OSTBC scheme, but in the SNR range in which the channel condition dominates the error performance, the OSTBC scheme provides better performance. To illustrate that point, in Fig. 5 we plotted with unmarked curves the performance of the proposed ZF-BDFD scheme with a symbol rate of M = 2 (as distinct from the scheme with M = 3 described above). In order to maintain a bit rate of 6 bpcu, the elements of s were taken, in an independent and equally-likely fashion, from an 8-QAM constellations, and for consistency, the SNR was defined to be (p 0 /3)/σ 2 . Over the range of SNRs considered, the performance of the proposed ZF-BDFD scheme with M = 2 is substantially better than that of the OSTBC scheme, with SNR gains of over 7 dB. VI. CONCLUSION In this paper, we have jointly designed the precoder and the feedback matrix of a block-by-block transmission scheme equipped with a zero-forcing or minimum mean-square error (MMSE) intra-block decision feedback detector (BDFD). The designs minimize the arithmetic mean of the expected squared errors at the decision point, under the standard assumption that the previous symbols were correctly detected. The covariance matrix of the minimized error is white, and hence the proposed designs also minimize the (dominant components of the) bit error rate of a uniformly bit-loaded transmission system. In our simulations, the proposed systems performed significantly better than standard precoding systems, and retained their performance advantages in the presence of error propagation. In the case of the MMSE-BDFD, the proposed design also maximizes the Gaussian mutual information. 
Since the MMSE-BDFD is a "canonical" receiver [9], [10], [23], this suggests that by using the proposed transceiver design, one can approach the capacity of the block transmission system using (independent instances of) the same (Gaussian) code for each element of the block. APPENDIX I ALGORITHM FOR LEMMA 1 To state the algorithm succinctly, we make the following definitions: g = M k=1 γ 2 k 1/M ; [S] ·k denotes the kth column of S and s ℓk denotes its elements; Z k denotes the first k columns of S and Z ⊥ k denotes its orthogonal complement; For convenience, we assume that the elements of Γ are arranged in non-increasing order. The algorithm proceeds as follows: 1) Initialization: Set k = 1. An explicit solution for the first column of S is Fig. 6. Average BER performance of the ZF-BDFD for the various precoders and the linear ZF detector with its optimal precoder in the narrowband multiple antenna scenario in Section V-B with 3 transmitter antennas and 4 receiver antennas. The legend is the same as that in Figure 5. APPENDIX II ANALYTIC PERFORMANCE COMPARISONS It was shown in Section IV that the precoders designed in Section III achieve the minimized value of the lower bound onP e ; c.f., (37) and (40). Therefore, the relative BER performance of the optimized ZF-BDFD and MMSE-BDFD systems in the absence of error propagation can be determined by simply comparing the optimal values of the MSE,ē 2 = tr(R ee )/M . (A preliminary version of this appendix appeared in [33], and related results on the MSEs of conventional decision feedback equalizers appear in [2,Chapter 8].) In order to ensure that the ZF systems exist, we will assume that rank(H) ≥ M , and to simplify the comparisons, we will also assume that the transmitted power p 0 is large enough that q = M in (28) for the MMSE-BDFD and ℓ = M in (44) for the linear MMSE detector. 15 Proposition 1 states that the SinceΛ M is positive definite,ē 2 OPT-MMSE-BDFD <ē 2 OPT-ZF-BDFD , and hence, in the absence of error propagation, the optimized MMSE-BDFD system will provide a lower BER than the optimized ZF-BDFD system. While it is intuitively obvious that for a given precoder, the MMSE-BDFD will provide a lower MSE than the ZF-BDFD, in the case of optimized precoders, this lower MSE leads directly to a lower BER. The analysis of Section IV remains valid for systems with linear detectors, so long as the constraint B = 0 is enforced. Therefore, we can compare the BER performance of an optimized BDFD system with that of the system that is optimized for the corresponding linear detector by simply comparing their minimum MSEs. The minimum MSE of a system with a linear ZF detector is [12] where we have used the trace-determinant inequality (15). Therefore, in the absence of error propagation the optimized system for the ZF-BDFD will provide a lower BER than the optimized system for the linear ZF detector. Similarly, the Fig. 8. Average BER performance of the MMSE-BDFD for the various precoders and the linear MMSE detector with its optimal precoder in the narrowband multiple antenna scenario in Section V-B with 3 transmitter antennas, 4 receiver antennas. The legend is the same as that in Figure 7. minimum MSE of a system with a linear MMSE detector is [6], [36] and hence the optimized system for the MMSE-BDFD provides a lower BER than the optimized system for the linear MMSE detector. 
As observed in [6], ē²_OPT-MMSE-L ≤ ē²_OPT-ZF-L, and hence the optimized system for the linear MMSE detector provides a lower BER than the optimized system for the linear ZF detector.
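The per-precoder ordering of the mean-square errors that drives these BER comparisons can also be checked numerically. The following sketch is an illustration under assumed dimensions and a placeholder unit-power precoder, not the optimized designs of Section III; it only verifies that, for a fixed channel and precoder, the linear MMSE detector never has a larger MSE than the linear ZF detector.

```python
# Numerical illustration of the linear ZF vs. linear MMSE MSE ordering for a
# fixed channel H and a fixed (placeholder) precoder F with unit-variance symbols.
import numpy as np

rng = np.random.default_rng(0)
P, N, M = 4, 3, 3            # receive antennas, transmit antennas, symbols per block (assumed)
sigma2 = 0.1                 # noise variance (assumed)
H = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
F = np.eye(N, M) / np.sqrt(M)          # placeholder unit-power precoder, not an optimized one
A = H @ F                              # effective channel seen by the detector

G = A.conj().T @ A
mse_zf   = sigma2 * np.trace(np.linalg.inv(G)).real / M            # ZF error covariance: sigma2 * (A^H A)^{-1}
mse_mmse = np.trace(np.linalg.inv(np.eye(M) + G / sigma2)).real / M  # MMSE error covariance: (I + A^H A / sigma2)^{-1}

print(f"linear ZF   MSE per symbol: {mse_zf:.4f}")
print(f"linear MMSE MSE per symbol: {mse_mmse:.4f}")   # always <= the ZF value
```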
2014-10-01T00:00:00.000Z
2005-01-01T00:00:00.000
{ "year": 2005, "sha1": "ed2385d5ad54d55436f55007572431adc6ef6438", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cs/0504015", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ed2385d5ad54d55436f55007572431adc6ef6438", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
257767289
pes2o/s2orc
v3-fos-license
Refined Majorana phase diagram in topological insulator-superconductor hybrid system The edge state of the topological insulator coupled to a superconductor system is able to simulate the Majorana fermion in zero energy mode since the Kitaev-type pairing is induced by exchanging quasi-excitations in electron tunneling. However, the present study has revealed that this physical simulation is not valid for a larger surface gap, which is the energy gap of the insulator's surface states. To address this issue, a refined pairing term that depends on the surface gap has been obtained as a second-order effect of the proximity effect, whereas the lowest order produces a constant pairing strength. By carefully considering the dependence of pairing strength on the surface gap, the Majorana phase diagram is re-achieved and a significant difference from previous work is observed, where the pairing strength was assumed to be independent of the surface gap and resulted in a conical phase boundary. I. INTRODUCTION The zero-energy state of quasi-excitation in a hybrid system, known as Majorana zero mode, has been used to physically simulate the Majorana fermion [1][2][3][4][5][6][7][8][9][10]. Over the last two decades, two common proposals for simulating Majorana fermions have been developed based on hybrid condensed matter systems: the topological insulators (TI) [11][12][13][14][15][16] in proximity to an s-wave superconductor (TI-SC) [3,6] and the nanowire with spin-orbit coupling in contact with an s-wave superconductor [4,5]. A Kitaev-type pairing term on the surface of TI or nanowire can be induced by the proximity effect [17], which is a virtual process of exchanging the quasi-excitation in the superconductor. In previous theoretical studies [3,6,9,18], the Majorana zero mode was predicted when the Kitaev-type pairing was always induced by the proximity effect in the lowest order. Therefore, whether the pairing can be effectively induced to lead to a topological phase transition depends on certain conditions. Moreover, there have been controversies about the existence of chiral Majorana fermion modes in some experiments, including a retracted paper [19] that supported the lower-order theory, but was later disproved by another experiment [20]. To resolve these issues, a more precise low-energy effective theory with higher-order proximity effect was considered by the conventional Fröhlich-Nakajima (Schrieffer-Wolff) transformation [21][22][23][24] to refine the phase diagram.
Our study finds that the pairing strength depends on the surface gap m, which is the energy gap of the surface states of the insulator, in comparison with the pairing strength that was previously thought to be independent of m. We take into account the surface gap dependence and achieve a closed topological phase diagram in m−µ−∆ space, where µ represents the chemical potential and ∆ represents the constant pairing strength. This closed phase diagram is different from the previous open phase diagram of the conic shape [6]. Lastly, we note that the pairing strength becomes divergent as the magnitude of surface gap |m| approaches the superconducting gap. * suncp@gscaep.ac.cn II. LOW-ENERGY EFFECTIVE HAMILTONIAN FOR TI-SC SYSTEM Unlike ordinary insulators, topological insulators (TIs) exhibit surface states in the vicinity of the Fermi level, which can be described by the two-dimensional massless Dirac Hamilto- [16,25,26]. Here, v denotes the Fermi velocity and σ x , σ y are the Pauli matrices. By doping the TI material with magnetic elements such as Fe or Cr, a mass term mσ z can be introduced [27], which opens a band gap of 2m for the surface state. The magnitude of the mass m (hereafter referred to as the surface gap) depends on the magnetic ordering structure and can be tuned by an external magnetic field. In reality, the low-energy physics of TI thin films is more accurately described by a four-band model, which includes tunneling between the opposite surfaces [9,27]. For the sake of clarity, we employ a reduced two-band Hamiltonian that captures the essential features of TIs: H TI = d 2 k/(2π) 2 ϕ † k · H k · ϕ k [6,18], where the Hamiltonian matrix is given by Here, µ is the chemical potential and m 1 k 2 := m 1 (k 2 x +k 2 y ) is the parabolic band component, which is crucial for determining the topological properties [6]. The TI thin film is placed in contact with an s-wave superconductor, which is described by the BCS Hamiltonian where c K = [c K↑ , c † −K↓ ] T represents the Nambu spinor, ∆ s is the superconducting gap, and s = K 2 /(2m s ) − µ s is the the kinetic energy of electron above the Fermi level µ s . The interaction between the TI surface and the superconductor is modeled by the electron tunneling Hamiltonian: Here, we have assumed that the momentum paralleled to the surface of TIs (k = K // ≡ (k x , k y ), K ⊥ ≡ k z ) and the spin are conserved during the electron tunneling process. To eliminate virtual processes in the electron exchange between the TI surface and the superconductor, we apply the Fröhlich-Nakajima (Schrieffer-Wolff) transformation, resulting in an effective low-energy Hamiltonian for the TI-SC system (see Here, the pairing strength∆ induced by the SC proximity effect is∆ where ∆: = J 2 m s /(2µ s ) is the constant pairing strength. Notably, the pairing strength∆ depends on the ratio of the surface gap and the superconducting gap, i.e., m/∆ s . Moreover, there is an overall correction term proportional to∆/∆ s for original TI Hamiltonian. As a result, the renormalized surface gap and the chemical potential of TI become, respectively, This parametric dependence of the pairing strength∆ on the surface gap m leads to a significant change in the topological phase diagram, as will be discussed in detail in the next section. III. 
CHERN NUMBER AND PHASE DIAGRAM Typically, the parameter space is partitioned into distinct regions based on a topological invariant, such as the Chern number, to obtain the phase diagram with the topological phases located in those regions. In the case of the hybrid system described by the low-energy effective Hamiltonian H_eff, the Chern number can be evaluated as follows: It follows from Eq. (7) that the topological phase transition occurs at the condition m̃² = ∆̃² + µ̃², which appears identical to the earlier result m² = ∆² + µ² in refs. [6,18]. However, the renormalized surface gap m̃, the chemical potential µ̃ and the induced pairing ∆̃ all depend on the surface gap m, in contrast to the earlier results, where the three parameters are independent of each other. By combining Eqs. (5, 6, 7), we can obtain the phase diagram of the TI-SC system in m−µ−∆ space, as presented in Fig. 1(a). The phase boundary exhibits a double-peak structure that divides the phase space into three regions with distinct Chern numbers. The left peak (m < 0) has a Chern number of N = 2, while the right peak has N = 0. Outside of the peaks, the Chern number is N = 1. Next, we compare the phase boundary for µ = 0, as illustrated in Fig. 1(b), with previous work on the m − ∆ phase diagram, which has a linear phase boundary described by ∆ = ±m [6]. Additionally, we present the phase boundary when ∆ = 0.125∆_s in Fig. 1(c). It is worth noting that as the constant pairing ∆ increases, the regions that correspond to N = 2 and N = 0 become smaller, eventually shrinking to a point when ∆ = 0.30∆_s, which corresponds to the maximum value of ∆ on the phase boundary curve in Fig. 1(b). Beyond this value (i.e., when ∆ > 0.30∆_s), the Chern number is always 1, and no phase transition occurs when m is adjusted. It is important to note that the pairing strength ∆̃ becomes divergent when the surface gap |m| approaches the superconducting gap ∆_s. Because of this divergent behavior, the boundary closes sharply when |m| is near ∆_s. The divergence indicates that the perturbation theory fails when |m| is approximately equal to or greater than ∆_s. In such cases, it remains an open question whether an effective Hamiltonian can be used to describe the hybrid system. When the surface gap is sufficiently small (|m| ≪ ∆_s), the pairing strength becomes constant, and the m − µ − ∆ phase diagram reverts to the conic form presented in previous research [6], resulting in a linear phase boundary for µ = 0. Meanwhile, the m−µ phase diagram returns to a parabolic curve, as described in previous studies of nanowire-superconductor systems [7] (the surface gap m in the TI system corresponds to the external magnetic field B in the nanowire system). The refined phase diagram shown in Fig. 1(b), in comparison with the previous one [6], reveals a smaller range of parameter values for the topological Majorana phase (Chern number N = 1). Therefore, it is crucial to compare the surface gap of TI materials with the superconducting gap in experiments aimed at simulating Majorana fermions. Currently, the superconducting gaps of commonly used materials for Majorana detection, such as Nb, Al, and NbSe2, are 1.5, 2.0, and 2.15 meV, respectively [19,20,28,29]. By contrast, the surface gap of TI thin films is typically larger than the superconducting gap; for example, a first-principles calculation of the Hall conductance for Fe-doped Bi2Se3 shows that the surface gaps for 3, 4, and 5 quintuple layers are 90, 42, and 21 meV, respectively [27].
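The Chern-number evaluation used above can be illustrated numerically with a standard lattice (Fukui-Hatsugai) computation. The sketch below uses a generic two-band model with a tunable mass in place of the full TI-SC Bogoliubov-de Gennes Hamiltonian, so the model form and parameter values are illustrative assumptions; only the numerical method mirrors the invariant used to partition the phase diagram.

```python
# Hedged illustration: lattice (Fukui-Hatsugai) Chern number for a generic
# two-band model whose mass term plays the role of the surface gap.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_k(kx, ky, m):
    """Generic two-band lattice Hamiltonian with tunable mass m (illustrative)."""
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern_number(m, nk=60):
    """Chern number of the lower band from discrete Berry fluxes on a k-grid."""
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    u = np.empty((nk, nk, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h_k(kx, ky, m))
            u[i, j] = v[:, 0]                      # lower-band eigenvector
    c = 0.0
    for i in range(nk):
        for j in range(nk):
            u00 = u[i, j]
            u10 = u[(i + 1) % nk, j]
            u11 = u[(i + 1) % nk, (j + 1) % nk]
            u01 = u[i, (j + 1) % nk]
            # product of link variables around one plaquette
            prod = (np.vdot(u00, u10) * np.vdot(u10, u11) *
                    np.vdot(u11, u01) * np.vdot(u01, u00))
            c += np.angle(prod)
    return round(c / (2 * np.pi))

for m in (-1.0, -3.0):
    print(f"m = {m:+.1f}  ->  Chern number {chern_number(m)}")
# The printed integer is nonzero for m = -1 and zero for m = -3: the integer
# changes when the gap closes, the same diagnostic that traces out the
# boundaries of the m-mu-Delta phase diagram discussed above.
```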
Moreover, in another experimental work using angleresolved photoemission spectroscopy, the measured surface gap of Bi 2 Se 3 doped with 1% Mn is 7 meV [30]. Both of these data suggest that the surface gap may exceed the superconducting gap in actual experiments, raising questions about the effectiveness of simulating Majorana fermions in these systems. IV. CONCLUSION After re-examining the preconditions of the effective Hamiltonian for a topological insulator (TI) in proximity to an s-wave superconductor, we have found that the pairing strength, denoted as∆, is dependent on the surface gap m of TI to a higher-order than previously assumed. By considering this higher-order proximity effect, we have achieved a refined topological phase diagram in the m − µ − ∆ space (µ represents the chemical potential and ∆ represents the constant pairing term), which is different from the conic shape phase boundary resulting from the lower-order approximation used in previous literature [6]. Moreover, we have found that the pairing strength∆ becomes divergent as |m| approaches the superconducting gap ∆ s . Therefore, the validity of the effective Hamiltonian and conductance signatures for Majorana fermion detection are only credible when |m|<∆ s , and the confidence of the results deteriorates as |m| is closer to ∆ s . Therefore, the Majorana fermion simulation for TI-SC system should be realized with a modest surface gap, |m|<∆ s . However, the existing experimental data suggests that the surface gap may be beyond the effective range given by our refined phase, and we are eagerly awaiting a response to this problem. Appendix A: The low-energy effective Hamiltonian of topological insulator-superconductor system In the appendix, we derive the low-energy effective Hamiltonian (4) for the topological insulator (TI) in proximity to an swave superconductor (SC) system by the Fröhlich-Nakajima (Schrieffer-Wolff) transformation. The total Hamiltonian for the TI-SC system includes three main terms: H = H TI + H SC + H T ≡ H 0 + H 1 . And the surface state of a magnetic TI thin film can be described by a two band model: with ϕ kσ annihilating an electron of momentum k and spin σ =↑, ↓. Here, m k = m + m 1 (k 2 x + k 2 y ) is the mass term with the surface gap m, µ is the chemical potential and v is the Fermi velocity. The s-wave SC providing the superconducting proximity effect for the TI film is described by the BCS Hamiltonian (under self-consistent field approximation) with the superconducting gap ∆ s and the kinetic energy of electron s = K 2 /(2m s ) − µ s above the Fermi level µ s . The tunneling interaction between TI and SC by the contact plane z = 0 reads as: Here, J is the tunneling strength, and ϕ σ (x), c σ (X) are the inverse Fourier transforms of ϕ kσ , c Kσ : Notice that x, k are the two-dimensional component (parallel to TI surface) of X, K respectively. By the Fourier transformation, we can rewrite the tunneling Hamiltonian (A3) in momentum space: To describe the quasi-particles in SC, we introduce the Bogoliubov transformation as follows with tan 2θ K = ∆ s / s . Then the BCS Hamiltonian can be diagonalized as where E s = 2 s + ∆ 2 s is the energy spectrum of the quasi-partical in SC. Similarly, we rewrite the tunneling interaction (A5) with Bogoliubov quasi-particle operators η Kσ as Now, we apply the Fröhlich-Nakajima (Schrieffer-Wolff) transformation to eliminate the quasi-excitation of the SC. 
Treating H 1 as a perturbation term, we perform a canonical transformation e S to the total Hamiltonian: Moreover, we require that the transformed Hamiltonian has no first order term, i.e. [H 0 , S] + H 1 = 0, and the ansatz for the anti-Hermitian transformation S is set as: By satisfying the condition [H 0 , S] + H 1 = 0, the undetermined coefficients of the transformation S in Eq. (A10) can be obtained as When the surface gap m is not too large (compared to the superconducting gap ∆ s ) and the electron tunneling strength J is weak (i.e. |Π ± | J), the effective Hamiltonian of the TI-SC in second-order perturbation is further obtained as H eff = H 0 + 1 2 [H 1 , S], where the second-order term can be calculated as (A12) In the above calculation (A12), we have utilized the relations of the coefficients: and discarded the superconducting terms η † η † , η † η, ηη due to the decoupled hybird system in the second-order approximation. By substituting the coefficients in Eq. (A11) into Eq. (A12), the effective Hamiltonian of TI dressed by the superconducting proximity effect by is obtained as where the renormalized mass termm k , the chemical potentialμ, the Fermi velocity of TIṽ and the induced pairing terms with the same spin∆ k and the opposite spinΛ k are respectivelỹ with Π ≡ E 2 s − m 2 k − v 2 k 2 . In the above simplification of (A14), we have considered that the chemical potential is much smaller than the SC gap (i.e. µ ∆ s ), so the higher orders of µ/E s are ignored. Besides, the renormalized chemical potentialμ is obtained by considering the small mass term and the low-energy state of TI (m k , vk E s ) simultaneously. Finally, we need to finish the calculation of the two integrals that remain in Eq. (A14), one is:
2023-03-28T01:22:28.464Z
2023-03-26T00:00:00.000
{ "year": 2023, "sha1": "fa3e3cfe045a8869f618b392a6e818d09dca2b2e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "fa3e3cfe045a8869f618b392a6e818d09dca2b2e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
12577782
pes2o/s2orc
v3-fos-license
A Method in Security of Wireless Sensor Network based on Optimized Artificial immune system in Multi-Agent Environments Security in computer networks is one of the most interesting aspects of computer systems. It is typically represented by the initials CIA: confidentiality, integrity, and authentication or availability. Although, many access levels for data protection have been identified in computer networks, the intruders would still find lots of ways to harm sites and systems. The accommodation proceedings and the security supervision in the network systems, especially wireless sensor networks have been changed into a challenging point. One of the newest security algorithms for wireless sensor networks is Artificial Immune System (AIS) algorithm. Human lymphocytes play the main role in recognizing and destroying the unknown elements. In this article, we focus on the inspiration of these defective systems to guarantee the complications security using two algorithms; the first algorithms proposed to distinguish self-nodes from non-self ones by the related factors and the second one is to eliminate the enemy node danger.The results showed a high rate success and good rate of detecting for unknown object; it could present the best nodes with high affinity and fitness to be selected to confront the unknown agents. Introduction Security in wireless sensor network is important to insure the safety of exchanged information. Lots of techniques have been proposed through Media Access Control (MAC) protocols in computer networks 1 for intelligent selection of sensors 2 , headship, and control to increase the data security level. Some algorithm like distributed track finder algorithms (GGWRR2: greedy weighted algorithms) 3 try to find the constant signal field (error) in network through a decoding algorithm including a set of compressed nodes. Computer access protocols (i.e., S-MAC, one of the most important ones) also controls and influences through determining which one of the nodes can access the physical layer (media). The purpose of reducing transfers that occurs at the same time during network access can cause security breach. Artificial immune system algorithms 4 are among of the security methods that draw a lot of attentions by research community. These algorithms inspired by the work of our body's natural immune system tries to recognize unknown agents from self nodes, transmitting signals about intrusion in self nodes, and finally confront these agents to eliminate the danger by ending its power source. Just like natural immune system fighting pathogens, artificial immune systems can be used to protect computer from threats 4,5 . However, natural immune system has other functions such as minimizing the threat to body and making sure that human body system can perform its essential duties normally. The challenge faced by the immune system, when a pathogen inters the body, is to recognize and destroy pathogens (self or non-self) while recognizing self antigens and body molecules 6 . Artificial immune algorithms can be used in broad cases such as pattern recognition and classification 6 , and optimization analysis and inquiry. Computer security can be considered as the most important function of artificial immune algorithms 7 . In this paper, we present the review of some existing immune systems and the way of making action when confronting enemy agents in wireless sensor networks. 
We will also discuss representing necessary and practical algorithms for intrusion detection and confronting the intruder by an exclusive method using the available agents in the networks and representing a software simulation. bacteria and viruses. Human immune system has the ability to actively recognize and consecutive destroy pathogens. There are firm difficult divisions between body system parts that form innate and adaptive immunity 3 . Natural immune system is known as a multilayer immune system including organic barriers, innate immunity, and adaptive immunity. Organic Barrier: The first inner layer of human body consists of skin and muscular organ level. Skin provides the first line of defense to invader agents including bacteria, viruses and fungi, by providing both physical and chemical barriers 8 . Innate immunity: Innate immunity (figure 1) or unspecified immunity is the first line of defense against the aggressive agents. It comprises some physical and cellular barriers and mechanisms that provide immediate non-specific defense to the infectious agents. Figure-1 Innate immunity parts in Artificial immine system Physical barriers: Epithelial surfaces provide an impermeable physical barrier to a verity of infectious organisms. Peristalsis and cilia eliminate external particles from the respiratory and gastrointestinal tracts; mucus traps bacteria and viruses; adhered bacteria to the skin epithelia can be removed by desquamation. The phagocyte cells barrier: Some different specific cells like phagocytes, neutrophills and natural killer cells can devour and digest all kind of pathogenic micro-organisms. Quick response: Active phagocyte tissues produce a variety of signaling compounds named cytokines. These molecules act like hormonal harbingers that send a fiery and firm response to protective organs in the body. i. Adaptive Immunity. Adaptive immunity, also known as specified or acquired immunity, is a part of immune system composed of specialized immune cells which can clearly recognize and destroy pathogenic microorganisms. Self and Non-self recognition: The immune system can distinguish between self and non-self cells and response only to non-self molecules. Immunity memory cells: After recovering from a disease, memory cells are produced from B-cells. Second immune response will be faster and much stronger, when the immune system facing again to the same infectious agent 8 . The consisting elements of human natural body immune system have been shown in (figure 2). Figure-2 The consistin elements of human immune system The immune system is responsible for running a fast response when recognizing a harmful potential pathogen. This process is possible by recognizing some special molecules on the cell surface of pathogens followed by surrounding and destruction of the pathogens 3 . The noticeable point in adaptive immune system would be the white cells or lymphocytes which are divided in two classifications, B-cells and T-cells. Cytotoxic T-cells and T helper cells (Th cells) are two subsets of T-cells. Th cells, also known as CD + T-cells, can recognize exogenous antigens when they are presented with MHC (Major Histo-compatability complex) class II molecules on the cell surface of a APC (antigen-presenting cells) (figure 3). It has been shown that during a selection process, available T-cells in thymos take the ability of recognition of self cells from non-self. 
During this process, T helper cells with the affinity of MHC molecules will be placed in positive group; during a negative selection, the cells which are compatible with the self proteins with be eliminated. The remained mature cells have a close affinity to the MHC protein type I and II, however, they have no affinity to the self proteins 2 . In contrast, B-cells can recognize the pathogens using specific receptors for antigens named antibody. Specific antigenantibody binding stimulate the B-cells to up take the pathogen during a process called phagocytosis and destroy it by proteolysis enzymes 1 . Together with MHC class II molecules, the process antigens are displayed on the surface of B-cells, where they can be recognized by T helper cells. Once a specific antigen expose to the immune system again, the memory B-cells response result in an immediate and fast reactions without any need to clonal selection or affinity maturity. The details of affinity to antigen are shown in (figure 4). Figure-4 The affinity maturity process chart in antibodies Generally, the observed theory and functions from the immunization of artificial immune system have been inspired by the natural immune system principles; these models are used in wide and complicated range of subjects 3 . Although, clonal selection and negative selection are the important part of this system, one of the other algorithms which may have a very high influence on the security of wireless sensor network is an algorithm that first determines the node getting nearer or farther from the network. Thereafter, it determine whether the node is self or non-self and eventually, in case of node being self or non-self, determines the cope kind against the aggressive node. This algorithm is a combination of greedy algorithm and Artificial Immune System (AIS). Negative and clonal selections are accomplished after distinguishing self nodes from non-self nodes to select the best antibodies with the most affinity to the non-self agent. To recognize which nodes are friends and which are enemies, we need to find an algorithm which can recognize chaos and intrusion pattern in non-self (enemy) agents. These nodes can be confronted by adapting suitable techniques, so we will talk briefly about intrusion detection and then represent an algorithm for recognition of the enemy agents. Intrusion Detection Every kind of anomaly (abnormal activity and movements) or incorrect use of computer systems and their source are called intrusion. Intrusion has been divided to the following major categories: i. Computer misuse: forbidden activities by allowed users. ii. Recognition: defining systems or services which may be used in an incorrect way. iii. Intention of intrusion: the forbidden activities to access calculated sources. iv. Intrusion (penetrate): successful access to calculated sources by forbidden users. v. Trojan attacks: the forbidden (unauthorized) process appearance and activities. vi. Denial of access to a service: an attack which result to choke the calculated sources access. As shown in figure 4, intrusion detection can be seen in two type, anomaly and misuse.Anomaly consists of blocking the network traffic by or toward the host. Hacking changing or erasing the files contents and a system processes results in decreasing the system competency.The misuse is kind of intrusion which intruder tries to use the final user as a back door to upload information and files from the user's computer and also to spread viruses and computer worms. 
Anomaly detection works by recording the user activities and making profiles from the normal and abnormal behavior of the host network and comparing them to find the intrusion. Whereas,in misuse detection,patterns from MLI (Mark Left by Intrusion) are determined and subsequently the intrusion is recognized by comparing these patterns with the intrusion patterns which were prepared before 9 (figure 5). Figure-5 Two different types of intrusion and the confronting method Intrusion/anomaly detection has been turned to an important part of computer security by making a new defensive layer against the misuse (abuse) of computer subsequent to the physical layer (like firewall), the identity confirmation layer and access control layer. This layers act in a way that immediately after a node entering our system acceptable limits, the physical layer start working first and closes all the system related entrance and exit for this node. Then, the identity confirmation layer compares the node identity to recognize whether it belongs to this system or it has the membership abilities in the system. In fact, the identity confirmation layer duty is to confirm the identity of related node according to a set of correctness recognition algorithms. If the identity confirmation is done successfully, it means that the considered node can enter the system and acts with the system agents by exerting of a set of access control and limits in access control layer. When the identity confirmation is not successful, and all the node paths to the system will be closed; all the transaction with the system agents will be destroyed; and all of the sources that exchanged in this direction will be thrown away. Anomaly / intrusion detection has the ability to control every node which passed the 3 above layers and entered the system and recognize it, in case of observing abnormal movements or the intention of intrusion in these nodes. Energy in wireless sensor network is very important because the mobile networks are energy-dependant networks 10 and therefore, the anomaly/intrusion detection confronts these nodes by actions like informing other agents and energy reduction of them. The security activities in this layer are done in a way that the security system monitors the entire unique path which are produced the computer user system and programs. It uses a set of different statistical analysis during this supervision to derive the order in this agent's behavioral patterns. This idea originates from the point which the aggressive and invader agent's behavior is considerably different from the legal user's behavior and the detection of this behavior is possible through monitoring these unique tracks. In contrast with the natural human body system, the danger and threat level is very significant and highly common in computer and computer calculations and this danger level may be increased surprisingly because of some factors like element's destruction and submersible actions 10 . Adavanced algorithm for the detection of node getting nearer or farther and being self or non-self (node filtering): In wireless sensor networks, especially in mobile networks, the nodes getting nearer to our network or their decision to leave the network radiance is very important. Because the exchanged information in WSN can be very crucial, compromising this information can result in changes in battlefield scene (in crucial situations like electronic wars) and changing the winner to a loser or vice -versa. 
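As a minimal illustration of the profile-based anomaly detection idea sketched above, and not of the system implemented in this work, the following fragment builds a statistical profile of normal node behaviour from an assumed packet-rate feature and flags intervals that deviate from it.

```python
# Minimal sketch of profile-based anomaly detection: learn a statistical profile
# of "normal" node behaviour, then flag observations that deviate too far from it.
import numpy as np

rng = np.random.default_rng(1)

# Assumed feature: packets sent per monitoring interval by a benign node.
normal_profile = rng.poisson(lam=20, size=500)        # training data from normal activity
mu, sigma = normal_profile.mean(), normal_profile.std()

def is_anomalous(observation, threshold=3.0):
    """Flag an interval whose packet count deviates more than `threshold` standard deviations."""
    return abs(observation - mu) / sigma > threshold

for count in (22, 19, 55):        # the last value mimics a flooding / intrusion attempt
    print(count, "->", "ANOMALY" if is_anomalous(count) else "normal")
```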
Therefore, presenting an algorithm which can start recognizing the dangerous agents once they enter and then recognizing their types (self or non-self) and finally destroying them from the field may have an effective role in the security of our networks. Since, this algorithm has been inspired by human immune system, knowing the relation between human immune system parts which are directly involved with pathogens and the artificial immune system parts in wireless sensor networks can be useful for a better competence of algorithm, as it shown in table 1. For example, the sent and received string bits in artificial immune system of receivers have been inspired by natural immune system or the lymphocyte T-cells and B-cells are the detector nodes in artificial immune system. In node filtering algorithm, every node is categorized based on its getting closer or farther and authorized or threat in a wireless sensor network (Algorithm 1). Although this network contains some sensor which each of them prepares just one bit information, a model is used here to determine whether the target is getting closer or farther. In this algorithm, distribution is estimated by random disconnected measurements (see formula 1) defined by nodes and their based weights. An approximation of probability distribution function is derived based on random sampling in the first stage. In the second stage, weights are calculated and normalized for each node based on the math model of formula 1 and in the final stage, a new set of nodes are also called to recognize the unknown agents and to determine whether the node are self or non-self. This node is very important to recognize the nodes which are getting closer or getting farther or suspicious, because if a node which defined as a new node enters the limit of system is suspicious, it will show every behavior except the two predicted situations while identifying by networks agents. Fourmula-1 Calculation probablity distribution function and normalizing the node weight according to the node getting nearer or farther Scan all sensors; Select a specified node; M, N; C (a); Closer node C (b); farther node While m=1 do m=m+1 Repeat Create an initial random set of antibodies, A for all patterns in S do Determine the affinity with each antibody in A Generate clones of a subset of the antibodies in A with the highest affinity The number of clones for an antibody is proportional to its affinity mutate attributes of these clones to the set A, and place copy of the highest affinity antibodies in A into the memory set ,M Replace the n lowest affinity antibodies in A with new randomly generated antibodies End Algorithm-3 clonal selection algorithm All the found nodes are first entitled in this security algorithm where a number is specialized to each node. In the next stage, it checks what status the node is in and whether it is getting closer or father to the network (it means if the nodes are entering are getting out of area of scanned nodes). After determining the node status, the system determines whether the node shows a suitable behavior (Good intention) or not from itself, The information sent to the base station from sensor can be another node or access point which comprised of two parts; the first part is sensor code that is able to localize the information sent by the sensor; the second part contains a bit of data which indicates nodes getting closer or farther when sampling a sensor. 
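For concreteness, the clonal-selection pseudocode above (Algorithm-3) can be rendered as runnable Python. The affinity measure, mutation schedule and population sizes below are illustrative assumptions, since the pseudocode leaves them implementation-defined.

```python
# Runnable rendering of the clonal-selection pseudocode (Algorithm-3) above.
# Affinity function, mutation step and population sizes are assumed for illustration.
import numpy as np

rng = np.random.default_rng(2)

def affinity(antibody, pattern):
    # Higher when the antibody is closer to the pattern (assumed Euclidean affinity).
    return 1.0 / (1.0 + np.linalg.norm(antibody - pattern))

def clonal_selection(patterns, n_antibodies=50, n_generations=20,
                     clone_factor=5, n_replace=5, dim=2):
    antibodies = rng.random((n_antibodies, dim))      # initial random antibody set A
    memory = []                                       # memory set M
    for _ in range(n_generations):
        for pattern in patterns:
            aff = np.array([affinity(a, pattern) for a in antibodies])
            best = np.argsort(aff)[::-1][:10]         # subset with the highest affinity
            clones = []
            for idx in best:
                # number of clones proportional to affinity
                n_clones = int(np.ceil(clone_factor * aff[idx] / aff[best].max()))
                for _ in range(n_clones):
                    # hypermutation: mutation size shrinks as affinity grows
                    clone = antibodies[idx] + rng.normal(0, 0.1 / (1 + aff[idx]), dim)
                    clones.append(np.clip(clone, 0, 1))
            antibodies = np.vstack([antibodies, clones])
            aff = np.array([affinity(a, pattern) for a in antibodies])
            order = np.argsort(aff)[::-1]
            memory.append(antibodies[order[0]].copy())      # copy best matcher to memory
            antibodies = antibodies[order[:n_antibodies]]   # keep the fittest
            antibodies[-n_replace:] = rng.random((n_replace, dim))  # replace the weakest
    return antibodies, memory

# Example: "non-self" patterns the detectors should converge towards.
detectors, mem = clonal_selection(patterns=[np.array([0.8, 0.2]), np.array([0.3, 0.9])])
print("best memory detector:", mem[-1])
```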
Thus, it checks for any new enterer node whether it has the valid bits and error code bit (parity bit) or not. If the node is qualified, the next stage of node testing begins ( figure 6). Fake packages with un-crucial information will be sent for a new node. Paying attention to this point is necessary that the enemy node has only the duty of collecting and sending data to enemy base station and it doesn't matter what kind of information they are. If this new node sends the new received information to the base, the system would identify this node as enemy and put a suitable way to confront it in its schedule (figure 7). In this situation, using the CRC method to correct errors in parity bit can effectively help in confirming the validity of error code. Adding the validity confirmation stage can result in monitoring and tracking nodes with high accuracy, otherwise the information maybe received inversely to a base computer on our side in an electronic war 11,12 . Negative Selection: The purpose of negative selection is to make a tolerance for self cells. It is to recognize unknown antigen by the immune system without showing any reaction toward the self cells. During the production of T-cells, receivers are made during a random rearranging process of random genetic. Then, they are exposed to a supervision procedure in thymus called negative selection. The T-cells which reacts toward the self protein would be destroyed, therefore only the cells which do not glue to the self protein are allowed to exit thymus. These mature T-cells patrol in whole body to perform the protective and securitize duties against foreign aggressive anti-gens. The related algorithm to negative selection for WSN is shown in the algorithm 2. In fact, the negative selection presents another alternative way to recognize patterns. The purpose of pattern recognition is to find the pattern and useful relation from the information in common perspective. This pattern is recognized for saved information about them (patterns) and is considered the difference between these patterns. The complement set of information about the patterns which are saved in special place in this point of view is recognized as available patterns with this related information 8 . The negative selection focuses on anomaly detection problems like intrusion detection in computers and networks (wireless sensor network). As it was discussed before, the purpose of negative selection in natural immune system is self cells tolerance testing or tolerance outset. The T-cells which are in combination with MHC and self amino acids will not be able to finish this test successfully. Virus finding is performed by monitoring the changes in protected and program files by the negative selection algorithm. This algorithm has a lot of advantages in comparison to change monitoring and recognition technique in other methods. The agents that are used in this system are as follow 5,10 (figure 8). i. Supervisor agents, ii. Connector agents, iii. Decided agents which is divided to 3 groups, iv. Helper agents, v. Destroyer agents, vi. Protective agents. Figure-8 System Agents Our system can be located according to the network area situation in the below conditions while the above agents are active: i. Sensing mode: The information sources are monitored in the network area by detection system and are usually used in primary stages to find the abnormal modes. ii. Recognition mode: It makes the suitable decision when a dangerous agent is recognized based on the defensive policies. 
iii. Response mode: The detection system put it in performance stage by one of the destroyer, helper or pre-emptive factors after deciding about suitable operations to destroy danger-maker agents 10 (figure 9 and 10). Clonal Selection Mechanism In clonal selection, the cells which recognize the anti-gen duplication will be selected. The basic characteristics of clonal selection are: i. The new cells are a copy of their parent's cells (reproduction) which have been exposed to a very high speed maturity mechanism. ii. Destroying the recent changed selfreaction lymphocytes. iii. Changing and duplicating while the mature cells confronting anti-gens. Antibody-antigen reaction can stimulate B-cells to reproduce its own antibodies. This maturity would be very fast, usually about 1 maturity per 1 cell division. This maturity level lets the immune system to have a fast reaction against antigens. The artificial immune system work method related shown in algorithm 3. Responding method to non-self agents Threat: In artificial immune system, non-self agents such as enemy agents and spy nodes are detected by scanning all the available agents and nodes in the widthwise and linear operation area. This called Local scan because the connection level (affinity) of all the nodes toward an unknown node is measured and the node with the most affinity will be recognized with a put in the head of the confronting team toward the unknown node. This agent (detector) sends messages to close-by nodes to gather around the unknown node. The close nodes will gather around the unknown node once these messages are sent and received by the nearby agents. This action is exactly like cloning (reproduction) in the natural immune system, however, because there is no reproduction is expressed in nodes, the system portraits reproduction play by recalling the nearby node. In confrontation stage against the unknown node, a new innovation was created where ending non-self node energy source technique is used to confront and defeat the node in a way that all the neighbor nodes of the unknown node will send fake packages of information to the unknown node with the purpose of destroying these nodes energy sources. As it was mentioned before, only taking these packages and sending them to enemy base is the duty of these nodes. Gathered agents around the unknown node continue sending fake packages to this node until its energy would be finished; decreasing the node energy to the lowest level shows successful defeating and destruction of the node. In some more complicated systems, there is also a counter-attack after this stage in which a self node misrepresents itself as a non-self node in the system; in other words, it shadows the destructed nodes IP in itself and plays the role of spy node and actually changes the roles in an electronic war (figure 11). Figure-11 How to confront non-self agents The operational examination of negative and clonal selection algorithm in a functional and opertational area: The algorithm which we got familiar with in section 4 of this paper and executed in an operation area and parameters which we used in clonal and negative level are in a way that by using Artificial immune system it will select the most fitness in operational area with the number of antibodies (detectors) of 50, the reproduction (node's recalling) of 20, maturity level (tolerance level) of 80 for antibodies and maximum genes (operational nodes) of 600. 
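The confrontation strategy illustrated in figure 11, in which the highest-affinity detector recruits its neighbours and the team floods the non-self node with fake packets until its energy source is exhausted, can be sketched as a small simulation. All numeric values below (energy budget, per-packet cost, team size) are assumptions chosen only for illustration.

```python
# Illustrative simulation of the energy-depletion confrontation described above.
import numpy as np

rng = np.random.default_rng(3)

class Node:
    def __init__(self, pos):
        self.pos = np.asarray(pos, dtype=float)

def affinity(node, target):
    # Assumed affinity: inverse distance to the unknown node.
    return 1.0 / (1.0 + np.linalg.norm(node.pos - target.pos))

self_nodes = [Node(rng.random(2)) for _ in range(20)]
intruder = Node([0.5, 0.5])
intruder_energy = 100.0                      # arbitrary energy units (assumed)
COST_PER_RECEIVED_PACKET = 0.05              # assumed energy cost of receiving one fake packet

# The detector with the highest affinity leads; its neighbours are recruited
# (the "cloning by recall" step described in the text).
ranked = sorted(self_nodes, key=lambda n: affinity(n, intruder), reverse=True)
team = ranked[:5]

rounds = 0
while intruder_energy > 0:
    for _ in team:
        intruder_energy -= COST_PER_RECEIVED_PACKET   # one fake packet per team node per round
    rounds += 1

print(f"intruder energy depleted after {rounds} rounds of fake packets")
```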
The fitness was measured using the following equation: Fitness = (15 × x × y × (1 − x) × (1 − y) × sin(9 × π × x) × sin(9 × π × y))², where x and y represent the widthwise and linear locations of the scanned nodes in the two-dimensional operational area, respectively. A fitness of 0.8789 was obtained with the above assumed settings; however, a better result can be reached by changing parameters such as the number of antibodies and the maximum number of genes (figure 12). Figure-12 The resulting fitness function over the operational area of the provided software. Here, X and Y are the widthwise and linear coordinates of the nodes found by the detector nodes in the scanned operational area, and node fitness was computed from their affinity. A three-dimensional chart of the software output shows the nodes with the highest fitness at the highest peaks of the chart. The figure was produced from the assumed settings in the mathematical software. Red peaks at the highest level indicate the best fitness in our experiment, and blue peaks at the lowest level indicate the worst fitness. Conclusion In confronting aggressive agents, traditional immune systems have several problems, such as the delay between recognition and action, the inability to recognize and act on unknown objects until they exhibit unreasonable behavior, the lack of learning, and the lack of communication. In contrast, modern systems, and especially the artificial-immune-system-based approach discussed in this article, do not suffer from these problems. Furthermore, because the approach is inspired by the human immune system and has the ability to learn, it confronts aggressive agents very effectively. The system was shown to have a high success rate and a good detection rate for unknown objects; it could present the best nodes, with high affinity and fitness, to be selected to confront the unknown agents.
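As a check of the peak fitness value of 0.8789 quoted above, the stated formula can be evaluated directly over the unit square; the grid evaluation below is an illustration and not the authors' software.

```python
# Direct evaluation of the quoted fitness function over the unit square.
import numpy as np

def fitness(x, y):
    return (15 * x * y * (1 - x) * (1 - y)
            * np.sin(9 * np.pi * x) * np.sin(9 * np.pi * y)) ** 2

xs = np.linspace(0, 1, 401)
X, Y = np.meshgrid(xs, xs)
F = fitness(X, Y)

i, j = np.unravel_index(np.argmax(F), F.shape)
print(f"maximum fitness {F[i, j]:.4f} at (x, y) = ({X[i, j]:.3f}, {Y[i, j]:.3f})")
# The analytic peak at (0.5, 0.5) equals (15/16)^2 ~ 0.8789, matching the
# fitness value reported above.
```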
2015-08-07T16:32:10.000Z
2015-08-07T00:00:00.000
{ "year": 2015, "sha1": "f8c706d2da9a2186dd1754950d9df35ddd2d5f9f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f8c706d2da9a2186dd1754950d9df35ddd2d5f9f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
247353824
pes2o/s2orc
v3-fos-license
Epigenetic Activation of lncRNA MIR155HG Mediated by Promoter Hypomethylation and SP1 is Correlated with Immune Infiltration in Glioma Purpose The lncRNA MIR155 host gene (MIR155HG) plays a role in the progression of several malignant cancers. However, the specific mechanisms of MIR155HG in glioma progression have not been clearly established. The purpose of this study was to investigate the function of MIR155HG in glioma at the transcriptome level and relationship with immune infiltration. Patients and Methods Totally, 697 RNA-seq and 594 DNA methylation data were retrieved from The Cancer Genome Atlas (TCGA) dataset while 325 RNA-seq data were retrieved from the Chinese Glioma Genome Atlas (CGGA) dataset. The DNA methylation levels of MIR155HG CpG islands were assessed through bisulfite amplicon sequencing (BSAS). The regulatory mechanism of SP1 on MIR155HG was examined by chromatin immunoprecipitation (ChIP) and luciferase reporter assays. R language was used as the main tool for statistical analysis and graphical work. Results MIR155HG was predominantly expressed in the isocitrate dehydrogenase (IDH) wild-type as well as mesenchymal subtype gliomas. Promoter methylation levels of MIR155HG in glioblastoma (GBM) were remarkably decreased compared with those in lower-grade glioma (LGG). In addition, there were negative correlations between promoter methylation levels and MIR155HG expressions but positive correlations with patients’ overall survival. In vitro studies further revealed that MIR155HG expression was regulated by DNA promoter methylation and transcription factor (SP1) binding to the promoter. Moreover, there was a close association between MIR155HG expression and immune as well as stromal cell infiltrations, inflammatory activities, and immune checkpoints. Clinically, univariate and multivariate Cox analyses revealed that MIR155HG is an independent prognostic marker for glioma patients. Conclusion Our results established that MIR155HG is a potential biomarker for prognosis and an immunotherapeutic target in glioma. Introduction In adults, glioma is a very prevalent brain tumor of which glioblastoma (GBM) is the most aggressive, malignant subtype. 1 Improvements in conventional treatment options, including surgical resection, followed by radiotherapy as well as chemotherapy have not improved the median survival time for glioma patients, which is still < 2 years. 2 Given the poor outcome after standard treatment, it's urgent to develop new therapeutic approaches. Malignant solid tumor tissues are composed of tumor cells as well as tumor-associated nontumor cells, including normal epithelial, stromal, vascular, and immune cells. 3 The tumor microenvironment (TME) has a key role in glioma progression. 4,5 For instance, stromal cells within the TME may promote glioma expansion and invasion by elaborating numerous compounds that stimulate angiogenesis, neoplastic cell growth, and modify the extracellular matrix (ECM). 6 Meanwhile, tumor-associated macrophages (TAMs) could promote angiogenesis, maintain the glioma stem-like cell phenotypes, and secrete VEGF, PDGF, IL-6, TGFβ1, MMP2 and MMP9 to regulate ECM degradation, resulting in glioma progression. 7,8 Thus, the TMEtargeted therapies are promising novel approaches for glioma. Long non-coding RNAs (lncRNAs) are non-protein-coding transcripts whose nucleotide length exceeds 200. LncRNAs have critical, complex roles in cancer development as well as progression, including glioma. 
9,10 LncRNA MIR155HG is located at chromosome 21q21 and has a length of 1.5kb, which is also considered as the primary microRNA of miR-155. Previously, we have reported that MIR155HG is a prognostic factor for GBM and is involved in promoting the progression of glioma via miR-155/PCDHs/β-catenin pathway. 11 Recently, two other studies further confirmed that MIR155HG contributes to GBM growth and progression by sponging miR-185 to increase ANXA2 expression, and promotes temozolomide resistance by binding to PTBP1. 12,13 Additionally, MIR155HG was involved in a lncRNA signature related to ferroptosis, tumor progression, and microenvironment for stratifying the prognosis of glioma patients with adequate predictive performance. 14 Moreover, MIR155HG was identified as a m 6 A-related prognostic lncRNA for patients with primary glioblastoma by constructing the m 6 A-lncRNA co-expression networks. 15 Another research demonstrated that the MIR155HG/has-miR-129-5p/C1S axis is a potential marker and therapeutic target for glioblastoma. 16 MIR155HG also has a vital role in facilitating the malignancy of laryngeal squamous cell carcinoma (LSCC), 17 pancreatic cancer (PC), 18 clear cell renal cell carcinoma (ccRCC) 19 and non-small cell lung cancer (NSCLC). 20 LncRNA expression is epigenetically regulated. 21 Of all epigenetic modifications, DNA methylation induces heritable alterations in gene expression, but does not affect DNA sequences. It is well known that promoter hypermethylation has an association with transcriptional repression, while hypomethylation has an association with transcriptional activation. 22 Currently, whether and how MIR155HG is regulated by DNA methylation in glioma remains unclear. In addition, lncRNAs exert crucial functions in the immune and inflammatory responses. 23,24 For instance, in dendritic cells and macrophages, lncRNA-Cox2 activates the toll-like receptor (TLR) signaling pathway to increase IL-23 and IL-6 secretion after microbial infection. 25 Although the immunoregulatory role of miR-155 has been most extensively studied in many tumors, especially in hematologic tumors, 26,27 the research on the immunoregulation of MIR155HG is still in its infancy. Recently, a bioinformatic data analysis demonstrated that MIR155HG is correlated with immune checkpoints and infiltrations in various cancers, including glioma. 28 Nevertheless, the underlying function and mechanism of MIR155HG in tumor immunology still require further studies. In the present study, we combined experimental detection with bioinformatic analysis to further explore the MIR155HG expression and clinicopathologic characteristics and its correlation with immune infiltration in glioma. The results would give us a better understanding of MIR155HG and provide a molecular basis for anti-MIR155HG treatment in glioma immunotherapy. Study Participants and Samples We obtained 697 RNA-seq and 685 DNA methylation data, as well as related phenotypic data from The Cancer Genome Atlas (TCGA, (http://cancergenome.nih.gov/), and an additional dataset of 325 RNA-seq data from the Chinese Glioma Genome Atlas (CGGA, http://www.cgga.org.cn/). In addition, 10 tissue samples, including 5 lower grade glioma (LGG) samples as well as 5 GBM samples were obtained from the Affiliated Wuxi No.2 People's Hospital of Nanjing Medical University. This study was approved by Ethics Committee of the Affiliated Wuxi No.2 People's Hospital of Nanjing Medical University and informed consent was obtained from all the patients. 
This study was conducted in accordance with the Declaration of Helsinki. Cell Lines Human glioblastoma cell lines (U87, U251) and the human embryonic kidney (HEK) cell line (293T) were obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). The cell lines were routinely cultured in https://doi.org/10.2147/OTT.S349078 Plasmid Construction and Luciferase Reporter Assay To construct SP1 overexpression vector, the amplified coding region of the human SP1 gene was subcloned into a pcDNA3.1 vector. The MIR155HG proximal promoter region sequences containing the CpG islands (+400 bp to −400 bp) were amplified and inserted into a pGL3-basic luciferase vector (Promega), named pGL3-MIR155HG-WT. Generation of the mutated version (pGL3-MIR155HG-Mut) was achieved by mutating the putative binding sites of SP1. For in vitro DNA methylation, pGL3-MIR155HG-WT was treated with the CpG methylase M.SssI (NEB, Ipswich, MA). Then, 48 h post-transfection, the Dual Luciferase Reporter Assay System (Promega, Madison, WI) was used to assess luciferase activities. Statistical Analysis R language was used for statistical analyses and to generate figures with several publicly available packages, including survival, pheatmap, ggplot2, circlize, corrgrams, pROC, and corrplot. Comparisons of differences between groups were done by two-tailed Student's t-tests. Kaplan-Meier curves were used to calculate overall survival differences. The statistical significance threshold was set at p < 0.05. MIR155HG Expression Was Elevated in IDH Wild-Type Glioma The IDH mutation status is a clinically important molecular marker for glioma progression. In glioma, especially GBM, MIR155HG is highly upregulated. 11 In this study, we found that MIR155HG expressions were significantly increased in IDH wild-type glioma than that in IDH mutated glioma in both datasets ( Figure 1A and C). Additionally, IDH wild-type glioma showed remarkably elevated MIR155HG expressions than IDH mutated glioma across different grades 222 ( Figure 1B and D). Then, we constructed receiver operating characteristic (ROC) curves to assess the diagnostic values of MIR155HG for the IDH wild type glioma. Interestingly, the respective areas under the curve (AUCs) were 89.8% and 89.3% for TCGA as well as CGGA datasets ( Figure 1E and F). These findings suggested that MIR155HG is specifically highly expressed in IDH wild-type glioma. MIR155HG Expressions Correlated with Mesenchymal Subtype Next, we evaluated MIR155HG distributions in 4 TCGA-defined molecular subtypes. MIR155HG was apparently elevated in the mesenchymal subtype relative to other subtypes, except for the classical subtype in TCGA dataset (Figure 2A and B). To further confirm this, we performed ROC curves for MIR155HG expressions in the mesenchymal subtype. The respective AUCs were 85.9% and 84.6% for TCGA and CGGA datasets ( Figure 2C and D). Therefore, these results indicated that MIR155HG was mainly enriched in the mesenchymal subtype. MIR155HG Expression Was Associated with Promoter DNA Methylation To discover the genetic determinant of MIR155HG endogenous expression, we investigated the association between MIR155HG expression levels and its DNA methylation. Integration of TCGA glioma datasets identified that 594 patients had both MIR155HG expression and DNA methylation data. Particularly, six CpG sites (cg19769982, cg17265380, cg17297071, cg23433889,cg12749863 and cg14315558) were localized in the MIR155HG promoter region. 
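The ROC analyses reported above (AUCs of roughly 85-90%) were produced with R; an equivalent recipe in Python is sketched below on simulated expression values (the TCGA/CGGA data themselves are not reproduced here), so the group means and sample sizes are assumptions chosen only for illustration.

```python
# Hedged sketch of the ROC/AUC analysis described above, on simulated values.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(4)

# Simulated MIR155HG expression: higher on average in IDH wild-type tumours (assumed).
expr_idh_wt  = rng.normal(loc=6.0, scale=1.0, size=200)
expr_idh_mut = rng.normal(loc=4.0, scale=1.0, size=300)

expression = np.concatenate([expr_idh_wt, expr_idh_mut])
label      = np.concatenate([np.ones(200), np.zeros(300)])   # 1 = IDH wild-type

auc = roc_auc_score(label, expression)
fpr, tpr, thresholds = roc_curve(label, expression)
print(f"AUC for calling IDH wild-type from expression alone: {auc:.3f}")
```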
223 Interestingly, DNA methylation beta values at all the six sites were significantly higher in LGG than in GBM ( Figure 3A) and were remarkably elevated in the low MIR155HG expression group, relative to the high MIR155HG expression group ( Figure 3B). In addition, there was a negative association between promoter DNA methylation beta values and MIR155HG expressions in all gliomas as well as in GBM ( Figure 3C and D). Meanwhile, patients with elevated methylation levels exhibited markedly better overall survival when compared to the low methylation level group ( Figure 3E). In summary, these results suggested that MIR155HG expression is regulated by promoter DNA methylation. Therefore, this epigenetic modification may be a prognostic biomarker for glioma. DNA Methylation Profiles of CpG Islands in MIR155HG Promoter Region Next, we evaluated DNA sequences of the MIR155HG promoter region and identified two CpG islands via the MethPrimer 2.0 (http://www.urogene.org/methprimer2/) ( Figure 4A). CpG island 1 spans −399 to −290, and CpG island 2 covers −82 to +396 when compared to the MIR155HG transcription start site (TSS). And then, we performed bisulfite amplicon sequencing (BSAS) to detect the DNA methylation levels around the CpG island region in glioma samples. The results revealed that LGG tissues exhibited a denser methylation pattern at both CpG islands in MIR155HG promoter region compared with GBM tissues ( Figure 4B and C), consistent with the analysis of the TCGA data. Then for further verification of the effect of DNA methylation on MIR155HG expression, U87, as well as U251 cells, were treated with a DNA methyltransferase inhibitor (5-Aza-dC) at 0, 2.5, 5, and 10 μM, and the MIR155HG expression levels were subsequently examined by qPCR. After DNA methylation inhibition, MIR155HG expression levels were markedly elevated in both cell lines in a dose-dependent manner ( Figure 4D). Transcriptional Activity of MIR155HG Promoter is Regulated by SP1 and DNA Methylation Subsequently, we performed bioinformatic analysis using the JASPAR database (http://jaspar.genereg.net/) for the prediction of transcription factor binding sites in the MIR155HG promoter region. We identified two transcription factor SP1 putative binding sites located at site 1(+60 to +69), and site 2(+136 to +145) (Supplementary Figure S1). SP1, a DNA-binding protein, specifically binds the promoter's GC-rich sequences. This binding capacity has a close association with CpG dinucleotide methylation. Thus, we performed qPCR assay and found that overexpression of SP1 significantly increased MIR155HG expression levels in U87 as well as U251 glioma cell lines ( Figure 4E). Moreover, dual luciferase reporter assay was used to confirm SP1 binding to MIR155HG promoter. As shown in Figure 4F, SP1 overexpression dramatically induced the wild-type MIR155HG promoter activation, while having no effect on the mutated promoter activity. Furthermore, ChIP assay revealed that SP1 could bind to the MIR155HG promoter at site 2 (+136 to +145), whereas no binding was observed at site 1 (+60 to +69) ( Figure 4G). To investigate whether DNA methylation affects the MIR155HG promoter activity, the wild-type MIR155HG promoter luciferase reporter vector, containing 2 CpG islands, was methylated using CpG methyltransferase M.SssI. Luciferase reporter analysis revealed that M.SssI significantly repressed the MIR155HG promoter reporter gene activity compared with no methylase ( Figure 4H). 
In addition, a luciferase assay following cotransfection of methylated/unmethylated MIR155HG promoter constructs and SP1 expression plasmids revealed that SP1-induced activation of the MIR155HG promoter was significantly abolished by M.SssI (Figure 4H). Collectively, these data indicated that SP1 transcriptionally upregulated MIR155HG expression levels by binding its proximal promoter region after DNA hypomethylation. MIR155HG Was Closely Related to Immune Function in Glioma GO enrichment analysis was performed to identify the biological features of MIR155HG. First, genes strongly correlated with MIR155HG expression were selected (Pearson |R|>0.5) from the TCGA as well as CGGA datasets. Biofunctions for these genes were evaluated by GO analysis on DAVID Bioinformatics Resources (https://david.ncifcrf.gov/). In the biological processes analysis, genes that had a positive association with MIR155HG expression were enriched in inflammatory and immune responses (Supplementary Figure S2A and B). Regarding cell components, the enrichment of these positive genes was mainly in the extracellular regions (Supplementary Figure S2C and D), pointing to an involvement of these genes in the TME of glioma. Moreover, molecular function analysis revealed gene enrichment in protein binding (Supplementary Figure S2E and F). In summary, these results suggested that MIR155HG might participate in the regulation of the glioma immune environment. MIR155HG Was Associated with Inflammatory Activities in Glioma To specifically evaluate the function of MIR155HG in the glioma inflammatory response, we analyzed its relationship with seven well-established metagenes using the method described previously. 29 Figure 5A and B show that MIR155HG expression levels were positively correlated with the majority of the clusters in the TCGA and CGGA datasets apart from IgG, which represented B cell activities. Furthermore, corrgrams were generated in R based on Pearson R values between MIR155HG expression and the 7 metagenes (Figure 5C and D). There was a positive association between MIR155HG and HCK, MHC-I, LCK, MHC-II, Interferon, and STAT1, as well as a negative association with IgG, in accordance with the above findings. These findings confirm the important immune functions of MIR155HG in glioma. MIR155HG Expressions Were Correlated with Glioma Immune Infiltrations Immune as well as stromal cells are crucial constituents of the TME. Considering the results of the GO analysis, we further analyzed the correlation between MIR155HG and infiltrated cells in glioma. First, we applied the ESTIMATE algorithm reported by Yoshihara et al to investigate the association between MIR155HG expression levels and ESTIMATE scores. As shown in Figure 6A and B, there were significant positive correlations between MIR155HG expression and immune, stromal, and ESTIMATE scores in glioma in both datasets, indicating that it predominantly influences immune and stromal cell infiltration. Furthermore, we employed the CIBERSORT algorithm to evaluate the proportions of 22 subpopulations of infiltrating immune cells in glioma and then investigated the association between MIR155HG and particular cell populations in the glioma TME. A comparative summary of immune cell proportions in the various MIR155HG expression groups is shown in Supplementary Figure S3. Several kinds of immune cells were enriched in different groups.
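The gene-selection step feeding the GO analysis described above (genes with Pearson |R| > 0.5 against MIR155HG) reduces to a simple correlation filter. The sketch below is illustrative only, with a simulated expression matrix and made-up gene names standing in for the TCGA/CGGA data.

```r
# Minimal sketch (simulated expression matrix): select genes whose expression
# correlates with MIR155HG at |Pearson R| > 0.5, the filtering step that feeds
# the GO enrichment analysis on DAVID described above.
set.seed(3)
n_samples <- 100
n_genes   <- 500
expr_mat  <- matrix(rnorm(n_samples * n_genes), nrow = n_samples,
                    dimnames = list(NULL, paste0("gene", seq_len(n_genes))))
mir155hg  <- rnorm(n_samples)
# make a handful of genes genuinely correlated with MIR155HG for illustration
expr_mat[, 1:10] <- expr_mat[, 1:10] + 2 * mir155hg

r_values <- apply(expr_mat, 2, function(g) cor(g, mir155hg, method = "pearson"))
correlated_genes <- names(r_values)[abs(r_values) > 0.5]
length(correlated_genes)   # genes that would be passed on to DAVID for GO enrichment
```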
Notably, we established that M0 and M2 macrophage populations were significantly higher in the high MIR155HG expression group, while plasma cells and monocytes were significantly enriched in the low MIR155HG expression group. Furthermore, MIR155HG expression levels were significantly positively correlated with 9 cell types and negatively correlated with 7 cell types (Figure 6C). Interestingly, MIR155HG expression showed a strong correlation with M2 macrophages and neutrophils, which are considered to be immunosuppressive cells. 30 In summary, MIR155HG played a role in immune as well as stromal cell infiltration in glioma. MIR155HG Was Associated with Immune Checkpoints Immune checkpoint-targeting agents have been extensively evaluated in preclinical as well as clinical trials for various solid tumors, including glioma. Thus, the available immune checkpoint blocking agents, such as TIM-3, PD-1, B7-H3, PD-L1, CTLA4, and PD-L2, were included in the analysis. Surprisingly, MIR155HG showed strong positive correlations with all these molecules in glioma in both datasets (Figure 7A and B). In GBM, MIR155HG exhibited a consistent correlation with the above immune checkpoints, except for PD-1 and CTLA4 in the TCGA dataset (Supplementary Figure S4A and B). These findings indicate the potential regulatory effects of MIR155HG on immune checkpoints in glioma. MIR155HG Was Correlated with Immunosuppressive Properties Accumulating evidence has demonstrated that increased tumor-associated macrophage (TAM) infiltration predicts poor prognostic outcomes for GBM patients and correlates with glioma progression and tumor grade. 31,32 Therefore, targeting TAMs is a potential therapeutic approach for intractable glioma. Herein, we further analyzed the correlation between MIR155HG and the surface markers of macrophages. Expression of MIR155HG was markedly associated with CD14, CD163, CD68, CD11b, and CD204 in glioma in the TCGA dataset (Figure 7C). Moreover, we observed that MIR155HG was closely related to several key factors that mediate M2 macrophage differentiation, 33 including CSF-1, IL-6, CCL2, IL-10, STAT3, and TGFβ (Figure 7D). Similar results were shown in the CGGA dataset (Supplementary Figure S4C and D). Next, correlation analysis was performed to evaluate the association between MIR155HG expression levels and critical factors involved in the recruitment of TAMs, tumor-associated neutrophils, and myeloid-derived suppressor cells, as well as the immunosuppressive factors that are secreted by these cells. Interestingly, MIR155HG expression was strongly positively correlated with the majority of the factors involved in immunosuppressive cell recruitment 33 as well as immunosuppressive factors 33,34 (Figure 7E and F). In glioma, MIR155HG is a mesenchymal transition-associated lncRNA. 11 Consistently, in this study, the correlations between common EMT markers and MIR155HG were also evident (Supplementary Figure S5A and B). Therefore, in glioma, MIR155HG has a potential immunotherapeutic value. Overexpression of MIR155HG Predicted Worse Survival in Glioma Considering that MIR155HG expression was abnormal in glioma and correlated with the histological grade as well as molecular subtypes, we explored the prognostic significance of MIR155HG in glioma. Figure 8A and B show that glioma patients with high MIR155HG expression levels exhibited significantly shorter overall survival times than those with low MIR155HG expression.
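The checkpoint correlation analysis summarized above (Figure 7A and B) can be visualized with the corrplot package named in the Methods. The sketch below uses simulated expression values and illustrative gene labels; it is not the authors' pipeline.

```r
# Minimal sketch (simulated data): pairwise Pearson correlations between
# MIR155HG and the immune checkpoint genes discussed above, visualized with
# the corrplot package listed in the Methods.
library(corrplot)

set.seed(4)
n <- 150
mir155hg <- rnorm(n)
checkpoints <- sapply(c("PD1", "PDL1", "PDL2", "CTLA4", "TIM3", "B7H3"),
                      function(g) 0.6 * mir155hg + rnorm(n, sd = 0.8))
mat <- cbind(MIR155HG = mir155hg, checkpoints)

cor_mat <- cor(mat, method = "pearson")
corrplot(cor_mat, method = "circle")   # analogous to the corrgrams in Figure 7A and B
```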
Moreover, univariate analysis showed that MIR155HG expression, age at diagnosis, and WHO grade had a significant association with overall survival outcomes (Figure 8C). In multivariate analysis, after adjustment for the above factors, MIR155HG expression was still a significant prognostic factor (Figure 8D). Similar results were obtained in the CGGA dataset (Supplementary Figure S6A and B). In summary, MIR155HG was an independent prognostic factor for glioma patients. Discussion Gliomas are prevalent and lethal brain tumors in adults. Despite the advances in the combination of surgery, chemotherapy, and radiotherapy, improvements in patient survival are very limited. 35 Therefore, exploring new therapeutic approaches to prolong the survival of glioma patients is urgently needed. Immunotherapy is a potential option for glioma. 36 In particular, immune checkpoint blockade (eg, CTLA4, PD-1) 37,38 and chimeric antigen receptor T-cell immunotherapy (CAR-T) 39 have exhibited potential benefits in the treatment of glioma. Nevertheless, current immunotherapies for glioma are difficult to apply widely because of low immunogenic response rates, therapeutic resistance, and autoimmune-like side effects. It is thus of great importance to develop an effective immunotherapeutic option for glioma by targeting novel molecular markers as well as molecules that have crucial roles in immunosuppression. Previously, we found that MIR155HG is elevated in GBM and predicted a worse overall survival for GBM patients. 11 Herein, we further identified that MIR155HG expression levels are elevated in IDH wild-type and mesenchymal subtype gliomas, indicating that MIR155HG was associated with more malignant biological processes. In addition, MIR155HG may serve as an indicator of the IDH wild-type as well as mesenchymal subtype, and an independent prognostic factor for glioma. DNA methylation, which is the main epigenetic modification, plays various roles in the initiation and progression of different tumors. 40,41 Aberrantly methylated promoters are potential markers for glioma diagnosis as well as prognosis. 42,43 We took advantage of TCGA RNA-seq and DNA methylation data and demonstrated that the promoter region of MIR155HG was aberrantly hypomethylated in GBM relative to LGG. The promoter methylation levels were inversely correlated with MIR155HG expression and had a predictive value for glioma patients, which was consistent with the results found for mRNA levels. In vitro experiments further confirmed that GBM tissues had markedly lower methylation levels of the MIR155HG promoter region when compared to LGG tissues, and MIR155HG expression was remarkably dose-dependently upregulated after 5-Aza-dC treatment in glioma cell lines, which indicated that MIR155HG promoter region hypomethylation may be highly associated with the elevated MIR155HG expression levels in glioma. DNA demethylation is involved in the transcriptional reactivation of viral genes in cancer. 44 Abnormal DNA methylation may also lead to abnormal lncRNA expression in tumor progression. DNA demethylation enhances the accessibility of promoter regions to transcription factors. In this study, specific SP1 binding sites were found to be localized in the CpG island region of the MIR155HG promoter. SP1 is ubiquitously expressed and activates target gene transcription by binding GC-rich promoter elements, which are often interrupted by DNA methylation.
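The univariate and multivariate survival analyses reported above (Figure 8C and D) are the kind of Cox proportional hazards models that the survival package supports. The sketch below is a minimal illustration on simulated data, not the analysis run on the TCGA cohort; the covariate names simply mirror the factors named in the text.

```r
# Minimal sketch (simulated data): univariate and multivariate Cox regression
# with MIR155HG expression, age at diagnosis, and WHO grade as covariates,
# mirroring the analysis summarized in Figure 8C and D.
library(survival)

set.seed(5)
n <- 400
expr  <- rnorm(n)
age   <- rnorm(n, mean = 50, sd = 12)
grade <- factor(sample(c("II", "III", "IV"), n, replace = TRUE), levels = c("II", "III", "IV"))
hazard <- 0.02 * exp(0.5 * expr + 0.02 * age + 0.4 * (as.integer(grade) - 1))
time   <- rexp(n, rate = hazard)
status <- rbinom(n, 1, 0.8)

# Univariate model for MIR155HG expression alone
summary(coxph(Surv(time, status) ~ expr))

# Multivariate model adjusting for age and WHO grade
summary(coxph(Surv(time, status) ~ expr + age + grade))
```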
We found that overexpression of SP1 enhanced the expression of MIR155HG and the transcriptional activity of the MIR155HG promoter in glioma cells. However, M.SssI-induced hypermethylation of the MIR155HG promoter region abrogated this activation, indicating that SP1-associated transcriptional activation of the MIR155HG promoter was suppressed by DNA methylation. MIR155HG promoter hypomethylation may result in increased SP1 binding and subsequent transactivation. This partially explained why the expression of MIR155HG in GBM, with low MIR155HG promoter methylation, was markedly higher than that in LGG, with high methylation. Immune checkpoint blockade, the most promising approach to achieve encouraging antitumor effects, has shown great progress for the treatment of various advanced solid malignancies. 45,46 Immune checkpoint blockade takes advantage of tumor immune cell infiltration to reactivate an effective anti-tumor immune response. We evaluated the association between MIR155HG and immune checkpoints, such as TIM-3, PD-1, B7-H3, PD-L1, CTLA4, and PD-L2, and found that MIR155HG had a high concordance with these checkpoint members, consistent with previous findings. 28 These findings implied the potential regulatory effects of MIR155HG on immune checkpoints and provided potential strategies for glioma treatment. The TME is composed of a diverse mixture of nontumor cells, including immune cells, stromal cells, and vascular endothelial cells, as well as ECM. The TME usually suppresses efficient lymphocyte initiation and prevents effector cell infiltration, leading to tumor progression, drug resistance, immune evasion, and immune suppression. 47 The composition of the TME has also been shown to influence outcomes of immune checkpoint blockade therapy. Herein, we found that MIR155HG plays a role in the immune and inflammatory responses as well as in the TME of glioma. Furthermore, there was a significant correlation between MIR155HG expression and infiltrating stromal and immune cells, such as plasma cells, monocytes, macrophages, and neutrophils. In addition, MIR155HG was prominently correlated with a series of immunosuppressive factors in glioma, implying a vital function of MIR155HG in tumor immune regulation. Moreover, MIR155HG expression levels were positively related to marker genes of macrophages and key factors that promote macrophage differentiation towards the M2 phenotype, suggesting that MIR155HG has a regulatory function in TAM polarization. MIR155HG expression levels were also notably positively associated with chemokines or cytokines involved in immunosuppressive cell recruitment, including tumor-associated neutrophils, TAMs, and myeloid-derived suppressor cells, as well as immunosuppressive factors secreted by these cells. The EMT has a vital role in tumor immunosuppression as well as immune evasion. 48 The interaction between EMT genes and the TME may facilitate tumor progression. Additionally, EMT is associated with the activation of different immune checkpoint molecules, such as PD-1, PD-L2, TIM-3, PD-L1, B7-H3, and CTLA4. 49 EMT scores may also provide a novel biomarker for the prediction of clinical responses to immune checkpoint blockade. 50 Previously, we found that MIR155HG promoted the progression and EMT of glioma. 11 Additionally, Cui et al showed that MIR155HG promoted laryngeal squamous cell carcinoma EMT through miR-155-5p/SOX10 axis regulation.
17 In the present study, we further confirmed that MIR155HG-related genes are mainly enriched in the extracellular region and that MIR155HG was significantly associated with EMT markers. Accordingly, we speculated that MIR155HG is involved in immunosuppressive microenvironment regulation and facilitates tumor immune evasion in glioma by regulating M2 polarization, recruiting immunosuppressive cells, increasing the secretion of immunosuppressive factors, and promoting EMT. Mechanistically, MIR155HG may modulate tumor immunity through Wnt/β-catenin signaling pathway activation. Previously, we showed that MIR155HG activated β-catenin expression via the miR-155/PCDHs axis in glioma. Recently, He et al found that MIR155HG enhances temozolomide resistance through Wnt/β-catenin pathway activation by binding PTBP1 in glioma. These findings highlight the important role of the Wnt/β-catenin signaling pathway in the downstream regulation of MIR155HG. Aberrant Wnt/β-catenin pathway activation is implicated in various cancer types and is a key regulator of the TME. 51 Notably, Wnt/β-catenin signaling, as well as its interactions with immune cells, has both negative and positive effects on anticancer immunosurveillance. It is involved in leucocyte maintenance and renewal while promoting immune tolerance and immune evasion and limiting the antitumor immune response. 52 Furthermore, Wnt/β-catenin signaling also plays a crucial role in EMT and thus enhances cancer stem cell maintenance. Based on these findings, we postulate that MIR155HG modulates tumor-immune interactions via the Wnt/β-catenin signaling pathway. These findings should be verified by further molecular biology or cell experiments. Conclusion In conclusion, we established a novel role of MIR155HG in the promotion of malignant phenotypes that enhance glioma immune evasion. Therefore, MIR155HG is a potential novel immunotherapeutic target for glioma.
2022-03-10T16:14:08.357Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "2a8e28f6c3460d76757ea5a57e3778bce0aa6ede", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=78951", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ea831be942ac7b1cb9ffb6eb426193a75714adc0", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233396274
pes2o/s2orc
v3-fos-license
Room-Temperature Structure of Xylitol-Bound Glucose Isomerase by Serial Crystallography: Xylitol Binding in the M1 Site Induces Release of Metal Bound in the M2 Site Glucose isomerase (GI) is an important enzyme that is widely used in industrial applications, such as in the production of high-fructose corn syrup or bioethanol. Studying inhibitor effects on GI is important for deciphering GI-specific molecular functions, as well as potential industrial applications. Analysis of the existing xylitol-bound GI structure revealed low metal occupancy at the M2 site; however, it remains unknown why this phenomenon occurs. This study reports the room-temperature structures of native and xylitol-bound GI from Streptomyces rubiginosus (SruGI) determined by serial millisecond crystallography. The M1 site of native SruGI exhibits distorted octahedral coordination; however, upon xylitol binding, the M1 site exhibits geometrically stable octahedral coordination. This change results in the rearrangement of metal-binding residues for the M1 and M2 sites, the latter of which then displays distorted metal coordination, leading to unstable coordination of Mg2+ at the M2 site and possibly explaining the low metal-binding affinity. These results enhance the understanding of the configuration of the xylitol-bound state of SruGI and provide insights into its future industrial application. Introduction Glucose isomerase (GI, D-xylose ketol isomerase; EC 5.3.1.5), also known as xylose isomerase, catalyzes the reversible isomerization of D-glucose and D-xylose to their ketoses D-fructose and D-xylulose, respectively [1]. In GI-catalyzed interconversion, the R-hydroxy aldehyde and R-hydroxy ketone of sugars are isomerized through the formal transfer of two hydrogens [2]. These sugars produced by GI are not only involved in crucial metabolic roles but also widely applied in various industries. In particular, GI is among the most important and highly applicable enzymes employed in the food industry for isomerizing starch-based D-glucose into D-fructose to produce high-fructose corn syrup (HFCS) [3][4][5], which is widely used in food industries, including as a sweetener in soft drinks and other food products, where it replaces beet or cane sugar. Additionally, HFCS is used in the pharmaceutical industry [3,6]. Moreover, the high catalytic activity of GI from Saccharomyces cerevisiae has been utilized for the economical production of bioethanol from cellulosic biomass [7,8]. A previous study solved the crystal structure of xylitol-bound GI from Streptomyces rubiginosus (SruGI; PDB code: 5Y4J) [18]. In that study, SruGI crystals containing only one metal at the M1 site were used, and xylitol was stably bound to the M1 site. These results only demonstrated that xylitol is able to bind in one metal-binding mode [18]. During an extensive review of xylitol binding to SruGI, evaluation of four other crystal structures of xylitol-bound SruGI (PDB codes: 1XIG [19], 2XIS [9], 3GNX (unpublished), and 4DUO [20]) revealed that the metal ion at the M2 site commonly has a higher atom displacement parameter (ADP, also known as B-factor) value than that at the M1 site (Table 1). However, the reason for the low metal occupancy at the M2 site remains unknown. In the present study, two possible scenarios are considered for the low metal occupancy at the M2 site: (i) Radiation damage of the metal ion at the M2 site by X-ray exposure.
In general, data collection using conventional X-ray crystallography produces both global and specific radiation damage [21]. Because elements with a high atomic number (Z) strongly absorb ionizing radiation [22], the electron density of the metal-binding site in SruGI can be affected by X-ray irradiation. (ii) Xylitol induces low metal occupancy at the M2 site. This would mean that xylitol, while maintaining its interaction with the metal of the M1 site, affects the residues comprising the M2 site and thereby its metal binding. Of the two hypotheses, the first can be addressed because radiation damage of metal sites can be minimized using serial crystallographic methods [23,24]. In serial crystallography approaches, multiple crystal samples are continuously delivered to the X-ray interaction position, and each crystal sample is exposed only once to the X-ray [24][25][26], thereby allowing the room-temperature crystal structure to be observed with minimal radiation damage. In particular, this method is advantageous for allowing observation of radiation-sensitive metalloproteins and structural flexibility at room temperature [27,28]. To explain the low metal occupancy at the M2 site of the xylitol-bound state of SruGI, fixed-target serial-millisecond crystallography was performed, and room-temperature structures of native and xylitol-bound SruGI were determined at 1.5 Å and 1.4 Å resolution, respectively. Structural comparison of native and xylitol-bound SruGI showed that xylitol induces a more stable metal-coordination geometry at the M1 site, while distorting the metal coordination geometry of the M2 site. These results not only provide a biologically relevant room-temperature structure of xylitol-bound SruGI with minimized radiation damage through serial crystallography but also offer insights into the molecular mechanism associated with how xylitol affects metal coordination at the M2 site. Data Collection To provide accurate structural information on the xylitol effect on the M2 site of GI, fixed-target serial-millisecond crystallography was performed at room temperature. For native SruGI, a total of 38,400 images were collected over a period of 1.1 h (Table 2). Among these, 19,461 images contained diffraction patterns, with a crystal hit rate of 50.67%. Of these, 23,552 indexed diffraction patterns were obtained from 15,759 indexed images (Supplementary Figure S1). The indexing rate and multi-crystal hit rate were 80.97% and 49.45%, respectively. Native SruGI data were processed up to 1.5 Å, with an overall completeness, SNR, Rsplit, and CC of 100, 3.46, 20.10, and 0.9484, respectively. For xylitol-bound SruGI, a total of 59,400 images were collected over a period of 2 h (Table 2). Among these, 11,799 images contained diffraction patterns, with a crystal hit rate of 19.86%. Of these, 8271 indexed diffraction patterns were obtained from 7604 indexed images (Supplementary Figure S2). The indexing rate and multi-crystal hit rate were 64.40% and 8.77%, respectively. Xylitol-bound SruGI data were processed up to 1.4 Å, with an overall completeness, SNR, Rsplit, and CC of 100, 4.15, 18.33, and 0.9520, respectively. Native and xylitol-bound SruGI crystals displayed similar unit cell parameters and contained one molecule per asymmetric unit in the orthorhombic I222 space group (Table 2).
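As a quick consistency check, the hit and indexing rates quoted above follow directly from the stated image counts. The sketch below is a simple arithmetic illustration, not part of the authors' processing pipeline (which used Cheetah and CrystFEL).

```r
# Minimal sketch: recompute the crystal hit rate and indexing rate reported in
# the Data Collection section from the stated image counts.
rate_percent <- function(numerator, denominator) 100 * numerator / denominator

# Native SruGI: 19,461 hit images out of 38,400 collected; 15,759 of the hits indexed
rate_percent(19461, 38400)   # ~50.68%, in line with the reported 50.67%
rate_percent(15759, 19461)   # indexing rate ~80.98%, in line with the reported 80.97%

# Xylitol-bound SruGI: 11,799 hits out of 59,400; 7,604 indexed images
rate_percent(11799, 59400)   # ~19.86%
rate_percent(7604, 11799)    # ~64.45%, close to the reported 64.40%
```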
As a consequence, the crystallographic results of native SruGI and xylitol-bound SruGI obtained from soaking in xylitol molecules were almost the same. The electron density maps were obtained by molecular replacement, with both the room-temperature native and xylitol-bound SruGI structures determined at 1.5 Å and 1.4 Å resolution, respectively. The overall electron density of native (Tyr3 to Arg387) and xylitol-bound (Tyr3 to Ala386) SruGI was clearly observed (Figure 1a,b). Overall Structure The SruGI monomer consisted of 15 α-helices and eight β-strands, which form a TIM-barrel that contains an active site and an extended C-terminal α-helix domain (Figure 2a). The metal-binding site and xylitol molecule are located on the TIM-barrel domain. The monomer subunit further forms the functional tetrameric assembly observed in the crystal symmetry (Figure 2b). The subunit topology and tetrameric assembly of SruGIs were identical with previously reported crystal structures of SruGI [10,11,13,29,30]. The structure of native SruGI was highly similar to other room-temperature crystal structures of native SruGI determined by serial crystallography (PDB codes: 6KCA, 6KD2, 7BVL, and 7CK0) [10,11,29,30], with r.m.s.d. values ranging from 0.088 Å to 0.286 Å (Supplementary Figure S3). Moreover, the structure of xylitol-bound SruGI was similar to other crystal structures of xylitol-bound SruGI determined by traditional X-ray crystallography (PDB codes: 1XIG, 2XIS, 3GNX, 4DUO, and 5Y4J), with r.m.s.d. values ranging from 0.123 Å to 0.273 Å (Supplementary Figure S3). The structures of native and xylitol-bound SruGI in this study were highly similar, with an r.m.s.d. of 0.112 Å and with no significant difference in ADP for the overall structure (Supplementary Figure S3). These results suggested that xylitol binding caused no significant structural changes to the overall structure of SruGI.
The Metal-Binding Sites and Xylitol-Binding in SruGI Refinement of the structure of room-temperature native SruGI in the absence of metal ions revealed spherical, strong Fo-Fc omit maps (>7 sigma) corresponding to Mg 2+ ions in both the M1 and M2 sites (Figures 3a and S4), indicating that Mg 2+ was bound in both the M1 and M2 sites. Comparison of Native and Xylitol-Bound SruGI To identify how the xylitol molecule affects the observed low metal occupancy at the M2 site, the metal-binding sites in xylitol-bound SruGI were compared with those in native SruGI. Because xylitol is bound in the M1 site, this allows it to affect the overall site geometry. Two water molecules were observed around the M1 site in native SruGI, which lie at distances of 3.00 Å and 3.56 Å from the Mg 2+ . These two water molecules with Asp245 (OD2) and Glu217 (OE2) form the octahedral mirror plane, with bond angles for Glu217 (OE2)-M1-Asp245 (OD2), Glu217 (OE2)-M1-W1, Asp245 (OD2)-M1-W2, and W1-M1-W2 of 109.31°, 103.73°, 91.03°, and 55.40°, respectively (Figure 5a). Additionally, the bond angle of Glu181 (OE2)-M1-Asp287 (OD2), which corresponds to the axis of octahedral coordination, is 148.20° (Figure 5a). As a result, the M1 site in native SruGI shows a distorted octahedral coordination. In xylitol-bound SruGI, the water molecules coordinated in the M1 site of native SruGI are replaced by the O2 and O4 atoms of xylitol, which display shorter bond distances (2.34 and 2.28 Å, respectively) from Mg 2+ in the M1 site. The O2 and O4 atoms of xylitol with Asp245 (OD2) and Glu217 (OE2) form the octahedral mirror plane.
Accordingly, xylitol binding in the M1 site results in rearrangement of the metal-binding residues for both the M1 and M2 sites in SruGI. Mg 2+ in the M1 site shifts 0.24 Å towards the center of the M1 octahedron, while the water molecule in the M2 site is shifted 0.89 Å farther from the M1 site, relative to the Mg 2+ ion in the native structure (Figure 6). Consequently, the centers of both the sites shift away from each other, resulting in separation distances between sites in the native and metal-bound SruGI of 4.96 Å and 5.69 Å, respectively. For the metal-binding residues in the M1 site, there was no significant change in conformation in the side chains of Glu181 and Asp245 responsible for the metal-binding platform. By contrast, the Glu217 and Asp287 side chains were rotated by ~10° and 20°, respectively, toward the metal in the M1 site, which resulted in a shift <0.77 Å in the atoms responsible for metal-binding (Figure 6). In the M2 site, the side chain of Glu217 involved in metal coordination was shifted 0.51 Å toward the M1 site. Moreover, the Asp257 side chain shifted 0.48 Å toward the M2 site, whereas that of His220 shifted 0.40 Å in the direction of the bound xylitol in the M1 site. Specifically, the Asp255 side chain was rotated ~120°, which shifted the orientation of the metal-binding atom >2.0 Å relative to the position observed in native SruGI. Furthermore, the Glu186 side chain was rotated by ~67° away from the M2 site, which appeared to be necessary to avoid steric hindrance with the altered orientation of the Asp255 side chain. This result demonstrated that xylitol binding with the metal bound in the M1 site has a structural effect on not only the M1 site but also the M2 site and its surrounding residues.
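Metal-ligand distances and ligand-metal-ligand angles of the kind quoted above are straightforward to compute from atomic coordinates. The sketch below is purely illustrative: the coordinate values are invented, not taken from the deposited structures.

```r
# Minimal sketch: computing a metal-ligand distance and a ligand-metal-ligand
# bond angle from Cartesian coordinates, as used to describe the octahedral
# geometry of the M1 site above. The coordinates are invented for illustration only.
bond_distance <- function(a, b) sqrt(sum((a - b)^2))

bond_angle <- function(a, centre, b) {
  u <- a - centre
  v <- b - centre
  acos(sum(u * v) / (sqrt(sum(u^2)) * sqrt(sum(v^2)))) * 180 / pi
}

mg1    <- c(0.00, 0.00, 0.00)    # hypothetical Mg2+ position at the M1 site
glu217 <- c(2.10, 0.00, 0.00)    # hypothetical Glu217 OE2 position
asp245 <- c(-0.60, 2.05, 0.00)   # hypothetical Asp245 OD2 position

bond_distance(mg1, glu217)       # Mg-O distance in angstroms
bond_angle(glu217, mg1, asp245)  # O-Mg-O angle in degrees
```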
Figure 6. Superimposition of the metal- and xylitol-binding sites of native (green) and xylitol-bound (cyan) SruGI at room temperature.
Discussion GI is an important enzyme commonly used in the production of HFCS and bioethanol. Biological and structural analyses of GI substrates or inhibitors, such as xylitol, provide important information that can be employed in enzyme engineering for industrial applications. This study was conducted to determine why metal occupancy in the M2 site is low in xylitol-bound SruGIs. The hypotheses were that this was an artefact of X-ray-related radiation damage to the metal bound in the M2 site or structural changes to the M2 site conformation by xylitol. To test the first hypothesis, room-temperature structures for both native and xylitol-bound SruGI were determined by serial crystallography in order to minimize radiation damage. The native SruGI structure clearly showed electron density corresponding to the Mg 2+ in both M1 and M2 sites. This suggested that radiation did not affect metal occupancy in the M2 site. For the second hypothesis, comparison of the native structure with the xylitol-bound structure showed variations in bond lengths and angles between the metal and metal-binding residues at the M1 site. Specifically, the bond lengths and angles associated with the octahedral geometry ranged from 2.00 Å to 2.20 Å and from 81.47° to 106°, respectively, whereas those for the M2 site ranged from 2.10 Å to 2.60 Å and from 76.91° to 99.96°, respectively. This indicated that the M1 site demonstrated a higher-affinity interaction with Mg 2+ than the M2 site, whereas in terms of octahedral geometry, the bond angles in the M2 site appeared more stable. Upon xylitol binding to the M1 site, configuration changes in the metal-binding residues of both the sites were observed. In particular, the Mg 2+ in the M1 site shifted 0.28 Å toward the Asp245 side chain, and the Glu217 side chain shifted 0.30 Å toward the bound Mg 2+ and farther from the Mg 2+ bound in the M2 site by 0.60 Å relative to the native SruGI structure. This resulted in a Glu217-Mg 2+ -Asp245 bond angle that was close to an ideal octahedral geometry (105.91° to 98.65°). Thus, the xylitol-bound SruGI shows bond lengths and angles between the metal-binding residues and Mg 2+ in the M1 site of 1.97 Å to 2.20 Å and 92.81° to 98.65°, respectively, versus 2.57 Å to 3.36 Å and 101.22° to 112.12° in M2, respectively. As a result, when xylitol binds to the M1 site, metal binding remains strong relative to that observed in native SruGI, whereas the water molecules bound in the M2 site demonstrate weak affinity and distorted octahedral geometry. The room-temperature structures of native and xylitol-bound SruGI offered insights into the mechanism underlying metal release at the M2 site by xylitol binding to the M1 site (Figure 7). In native SruGI, the Mg 2+ bound in the M1 and M2 sites maintains octahedral geometry through interactions with Glu217. Upon xylitol binding to the Mg 2+ in the M1 site, rearrangement of xylitol-binding residues does not alter this geometry in M1, whereas coordination of the Mg 2+ in M2 is changed through interactions with Glu217 and His220 located proximal to the M1 site. This resulted in distortion of the M2 site, thereby causing unstable coordination of Mg 2+ , subsequent release of Mg 2+ from M2, and a conformational change in Asp255 and Glu186.
To date, five xylitol-bound SruGI structures (PDB codes: 1XIG, 2XIS, 3GNX, 4DUO, and 5Y4J) have been determined using traditional X-ray crystallography. These structures provide useful information for understanding the xylitol-bound state of SruGI; however, they are less biologically relevant because of radiation damage associated with use of the X-ray. Moreover, in the 3GNX structure, the xylitol and metal ions were not accurately oriented according to the electron density map [18]. Therefore, the room-temperature xylitol-bound SruGI determined by serial crystallography potentially provides more biologically relevant structural information. For comparison, the structure of xylitol-bound SruGI determined by serial crystallography was superimposed with the structures of xylitol-bound variants previously determined using traditional crystallographic techniques (PDB codes: 1XIG, 2XIS, 4DUO, and 5Y4J) (Supplementary Figure S6). The results revealed clear differences in the conformations of the M1 and M2 sites (Figure 8). However, given the differences in crystallization conditions and data-collection environments, direct structural comparisons are inappropriate. For example, the metal ions in sites M1 and M2 in the 1XIG and 3GNX structures were modeled with Mn 2+ rather than with Mg 2+ , used in the present study. Additionally, the final structures of 1XIG, 2XIS, 3GNX, and 4DUO were refined with a metal ion bound in the M2 site, which affects the conformation of neighboring residues. Nevertheless, in the 2XIS, 4DUO, and 5Y4J structures, the Glu186 and Asp255 side chains changed conformations upon xylitol binding to M1 (Figure 8). As noted, the ADP values of the metals in the M2 site of the xylitol-bound 1XIG, 2XIS, 3GNX, and 4DUO structures were higher than those for the overall proteins (Table 1), suggesting that the metal bound in the M2 site in xylitol-bound SruGI is released or has low metal occupancy.
Accordingly, re-refinement of the 3GNX and 4DUO structures was performed, where the bound metals were replaced with water molecules in the M2 site using the deposited structure factors from the PDB (structure factors of 1XIS and 2XIS are not available in the PDB). The results showed ADP values of water at the M2 site of 3GNX and 4DUO close to those of the overall protein, along with the absence of a positive Fo-Fc electron density map corresponding to the metals (Table 3 and Supplementary Figure S7). These findings indicate that the metal bound in the M2 site of previously determined xylitol-bound SruGIs might not be fully occupied, or that a water molecule is bound instead, which is consistent with our finding that xylitol binding induces the release of the metal bound in the M2 site. On the contrary, the high ADP value of the M2 site of SruGI might have arisen from incorrect assignment of the metal cation. To confirm this, Mg 2+ or Mn 2+ was added to the metal sites of SruGI-SX, 3GNX, 4DUO, and 5Y4J, followed by refinement and evaluation using the CheckMyMetal server. In all the xylitol-bound SruGI structures, it was confirmed that the M2 site still showed a high ADP value (Supplementary Table S1). This indicates that the metal is not fully occupying the M2 site of xylitol-bound SruGI. These observations of the possible mechanism of metal release from the M2 site by xylitol binding to the M1 site of GI at room temperature are useful for industrial applications. For example, binding of xylitol or a similar inhibitor to M1 will release the metal bound in the M2 site, resulting in no isomerization activity of GI even if the inhibitor is subsequently released from the M1 site.
Use of this configuration in the food or bioethanol industry will require removal of the inhibitor and subsequent addition of metal ion to allow binding to the M2 site in order to promote catalysis. Sample Preparation Glucose isomerase from Streptomyces rubiginosus (Cat. No. HR7-102) was purchased from Hampton Research (Aliso Viejo, CA, USA) as a crystal suspension containing crystals of various sizes (5-60 µm) in a solution of 6 mM Tris-HCl, pH 7.0, 0.91 M (NH4)2SO4, and 1 mM MgSO4. This crystal suspension was used directly for X-ray data collection without further purification. The crystal suspension (200 µL) was transferred to a 1.5 mL microcentrifuge tube using a pipette and left at room temperature for 20 min to allow the crystals to settle. The supernatant was then removed so that the volume ratio of crystals and supernatant was 1:1. A nylon mesh-based sample holder was used for sample delivery during fixed-target serial crystallography as previously reported [31]. The 30 µL crystal suspension was loaded into a nylon-mesh-based sample holder (dimension: 8 × 8 mm) with a nylon mesh of 60 µm pore size. After spreading the crystal sample evenly over the film with a pipette tip, 10 µL of the supernatant was removed using a pipette. Subsequently, a second polyimide film was used to cover the sample holder to prevent dehydration of the crystal sample. For preparation of xylitol-bound SruGI crystals, the crystal supernatant was transferred into a 1.5 mL microcentrifuge tube, centrifuged at 12,032× g for 5 min, and the supernatant was transferred to another microcentrifuge tube. The xylitol powder (Cat No. X3375; Sigma-Aldrich, St. Louis, MO, USA) was dissolved in the supernatant to a final xylitol concentration of 20 mM. This xylitol solution was added to the crystal suspension and incubated for 1 h. The method used to load the sample onto the nylon mesh-based sample holder was the same as that for native SruGI sample preparation. Data Collection Fixed-target serial millisecond crystallography experiments were performed at beamline 11C at the Pohang Accelerator Laboratory (Pohang, South Korea) [32]. The beam size of the focused X-ray at the sample position was 6.5 µm (vertical) × 8.5 µm (horizontal) full width at half maximum. The X-ray wavelength and flux were 0.9795 Å and 1.2 × 10 12 photons/s, respectively. The nylon mesh-based sample holder containing the crystal samples was mounted on the goniometer. Raster scans were performed at a 50 µm scan interval in both vertical and horizontal directions. The data collection strategy was similar to that reported previously [33]. The crystals were exposed to the X-ray beam for 100 ms, with an oscillation of 0.007° at each raster scan point. Diffraction data were recorded with a PILATUS 6M detector (Dectris, Baden-Dättwil, Switzerland) with a 10 Hz readout. Diffraction datasets were collected at room temperature (25 °C). Data collection statistics are shown in Table 2. Data Processing and Structure Refinement Among the total collected images, hit images containing >20 Bragg peaks with signal-to-noise ratios (SNRs) >5 were filtered using the Cheetah program [34]. These images were further indexed and scaled using the CrystFEL program [35]. The phasing problem was solved using the molecular-replacement method, as implemented in MOLREP [36], with the crystal structure of SruGI (PDB code: 7CK0) [29] as the search model.
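As a small worked example tied to the 20 mM xylitol soak described under Sample Preparation above, the mass of xylitol required can be computed from its molecular weight. The working volume used below (1 mL) is an assumption for illustration; the text does not state the supernatant volume.

```r
# Minimal sketch: mass of xylitol needed for a 20 mM solution. The supernatant
# volume here (1 mL) is an assumed, illustrative value; the text does not state it.
xylitol_mw <- 152.15   # g/mol, molecular weight of xylitol (C5H12O5)
conc_mM    <- 20       # target concentration in mM
volume_mL  <- 1        # assumed working volume of supernatant

moles   <- (conc_mM / 1000) * (volume_mL / 1000)   # mol
mass_mg <- moles * xylitol_mw * 1000               # mg
mass_mg                                            # ~3.04 mg of xylitol per mL of supernatant
```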
Model building was performed using the COOT program [37], and structural refinement was performed using REFMAC5 [38]. The geometries of the final models were checked using MolProbity [39]. Refinement statistics are shown in Table 2. The figures were generated by PyMOL (Schrödinger, LLC, New York, NY, USA). The structure factor and coordinate files were deposited in the Protein Data Bank under the accession codes 7DFJ (native SruGI) and 7DFK (xylitol-bound SruGI). The diffraction images were deposited in CXIDB under ID 173 (native SruGI) and 174 (xylitol-bound SruGI). Re-Refinement of the Deposited Xylitol-Bound SruGI Structure The structure factors and coordinates for xylitol-bound SruGI (PDB codes: 3GNX and 4DUO) were downloaded from the Protein Data Bank. After replacing the metal at the M2 site with water molecules, re-refinement was performed with REFMAC5 [38]. The metal-binding sites were validated using the CheckMyMetal (CMM) server [40]. Conclusions In summary, the room-temperature structures of native and xylitol-bound SruGI were determined using serial millisecond crystallography in order to avoid possible radiation damage and to obtain a more biologically relevant structure than currently available. Structural comparison of native and xylitol-bound SruGI suggested that xylitol binding in the M1 site induces the release of the metal bound in the M2 site, which is involved in enzyme activity. These results provide insights into the effect of the xylitol inhibitor on SruGI activity and its possible future industrial applications. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijms22083892/s1, Figure S1: Diffraction image of native SruGI. Figure S2: Diffraction image of xylitol-bound SruGI. Figure S3: Structural comparison of crystal structures of SruGIs. Figure S4: The Fo-Fc omit electron density map for the metal-binding site of native and xylitol-bound SruGI. Figure S5: 2Fo-Fc and Fo-Fc electron density map of xylitol-bound SruGI during the structure refinement. Figure S6: Superimposition of xylitol-bound SruGI. Figure S7: Re-refinement of the deposited xylitol-bound SruGIs. Table S1: Atomic displacement parameters (ADP) of the re-refined metal-binding sites of xylitol-bound SruGIs. Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.
2021-04-27T05:13:44.975Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "cd04f2ea228d1f4d08a9878694ca0730ae5f82a5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/22/8/3892/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd04f2ea228d1f4d08a9878694ca0730ae5f82a5", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
256097102
pes2o/s2orc
v3-fos-license
The ANA-reflex test as a model for improving clinical appropriateness in autoimmune diagnostics Reflex tests are widely used in clinical laboratories, for example, to diagnose thyroid disorders or in the follow-up of prostate cancer. Reflex tests for antinuclear antibodies (ANA) have recently gained attention as a way to improve appropriateness in the immunological diagnosis of autoimmune rheumatic diseases and avoid waste of resources. However, the ANA-reflex test is not as simple as other consolidated reflex tests (the TSH-reflex tests or the PSA-reflex tests) because of the intrinsic complexity of the ANA test performed by the indirect immunofluorescence method on cellular substrates. The wide heterogeneity of the ANA patterns, which need correct interpretation, and the subsequent choice of the most appropriate confirmatory test (ANA subserology), which depend on the pattern feature and on clinical information, hinder any informatics automation, and require the pathologist's intervention. In this review, the Study Group on Autoimmune Diseases of the Italian Society of Clinical Pathology and Laboratory Medicine provides some indications on the configuration of the ANA-reflex test, using two different approaches depending on whether clinical information is available or not. We further give some suggestions on how to report results of the ANA-reflex test. Introduction The term reflex test indicates a "cascade" diagnostic approach where a positive initial (first level) test automatically triggers further (second level) tests based on predefined rules applied to information systems. Cascade algorithms have been used for some time in autoimmune diagnostics, in particular for the detection of anti-nuclear-cytoplasmic antibodies (ANA) [1][2][3], but in spite of its obvious contribution in terms of diagnostic appropriateness, the ANA-reflex test is not yet widely implemented [4,5]. As we shall see shortly, this is related to the complexity of the diagnostic algorithm of the ANA-reflex test, which does not rely on informatics automatism, but rather on the intervention of a pathologist based on clinical information and preceding results [6], and should, in fact, be more appropriately defined as "ANA-reflective" testing [7]. Be that as it may, for simplicity, custom, and convenience, in this text, we will refer to "ANA-reflex" testing. The ANA-reflex test differs from other current laboratory reflex tests both conceptually and organizationally. For example, the thyroid-stimulating hormone (TSH)-reflex test relies on the sequential execution of specific tests, inserted into a well-defined algorithm based on the TSH test result, without the need for decisional intervention by the operators. ANA-reflex testing is certainly more complex than TSH-reflex or other reflex testing for several reasons. First and foremost, ANA testing has a very low predictive value. Second, and by no means less importantly, ANA is a first-level test not for the diagnosis of a sole condition, but for several systemic autoimmune rheumatic diseases (systemic lupus erythematosus, mixed connective tissue disease, undifferentiated connective tissue disease, Sjögren's syndrome, and scleroderma), as well as autoimmune hepatic disorders. Third, ANA is detected by indirect immunofluorescent antibody assay (IIF), a subjective interpretative assay, with all the associated variables applicable to this type of method.
Furthermore, ANA testing by IIF is complicated by the number of positive patterns attainable (>50), which necessitates interpretation by a pathologist and requires appropriate confirmation tests. Another peculiar characteristic is that ANA testing at the dilution of 1:40 in IIF can be positive in up to 20-30% of healthy subjects, and finally, certain positive ANA patterns, like dense fine speckled or DFS70 if monospecific, are not associated with systemic autoimmune disorders even at high titres [8][9][10]. For these reasons, the introduction of an ANA-reflex test is an intriguing challenge both in terms of approach and algorithm construction. However, these difficulties should not impede the application of ANA-reflex testing, considering its undeniable advantages. ANA-reflex testing could, indeed, be useful to the general practitioner or to the non-rheumatology specialist who entrusts the seroimmunological investigation of a patient with a potential systemic autoimmune rheumatic disorder to the laboratory. The objective is to simplify the patient work-up: a single visit to the doctor's surgery, a single visit to the laboratory, and thus a more rapid clinical diagnosis. The economic implications of ANA-reflex testing would be very relevant if its application led to a reduction of second-level tests, e.g., antibodies to intracellular-specific antigens (so-called ENA) and anti-dsDNA. At present, laboratories in some jurisdictions are in fact "obliged" to execute these second-level tests on demand, irrespective of the ANA test result, which leads to increased spending in the absence of any clinical and diagnostic justification [3]. This document proposes one ANA-reflex algorithm to confirm a diagnosis of an ANA-associated rheumatic disease (AARD) based exclusively on the laboratory result for laboratories without access to clinical information, and another based on both laboratory results and clinical information. These two algorithms then merge into a common pathway. ANA-reflex test procedure with titres ≥1:160 and typical patterns Table 1 indicates which reflex tests should be executed based on the pattern type observed on the HEp-2 cells. The evaluation of the ANA test pattern is fundamental to the execution of the second-level tests. The specific autoantibodies responsible for typical ANA patterns are clearly described in the literature [11][12][13][14][15], and for certain fluorescent patterns, such as homogeneous, speckled, fine grainy (Scl70-like), nucleolar, centromeric or speckled cytoplasmic, the identification of precise autoantibody markers is considered essential, while for others it is not deemed to be necessary. The second-level testing for antibodies to intracellular specific antigens involves a screening test for antibodies directed against the classical antigens (Ro60 and Ro52, La, Sm, RNP, Jo1, CENP-B, Scl70, and dsDNA). This selection is based on the fact that the antibodies directed against these antigens are more frequently associated with autoimmune rheumatic diseases, and the tests are readily available commercially. The use of the HEp-2 cell line for the execution of the ANA test allows for the identification of numerous other patterns defined as rare, cytoplasmic or in cellular replication phase that may, in selected cases, provide the clinician with useful information. In most cases, these patterns do not require further testing inasmuch as the antigenic target is neither known nor confirmable with specific tests.
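As a purely illustrative sketch of the laboratory-only branch described above, the lookup below maps an IIF result (titre plus pattern) to a candidate second-level panel. The titre cut-off follows the ≥1:160 rule in the text, but the pattern-to-test mapping is an assumed simplification for illustration, not a reproduction of Table 1.

```python
# Illustrative sketch of the laboratory-only ANA-reflex branch.
# The pattern-to-test mapping is an assumed simplification, not Table 1.

CONFIRMATORY_TESTS = {
    "homogeneous": ["anti-dsDNA"],
    "speckled": ["anti-ENA screen (Ro60, Ro52, La, Sm, RNP)"],
    "fine grainy (Scl70-like)": ["anti-Scl70"],
    "nucleolar": ["anti-PM/Scl", "anti-fibrillarin"],
    "centromeric": ["anti-CENP-B"],
    "speckled cytoplasmic": ["anti-Jo1"],
    "dense fine speckled": ["anti-DFS70"],
}

def ana_reflex(titre, pattern):
    """Return the second-level tests to add for a given IIF result."""
    if titre < 160:                              # >=1:160 branch from the text
        return []                                # low titres: no automatic reflex here
    return CONFIRMATORY_TESTS.get(pattern, [])   # rare patterns: no further testing

print(ana_reflex(320, "nucleolar"))   # ['anti-PM/Scl', 'anti-fibrillarin']
print(ana_reflex(40, "homogeneous"))  # []
```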
ANA-reflex test procedure with dense fine speckled-DFS70 pattern The DFS70 pattern deserves particular consideration, since recent evidence highlighted it as one of the most frequent findings in ANA-IIF testing. From a morphological perspective, the DFS70 pattern is well characterized: HEp-2 cell presents fairly course granular fluorescence of the nuclei sparing the nucleoli, while the chromatinic region of mitotic cells is intensely fluorescent, maintaining the typical granularity. This pattern should urge the pathologist to perform a confirmation test to identify anti-DFS70 specificity [16]. If isolated anti-DFS70 is confirmed in the absence of signs and symptoms suggestive of AARD, the pathologist should indicate in his/her report that the evidenced ANA pattern, even at very high titres, is generally not indicative of AARD. In the event that the execution of a specific anti-DFS70 test is not possible, it is recommended that a descriptive comment of the pattern is inserted on the report along with any possible diagnostic correlations. It goes without saying that whenever signs and symptoms of autoimmune rheumatic disease are present, anti-dsDNA and anti-intracellular specific antigen antibodies should be tested, even in the presence of an anti-DFS70 pattern. The anti-DFS70 pattern at high titre might in fact ''mask'' ANA positivity with a different pattern [16]. Subsequently, we suggest an approach to the further steps necessary to diagnose ANA-reflex test in subjects who were identified as symptomatic by the requesting clinician. This should not be considered if the laboratory does not have access to clinical information. Indications for ANA-reflex testing supported by clinical information In our opinion, it would be useful if the ANA-reflex test request was accompanied by clinical information, since some signs and symptoms could independently justify the execution of the second-level tests [17]. The exact nature of the signs and symptoms to associate to the ANA-reflex test request should be decided in conjunction with the clinical specialists (rheumatologists). Out of the classification criteria for the respective AARDs, we have identified the following clinical findings that could warrant the second-level tests even in the case of low-titre ANA positivity or ANA negativity: Raynaud's phenomenon, photosensitivity or malar rash, persistent oral or ocular dryness, leucopenia or lymphopenia, significant increase in the creatine phosphokinase (CPK) enzyme, persistent arthritis, thrombotic events, or recurrent miscarriages. Some of the aforementioned clinical findings are subjective, but nonetheless relevant in the suspicion of AARDs. In the presence of these conditions, the pathologist should react, as shown in Table 2. The laboratory is able to identify a much larger number of autoantibodies that can be found in various autoimmune pathologies with varying frequency. The identification methods, in general, are immunoblot or microarray that in some countries currently present such elevated costs as to be used only in selected cases. We believe therefore that such diagnostic investigations are justified only in a specialized setting. Consequently, it is not appropriate to integrate these investigations into the ANA-reflex algorithm. An additional consideration regards the capacity of a positive ANA test to predict uveitis in juvenile idiopathic arthritis (JIA) or to evidence autoantibodies that correlate with autoimmune hepatitis. 
Widespread use of the ANAreflex test for diagnosing such pathologies, however, is not advisable considering that only some of the markers for autoimmune hepatitis can be identified by ANA-IIF on HEp-2 cells. Nevertheless, in the presence of a pattern suggestive of an autoimmune hepatitis-associated marker, confirmation tests are indicated. Table 3 proposes a correct diagnostic procedure in the case of positivity for this group of autoantibodies. The ANA-reflex test report An interpretative comment on the ANA-reflex report is important, and should include an explanation of the results obtained as well as the possible diagnostic route undertaken [18]. For example, in the presence of an unexpected marker for autoimmune hepatitis, it should be indicated that the finding of such autoantibodies ''could be associated with autoimmune hepatitis.'' In the presence of anti-DFS70 antibodies (possibly confirmed with specific tests), it should be indicated that said marker ''does not generally correlate with ANA-associated autoimmune pathology.'' When the second-level tests are executed in the context of ANA negativity, the reason for following that particular diagnostic procedure should be explained. Administrative aspects of the ANA-reflex test The proposal of the SIPMeL study group wants merely to be a referral model in terms of type and modality of the second-level test execution. From our group's proposal, it is evident that any patient with ANA-reflex could have his own more or less complex course, in some cases articulated with more second-level tests. This, peculiarity, should not translate to difficulty of the bureaucratic or administrative type: in fact, it is not conceivable that the ANA-reflex test requires a tariff calculation for each request. In Italy, the cost of reflex tests is predetermined on the basis of an approximate calculation of the number and type of further tests that could be executed. This way, at the moment of administrative procedure, the patient with the ANA-reflex test request is charged a flat rate, which will cover all eventual further tests. That allows the elimination of complex administrative procedures associated with additional requests or payments. This model may, of course, be applied differently in other countries, according to local laws or regulations [19]. One final consideration, befitting the context in which laboratories operate, is that if, on the one hand, the adoption of the ANA-reflex request modality is aimed at improving the handling of resources, it is also and above all a cultural application able to provide rapid complete diagnostic information with important repercussions on subsequent clinical decision. Compliance with ethical standards Conflict of interest The authors exclude any conflict of interest that could influence the paper. Ethical approval The paper does not involve both humans and animals. Informed consent Informed consent was not necessary. 
Table 2 Clinical findings and corresponding second-level tests:
- Raynaud's phenomenon and/or photosensitivity (or malar rash) and/or leucopenia and/or arthritis: antibodies to dsDNA and to intracellular specific antigens (anti-ENA)
- Raynaud's phenomenon and ANA positivity with nucleolar pattern at elevated titres (≥1:320): anti-PM/Scl, anti-fibrillarin, anti-RNA polymerase III and Th/To
- Significantly increased CPK: antibodies to intracellular specific antigens (anti-ENA) and myositis-associated antibodies
- ANA positivity (even at a titre of 1:80) and persistent arthritis: anti-citrullinated peptide antibodies and rheumatoid factor
- Positive ANA and/or SLE-associated specific antibodies (dsDNA, Sm, RNP, Ro52, and 60Kd), with a clinical history of thrombotic events and/or polyabortion: anti-phospholipid antibodies (anti-cardiolipin, anti-beta2 glycoprotein I, lupus anticoagulant)
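The clinically driven branch can be sketched in the same spirit; the finding-to-panel pairs follow the list above, while the data layout, the shorthand rule keys and the function name are illustrative choices rather than the Study Group's specification.

```python
# Sketch of the clinical-information-supported branch: selected findings justify
# second-level tests even with low-titre or negative ANA.  Rule keys are
# shorthand labels for the findings listed above; layout is illustrative.

CLINICAL_RULES = [
    ({"raynaud", "photosensitivity", "malar rash", "leucopenia", "arthritis"},
     ["anti-dsDNA", "anti-ENA"]),
    ({"raynaud + nucleolar ANA >=1:320"},
     ["anti-PM/Scl", "anti-fibrillarin", "anti-RNA polymerase III", "anti-Th/To"]),
    ({"increased CPK"},
     ["anti-ENA", "myositis-associated antibodies"]),
    ({"ANA >=1:80 + persistent arthritis"},
     ["anti-CCP", "rheumatoid factor"]),
    ({"thrombotic events or recurrent miscarriage with positive ANA/SLE antibodies"},
     ["anti-cardiolipin", "anti-beta2 glycoprotein I", "lupus anticoagulant"]),
]

def second_level_panel(findings):
    """Collect every test whose rule is triggered by at least one reported finding."""
    panel = []
    for triggers, tests in CLINICAL_RULES:
        if triggers & set(findings):
            panel.extend(t for t in tests if t not in panel)
    return panel

print(second_level_panel({"increased CPK", "raynaud"}))
```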
2023-01-23T14:54:07.946Z
2016-07-16T00:00:00.000
{ "year": 2016, "sha1": "0c281a7abcd7a3a540dead767646ef76b6627397", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1007/s13317-016-0080-3", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "0c281a7abcd7a3a540dead767646ef76b6627397", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
18664151
pes2o/s2orc
v3-fos-license
Core instability models of giant planet accretion II: forming planetary systems We develop a simple model for computing planetary formation based on the core instability model for the gas accretion and the oligarchic growth regime for the accretion of the solid core. In this model several planets can form simultaneously in the disc, a fact that has important implications specially for the changes in the dynamic of the planetesimals and the growth of the cores since we consider the collision between them as a source of potential growth. The type I and II migration of the embryos and the migration of the planetesimals due to the interaction with the disc of gas are also taken into account. With this model we consider different initial conditions to generate a variety of planetary systems and analyse them statistically. We explore the effects of using different type I migration rates on the final number of planets formed per planetary system such as on the distribution of masses and semimajor axis of extrasolar planets, where we also analyse the implications of considering different gas accretion rates. A particularly interesting result is the generation of a larger population of habitable planets when the gas accretion rate and type I migration are slower. INTRODUCTION Planetary astronomy is a young science, and until recently, was essentially devoted to the study of planetary bodies in our own Solar System. The extrasolar planets found since 1995 have vastly expanded our database by increasing the number of known planets by more than 200. Although the distribution of masses and semi-major axis of observed extrasolar planets is highly biased towards those planets that are detectable using Doppler radial velocity technique, the increase in precision and continuity of the surveys has given the possibility of unveiling a large variety of planets and reformulate the theories of planetary formation. As the number of detections grows, statistical studies of the properties of exoplanets and their host stars can be conducted to unravel some of the processes leading to the formation of planetary systems. In this frame several models of planetary formation have been presented in the last years. Chambers (2006) and ⋆ E-mail: ymiguel@fcaglp.unlp.edu.ar † Member of the Carrera del Investigador Científico. Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET).Email: abrunini@fcaglp.unlp.edu.ar Thommes et al. (2003) have developed semi analytical models based on the oligarchic growth regime, that retain much of the simplicity of the analytic models, but incorporate more physics from the detailed ones. Ida & Lin (2004a,b, 2008 have studied the mass and semimajor axis distribution of extrasolar planets through a very simple model but considering all the relevant physics involved in the process of planetary formation. We begin our studies in a previous paper (Miguel & Brunini 2008), where we developed a very simple model based on the core instability model. The accretion of solids was based in the regime where the largest embryos dominate the dynamics of the planetesimal disc: the oligarchic growth regime (Kokubo & Ida 1998;Ida & Makino 1993). When the cores are large enough to have an associate envelope which can not be sustained by the hydrostatic equilibrium any more, the gas accretion process begins. In our previous work we explored the effects of different gas accretion rates on the final mass distribution of extrasolar planets, and found a strong dependence. 
But our simple model allowed us to form only one core per disc and did not allow us to consider the dynamical evolution of the cores or the planetesimal disc. In this paper, we have improved the model introduced in Paper I considering other important effects. The mainly improvements are the formation and evolution of several cores simultaneously in the protoplanetary nebula, a fact that also influences the dynamic of the planetesimal disc. The embryos can grow by the accretion of planetesimals or by accreting other embryos, in this work we consider the collision among them as a source of growth. We also incorporate the orbital evolution of the protoplanets due to its tidal interaction with the gas in the surrounding protoplanetary nebula, considering two regimes of planetary migration: type I and II regimes, which result mostly in a radial motion towards the host star. Type I migration involves low mass protoplanets who are rapidly moved towards the star (Goldreich & Tremaine 1980;Ward 1986) and type II is the regime of migration for those protoplanets whose mass is large enough to allow them to open up a gap in its orbit and migrate on a rate controlled by the disc viscous time-scale (Lin & Papaloizou 1985). This processes could have important consequences on the final mass and semimajor axis distribution, such as on the final number of planets formed per protoplanetary disc. The disc of planetesimals also interacts with the nebular gas. This gas drag effect causes the migration of planetesimals towards the host star before they become large enough to decouple from the disc gas. This effect, which was included in our model, affects directly the distribution of solids in the disc and the growth of the cores. We show that a variety of planetary systems can be constructed using these principles, varying the initial conditions, and that the mass and semimajor distribution of extrasolar planets is strongly dependent on the gas accretion model and type I migration rate considered, being larger the population of habitable planets when the rate of gas accretion and type I migration are slower. DESCRIPTION OF THE MODEL In our previous work (Miguel & Brunini 2008), Paper I, we developed a simple model for computing planetary formation based in a work of Ida & Lin (2004a)(hereafter IL04), but incorporating the accretion rates given by Fortier et al. (2007). This lead us to a very different results regarding the final mass distribution of extrasolar planets, which was found strongly dependent on the gas accretion model considered. With the aim of improve the model introduced in Paper I, we have extended it by considering other important effects. In this section we explain briefly our previous model and show the improvements made. The previous model For the sake of completeness, we will explain our previous model. Although very simple, it incorporates all the important aspects of planetary formation. We considered one initial core per protoplanetary disc, with an initial mass of 10 −5 M⊕. The initial disc was based on the minimum mass solar nebula of Hayashi (1981)(for more details see section 2.2), and it was defined between 0.01 and 100au. The population of the nebular mass scaling parameter, f d , which states the solid mass in the disc in terms of the minimum mass solar nebula model, was distributed with a Gaussian distribution in terms of log10(f d ) with dispersion 1 and centred at 0.25, in order to characterise different discs. 
We supposed that fg, which is a scaling parameter for the gas disc, was equal to f d , this correspond to discs with solar metallicity. The solid and gaseous disc were not time-invariant. The gaseous disc changed globally, decaying exponentially with a characteristic time-scale of 4x10 6 years, in accordance to current estimates of disc lifetimes around young solar type stars (Beckwith & Sargent 1996) and the solid disc changed locally, suffering the depletion of planetesimals produced by the effect of core's accretion. The 10 −5 M⊕ initial core was located with equal probability per interval of log(a), and it grew with the following solids accretion rate based on the particle-in-a-boxapproximation (Safronov 1969), where R is the physical radius of the core, ΩK is the Kepler frequency, Mt is the total mass of the core, Σ d is the solids disc surface density and σ is the relative velocity between the embryo and the disc of planetesimals. The factor 10.53 was introduced for considering the F factor introduced by Greenzweig & Lissauer (1992), and the approximations made for the eccentricity e, the inclination i and the disc scale of high h(a) in the high-σ equilibrium regime, 2i ≃ e and h(a) ≃ ai. Finally, a factor of 4 was introduced in the above equation, which is an approximate value taken in order to fit the solid accretion rate used by Fortier et al. (2007), which includes the evolution of the planetesimals r.m.s. e and i and the drag effect caused by the gaseous envelope on the incoming planetesimals. When the gravitational perturbation due to the protoplanets is balanced by dissipation due to the gas drag, the planetesimals r.m.s. eccentricity attains an equilibrium value, which was obtained by Thommes et al. (2003). This effect was considered in the simulations performed by Fortier et al. (2007), where the solid accretion rate was prescribed as that obtained by Thommes et al. (2003), and where the enhancement of the protoplanet's cross section due to the drag effect caused by the protoplanet's gaseous envelope on the entering planetesimals was also considered. We compared the core accretion rates found without considering these effects with all the cases considered by Fortier et al. (2007). The results showed that the core accretion rate is ∼ 4 times larger when the evolution of the e and i are considered (Miguel & Brunini 2008). When a core of mass Mc reached a certain critical mass, it started the gas accretion on to the core. The critical mass at which the gas accretion process began was obtained analytically by Stevenson (1982) and it was improved by Ikoma et al. (2000) through numerical simulations. We considered a simplified version, The gas accretion rate was obtained by fitting the results of the self-consistent code developed by Fortier et al. where Mg is the mass of the surrounding envelope and τg is its characteristic growth time, This process stopped when the protoplanet consumed all the gas available on its feeding zone, or what is the same, when it reached its isolated gas mass, given by, where Σg is the gas surface density and brH is the typical distance between two adjacent embryos, which according to Kokubo & Ida (1998) is ∼ 10rH , being rH the Hill radius. 
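As a small illustration of the envelope growth just described, the sketch below integrates a gas accretion rate with a characteristic growth time τ_g and caps the envelope at the isolation gas mass. The dM_g/dt = M_g/τ_g form is an assumed reading of "characteristic growth time" (the fitted rate of Fortier et al. is not reproduced in the text), and the numerical values are placeholders.

```python
# Sketch of envelope growth with a characteristic growth time tau_g, capped at
# the isolation gas mass.  The dM/dt = M/tau form and all values are assumed
# placeholders, not the paper's fitted accretion rate.

def grow_envelope(m_gas0, tau_g_yr, m_iso, dt_yr, t_end_yr):
    """Integrate dM_g/dt = M_g / tau_g with a simple Euler step until t_end or the cap."""
    m_gas, t = m_gas0, 0.0
    while t < t_end_yr and m_gas < m_iso:
        m_gas += m_gas / tau_g_yr * dt_yr
        t += dt_yr
    return min(m_gas, m_iso)

# e.g. a 0.1 M_Earth seed envelope, tau_g = 1e5 yr, cap at 100 M_Earth of gas
print(round(grow_envelope(0.1, 1.0e5, 100.0, 100.0, 1.0e6), 1))
```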
The protoplanetary nebula and the initial cores As in our previous paper (Miguel & Brunini 2008), the structure of the protoplanetary nebula is based on the minimum mass solar nebula (MMSN) of Hayashi (1981), in which the surface density of solids is defined with a factor η_ice that is 1 inside the ice condensation radius and 4 outside it, expressing the effect of water ice formation across the ice boundary. The "snow line" is located at a_ice = 2.7 (M⋆/M⊙)^2 au from the central star of mass M⋆. Similarly, the volume density of gas follows an exponential profile, as explained by Thommes et al. (2003), ρ(a, z) = ρ_g,0(a) exp(−z²/h(a)²) g cm^-3, where ρ_g,0 is the mid-plane density. Unlike the previous calculations, we assume that the surface density of gas everywhere varies with time as exp(−t/τ_disc), with τ_disc the gas depletion time-scale. We consider a disc between R_in and 30 au, where the inner disc radius is estimated as explained by Vinkovic (2006). In that estimate, Ψ is a factor of ≈2 which depends on the disc structure and radiative transfer, the dust sublimation temperature T_sub is taken as 1500 K, and L⋆ and L⊙ are the stellar and solar luminosities, respectively. In this work we consider stars with masses between 0.7 and 1.4 M⊙, and the stellar luminosity for this range of masses is prescribed as a function of the stellar mass. In our previous work we considered one core per disc; here we form several cores in the same disc. Initially we start the simulation with a number N_planets,0 of cores through the disc, separated by 10 r_H. This could have important consequences on the final distribution of masses and semimajor axes of extrasolar planets, especially for the changes in the dynamics of the planetesimals, and also considering the collisions between embryos as a very important source of growth, as was shown by Brunini & Benvenuto (2008). In the core instability model the first step in planetary formation is the coagulation of dust particles. When particles of many meters in size have been formed, the gravitational forces become more important than gas drag, and the larger planetesimals grow faster than the smaller ones. This stage of increasingly rapid growth is the runaway growth (Greenberg et al. 1978; Kokubo & Ida 1996). In this stage, the planetesimal dynamics are dominated by the interaction with other planetesimals, until the larger ones grow so much that they start to dominate the velocity distribution of the planetesimals (Kokubo & Ida 1998). This is the oligarchic growth stage, and it is the stage where the embryos form. In this high-σ regime, collisions between planetesimals tend to be destructive, leaving fragments of planetesimals as a result of the impacts. These fragments are small enough to be rapidly damped by the gas drag and, as a consequence, gravitational focusing favours the embryo, increasing its accretion rate (Chambers 2006). This could be an important factor in the growth of the embryos, but it is not taken into account in this work. Ida & Makino (1993) found, through N-body simulations, the minimum mass necessary for starting the oligarchic growth stage, which depends on m, the effective planetesimal mass, found by adopting a cumulative power-law mass distribution of planetesimals with index p between 1.8 and 3. We consider p = 2.5, consistent with the results of Kokubo & Ida (1998) for the spectrum of masses of a population that has relaxed to isolated runaway bodies.
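Because the display equations of this subsection were lost in the source, the sketch below only illustrates the general shape of such a nebula prescription: a power-law solid surface density with the η_ice jump, an exponentially decaying gas disc, and a dust-sublimation inner radius. The normalisations (7.1 and 1700 g cm^-2 at 1 au, slope -3/2) are the commonly quoted Hayashi (1981) values and the sublimation formula is a textbook estimate; both are assumptions, not the paper's actual equations. The planetesimal mass distribution described just above is taken up again immediately after this sketch.

```python
import math

# Illustrative MMSN-style disc.  Normalisations and the inner-radius formula
# are assumed stand-ins for the paper's (missing) equations.

L_SUN = 3.828e26   # W
SB = 5.670e-8      # W m^-2 K^-4, Stefan-Boltzmann constant
AU = 1.496e11      # m

def a_ice(mstar_msun):
    """Snow-line location [au]: a_ice = 2.7 (M*/Msun)^2."""
    return 2.7 * mstar_msun**2

def sigma_solids(a_au, f_d, mstar_msun=1.0):
    """Solid surface density [g cm^-2] with the eta_ice jump at the snow line."""
    eta_ice = 1.0 if a_au < a_ice(mstar_msun) else 4.0
    return 7.1 * f_d * eta_ice * a_au**-1.5

def sigma_gas(a_au, f_g, t_yr, tau_disc_yr):
    """Gas surface density [g cm^-2], decaying exponentially on tau_disc."""
    return 1700.0 * f_g * a_au**-1.5 * math.exp(-t_yr / tau_disc_yr)

def r_inner_au(lstar_lsun, psi=2.0, t_sub=1500.0):
    """Dust-sublimation inner radius: R_in ~ sqrt(psi * L* / (16 pi sigma T_sub^4))."""
    r_m = math.sqrt(psi * lstar_lsun * L_SUN / (16.0 * math.pi * SB * t_sub**4))
    return r_m / AU

print(sigma_solids(1.0, f_d=1.0), sigma_solids(5.0, f_d=1.0))  # inside vs outside snow line
print(sigma_gas(5.0, f_g=1.0, t_yr=1e6, tau_disc_yr=4e6))
print(round(r_inner_au(1.0), 3))                               # ~0.05 au for a solar-type star
```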
The typical mass we use is then found from this distribution (equation 14), where we suppose that m_min = 6.3×10^12 g and m_max = 6.3×10^21 g are the minimum and maximum planetesimal masses, equivalent to 0.1-100 km radii (e.g. the same values adopted by Brunini & Benvenuto (2008)); here we note that the disc mass is contained in the small bodies. The typical planetesimal mass found with equation (14) is the one used, and it is constant in all the simulations performed. One initial core is located at a = R_in; the rest of the cores are separated from each other by 10 r_H until the end of the disc is reached. Their initial masses are given by equation (12), where we see that different locations in the disc lead to different initial oligarchic masses. The growth of the cores The solid accretion rate for a core in the oligarchic growth regime, considering the particle-in-a-box approximation (Safronov 1969), is given by equation (1). Thommes et al. (2003) found an expression considering the evolution of the planetesimal rms e and i. They found that when the embryos' gravitational perturbations are balanced by dissipation due to the gas drag, the planetesimal rms e attains an equilibrium value, given by equation (15), where C_D is a dimensionless drag coefficient which is of order unity, and ρ_m is the planetesimal bulk density, which is ≃ 1.5 g cm^-3. Equation (1) can then be rewritten using this expression, yielding equation (16). The main difference between equation (16) and the one considered in Paper I is that here we take into account the evolution of the rms e and i through equation (15), whereas in Paper I we approximated this evolution considering the cases analysed by Fortier et al. (2007). Equilibrium rms velocities are a good approximation once the embryos reach roughly one Earth mass, a state that is quickly attained in the regions a < 10 au (Chambers 2006). Nevertheless, our model is intended to remain as simple as possible, so that we can perform statistics on a large number of simulated results. With this in mind, a more elaborate velocity evolution model is for the moment beyond our possibilities. The growth of the cores terminates when they consume all the planetesimals in their feeding zone (Δa_c = b r_H), or equivalently when the solid surface density is Σ_d = 0 (see section 2.4). But growth can also stop when the density of planetesimals is substantially depleted by ejection (Thommes et al. 2003; Ida & Lin 2004a), that is, when the ratio of collision to ejection probabilities, which is set by v_e/v_s, becomes small, where v_e = (2GM⋆/a)^1/2 is the escape velocity from the primary and v_s = (GM_t/R)^1/2 is the characteristic surface velocity of the embryo. Once the core becomes massive enough to retain a gas envelope, the atmospheric gas drag on incoming planetesimals increases the collision cross section of the protoplanet. Considering the model for a purely radiative atmosphere (Stevenson 1982; Inaba & Ikoma 2003), Chambers (2006) found an approximate expression for the enhanced collision radius (R_c) of the embryo, in which c is the velocity of light, P is the orbital period of the protoplanet, κ is the opacity of the atmosphere, which is taken as ≃ 4 cm^2 g^-1, r_m is the planetesimal's typical radius, and the equilibrium eccentricity is taken as ≈ 2 in this expression. Once the core reaches the critical mass given by equation (2), the gas accretion process begins. The model for the gas accretion onto the core is the same as used in Paper I, and was already explained in section 2.1.
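The seeding of the initial embryos described earlier in this subsection can be sketched as follows: cores are laid down from the inner disc edge outwards, each separated from the previous one by 10 Hill radii. The oligarchic-onset mass function below is a placeholder for the paper's equation (12), with an assumed scaling used purely for illustration.

```python
# Sketch of the initial embryo placement: a core every 10 Hill radii from the
# inner edge to the outer edge.  `oligarchic_onset_mass` is a placeholder for
# equation (12); its scaling here is assumed for illustration only.

MSUN_G = 1.989e33
MEARTH_G = 5.972e27

def hill_radius(a_au, m_g, mstar_g=MSUN_G):
    """Hill radius [au] of a body of mass m_g at semimajor axis a_au."""
    return a_au * (m_g / (3.0 * mstar_g)) ** (1.0 / 3.0)

def oligarchic_onset_mass(a_au):
    """Placeholder for equation (12): initial core mass [g] at a_au."""
    return 1.0e-2 * MEARTH_G * a_au**0.5

def seed_embryos(r_in_au, r_out_au, spacing=10.0):
    """Return (a, mass) pairs for cores separated by `spacing` Hill radii."""
    embryos, a = [], r_in_au
    while a <= r_out_au:
        m = oligarchic_onset_mass(a)
        embryos.append((a, m))
        a += spacing * hill_radius(a, m)   # next core 10 r_H further out
    return embryos

cores = seed_embryos(0.05, 30.0)
print(len(cores), "initial cores; innermost:", cores[0])
```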
We also perform some simulations considering the gas accretion rate given by IL04, which differs mainly in the exponent, taken as 3 by them. It leads as to a smaller gas accretion rate, which has an enormous influence on the distribution of masses, as was explained on Paper I. Until this point, we have assumed that embryos grow by accreting gas and planetesimals only. However, embryos can also grow by accreting other embryos. When two protoplanets are too close to each other, their mutual gravitational perturbation induce high eccentricities, which enable their orbits to cross. This process may lead to close gravitational encounters and violent collisions between the embryos. We suppose that collision between protoplanets will occur if their orbital spacing is less than 3.5 Hill radius. Considering that the evolution of planetary atmosphere after a merger is a complex and poorly understood process, we analyse two different and extreme scenarios. On the one hand we assume that all embryo-embryo collisions will lead to coalescence to form a single body where the result is simply the sum of the masses (gas and solids) of both embryos, no matter the composition of the protoplanets. On the other hand impacts so energetic can cause a net loss of atmosphere, because the kinetic energy of the impact is partly transferred to the associate envelope accelerating its molecules to velocities greater than the escape velocity from the protoplanet. So we want to explore this possibility and in the other scenario, if both colliding planets have an associate envelope, after the impact their atmospheres would be lost and the result would be a merger of their cores. The mass and semimajor axis distribution were similar in both cases, for this reason we stayed with scenario 1) to not add greater complexity to the model. Evolution of the solids surface density Protoplanetary discs are not quite steady, so the surface density of solids is not a time-invariant. It will change over time due to different effects, one of them is the depletion of planetesimals produced by the effect of cores' accretion (this effect was also considered in Paper I). If, Σ d,0 is the initial surface density, we have to remove what already ate the embryo, so the evolution of Σ d is given by, On the other hand, the disc of planetesimals also interacts with the nebular gas. This gas drag effect will cause a radial motion of planetesimals before they become large enough to decouple from the disc gas. The orbital decay occur on a rate (Adachi et al. 1976; Thommes et al. 2003 where Tgas is the characteristic time-scale of the gas drag and η is the fractional deviation of the gas velocity vgas from Keplerian velocity vK due to the pressure gradient, with cs being the sound speed and cs/vK ≃ h(a)/a. The characteristic time-scale of the gas drag is given by, Finally, applying continuity to the planetesimal disc, the gas drag effect acting on the planetesimals leads to a rate of change of the surface density given by, Equation (20) is added to the equation (24) as a sink term, so we obtain the total change of the solid surface density in the disc. Model for planetary migration An important contribution to our understanding about planetary formation and evolution was the discovery of the firsts Jupiter-like planets orbiting very close to its central star (hot-Jupiters). 
The difficulties associated with the formation of these objects are most pronounced the closer they are to their host star, and they have awakened an interest in theories for the migration of protoplanetary embryos due to gravitational interaction with the disc. In this section we will consider two regimes of planetary migration: the type I and II regimes, which result mostly in a radial motion towards the star and concern planets of low and high mass, respectively. type I migration Type I migration acts on low-mass protoplanets, that is, on those embryos that are unable to open a gap in their orbits. Our model for this regime of planetary migration is similar to that of Ida & Lin (2008), who describe in detail the assumptions made. This process is caused by an imbalance in the tidal torques from the inner and outer disc, leading to angular momentum exchange and making the planet drift relative to the disc material. The time-scales involved in this process are smaller than the disc lifetimes and were calculated by Tanaka et al. (2002) through 3D linear simulations, with q = 1-1.5; we use 1.5. Since such a short time-scale for inward migration seems very unlikely, recent works dealing with planet-disc interactions have proposed mechanisms that could slow down or stop type I migration. Several mechanisms have been proposed, like the study of the magneto-hydrodynamic turbulence in the disc (Nelson & Papaloizou 2004), the effects of a disc containing a toroidal magnetic field (Terquem 2003), tridimensional effects, variations in the temperature gradient and surface density in the disc, the effect of including a proper energy balance in the interaction of a low-mass planet with a protoplanetary disc (Paardekooper & Mellema 2006), and so on. Halting or slowing down the inward migration requires careful computation of tidal effects between the core and the star, which is much beyond the capabilities of our model. For this reason a factor of 1/C_migI is introduced in equation (25), to consider non-linear effects without introducing a major degree of complexity into our model. If we want to slow down migration rates, C_migI must be less than 1. We perform simulations with C_migI = 1, 0.1, 0.01. We also consider that the migration mechanism stops when the core reaches the inner edge of the disc given by equation (10). type II migration According to the observed range of orbital semimajor axes, many giant planets are found in very close proximity to their parent stars. At this distance the equilibrium temperature is above 170 K, the temperature at which ice (and the icy cores of giant planets) exists. Actually, giant planets would form, preferably, at or outside of the ice line; this implies that they could have suffered great orbital changes after formation. The model of giant planet type II migration presented here is a very simple one, and is essentially the same as used by Ida & Lin (2004a), so we will describe it briefly. The process starts when the giant planet forms a gap in the disc. According to Lin & Papaloizou (1993), a necessary condition to open a gap is r_H ≳ h. This condition leads to a mass threshold: when the mass of the protoplanet is larger than this value, a gap is opened in the disc. When this happens, there are tidal forces acting between the protoplanet and the inner and outer edges of the gap, whose balance (or imbalance) depends on their difference in density. The imbalance can result in a migration of the protoplanet's orbit (Lin et al. 1996).
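The mass threshold implied by the gap-opening condition can be made explicit: setting r_H = a (M/3M⋆)^(1/3) ≥ h and solving for M gives M ≥ 3 M⋆ (h/a)^3. The sketch below evaluates this; the aspect ratios used in the example are assumed illustrative values, and the paper's own threshold expression is not reproduced in the text above.

```python
# Gap-opening mass from the condition r_H >= h (Lin & Papaloizou 1993):
# a * (M/3M*)^(1/3) >= h  =>  M >= 3 M* (h/a)^3.
# The aspect ratios below are illustrative values, not taken from the paper.

MSUN_IN_MEARTH = 332946.0   # Earth masses per solar mass

def gap_opening_mass_mearth(h_over_a, mstar_msun=1.0):
    """Minimum protoplanet mass [M_Earth] able to open a gap (thermal criterion)."""
    return 3.0 * mstar_msun * MSUN_IN_MEARTH * h_over_a**3

for h_over_a in (0.03, 0.05, 0.07):
    print(h_over_a, round(gap_opening_mass_mearth(h_over_a), 1))
```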
The resulting migration rate is given by equation (28), in which α is a dimensionless parameter which characterises the viscosity and is taken as 10^-3 throughout this work. The protoplanet will migrate towards the star if a < R_m and away from it if a > R_m, where R_m is the radius of maximum viscous couple. This radius depends on the distribution of gas surface density, so it changes with time because Σ_g is not constant. From the conservation of angular momentum, and considering the total disc mass as a constant, the evolution of R_m can be found. Various stopping mechanisms have been proposed, but none of them seems to be effective in halting migration; for this reason we stop migration arbitrarily when the planet reaches the inner radius of the disc, which is given by equation (10). When a giant planet is migrating inwards, it will perturb the cores located in its path towards the host star. This encounter can cause the ejection or the accretion of the core by the giant planet, or neither, which means that the core survives the giant planet's passage. The effect of migrating giant planets on the formation of terrestrial planets was studied by Fogg & Nelson (2007), who present results of their N-body simulations of giant planet migration through an inner protoplanet/planetesimal disc, where the migration of the low-mass embryos (type I) was also considered. At the end of the simulations, they found that ∼71% of the initial solid disc survives the migration of the giant planet, a very small percentage is ejected into a hyperbolic orbit (less than 0.2%), and the rest is accreted. We introduce these results into our simulations with a very simplified model: in a close encounter between an inward-migrating giant and another core, the latter has a 29% probability of being accreted by the giant and a 71% probability of surviving the giant's passage with its orbit unchanged. We note that this is a very simple model; in reality this fraction will change depending on the distance of the embryos from the star. This effect does not significantly change the results, since we have more than one migrating giant planet in the disc. The passage of many giant planets will reduce the survival probability of the cores, until they are finally accreted. RESULTS In this section we first analyse the characteristics of the planetary systems formed, and then discuss the effects of different prescriptions for the gas accretion rate and different retardation constants for type I migration on the mass and semimajor axis distribution of extrasolar planets. Characteristics of the planetary systems formed In each simulation we generate 1000 discs. Each system evolves for 10^7 years in a metal-rich disc, where the time-scale for the depletion of the gas has a uniform log distribution between 10^6 and 10^7 years, and the stellar mass has a uniform distribution in log scale in the range 0.7-1.4 M⊙. The initial number of planets per disc, N_planets,0, depends on the initial cores' masses (equation 12), the inner disc radius (equation 10) and the mass of the host star. The final number of planets, N_planets, is a result and reflects the evolution of the planetary system. Each disc of gas is defined by f_g,0, which has a log-normal distribution with a dispersion of 1 and centred on f_g,0 = 1, and the disc of solids is taken as f_d = 10^0.1 f_g,0 (Ida & Lin 2004b). This is slightly different from what we had considered before, because in Paper I we used discs with solar metallicity and here we suppose more metallic discs.
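The Monte Carlo initial conditions just listed can be sampled as in the sketch below. The distributions follow the text (log-normal f_g,0 centred on 1, f_d = 10^0.1 f_g,0, log-uniform gas depletion time between 10^6 and 10^7 yr, log-uniform stellar mass between 0.7 and 1.4 M⊙); reading the "dispersion of 1" as a 1 dex standard deviation of log10(f_g,0) is an assumption.

```python
import math
import random

# Sketch of the per-disc Monte Carlo initial conditions quoted above.
# "Dispersion of 1" is read as a 1-dex standard deviation in log10(f_g,0).

def sample_disc(rng):
    fg0 = 10.0 ** rng.gauss(0.0, 1.0)            # log-normal f_g,0 centred on 1
    fd = 10.0 ** 0.1 * fg0                       # metal-rich discs: f_d = 10^0.1 f_g,0
    tau_disc = 10.0 ** rng.uniform(6.0, 7.0)     # gas depletion time [yr], log-uniform
    mstar = 10.0 ** rng.uniform(math.log10(0.7), math.log10(1.4))  # [Msun], log-uniform
    return {"fg0": fg0, "fd": fd, "tau_disc_yr": tau_disc, "mstar_msun": mstar}

rng = random.Random(42)
discs = [sample_disc(rng) for _ in range(1000)]  # the paper generates 1000 discs per run
print(discs[0])
```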
The final number of planets per disc also depends on f_g,0 and f_d, as seen in Fig. 1, where f_d is plotted against N_planets, considering different values of C_migI. Figure 1(a) was obtained with C_migI = 1, Fig. 1(b) considers C_migI = 0.1, the value taken in Fig. 1(c) was 0.01, and type I migration was not considered in Fig. 1(d). As seen in the figures, planetary systems with a large number of planets (N_planets > 100) correspond to small values of f_d (< 3), and this fact does not change when different rates of type I migration are considered. These discs have low mass and, as a consequence, the initial cores will not grow too much. When the mass of the cores is less than ≈ 0.1 M⊕, they are not much affected by migration (which would favour merging) and therefore the discs remain with many small planets, no matter the rate of migration considered. On the other hand, the values of f_d required to form planetary systems with few planets change when different migration rates are considered. When C_migI = 1, discs with f_d > 0.5 can form planetary systems with few planets. This can be explained by the rapid type I migration rate, which moves the biggest cores inwards, accreting the other cores in their path and forming planetary systems with few planets near the host star. Small values of f_d imply less massive discs, which will form planetary systems with few (terrestrial) planets. As migration slows down, the range of values of f_d required to obtain planetary systems with few planets becomes smaller, as seen in Figs. 1(b), 1(c) and 1(d). In the last figure, when C_migI = 0, we note that those planetary systems with few planets correspond to f_d > 10, which corresponds to high-mass discs that generate giant planets which accrete all the other cores in their path towards the central star. We also analyse the kind of planets formed per disc. We consider that terrestrial planets are those with M_t < 7 M⊕, giants with a low percentage of gas are taken to be the planets with M_t > 7 M⊕ and a gas mass percentage of less than 15%, and the others, which have a larger percentage of gas, are gas giants. The histogram presented in Fig. 2 shows the number of terrestrial planets (dotted line with filled circles), giants with a low percentage of gas (dashed line with filled triangles) and gas giants (solid line with squares) formed per planetary system. Figure 2(a) is the one obtained when C_migI = 1. We can see that planetary systems with few planets have mostly terrestrial ones, and there are few with giant planets. As we explained, for this rapid migration rate, the range of discs which form planetary systems with low N_planets is those with 0.5 < f_d < 30. In this range most of the discs have low mass, and cores have less time to grow because they are rapidly moved to the central star. As a result, most of the planetary systems with few planets will form only terrestrial planets. On the other hand, those discs with a large value of f_d are able to form planetary systems with giant planets, but these are the fewest.
Figure 1. The final number of planets plotted against the disc mass scaling parameter f_d. Fig. 1(a) shows the results when C_migI = 1 is considered, Figs. 1(b) and 1(c) present the results when C_migI is equal to 0.1 and 0.01 respectively, and in Fig. 1(d) C_migI is 0, which means that type I migration is not considered. As seen in the figures, planetary systems with a large number of planets are those with small values of f_d, and a large disc mass is necessary to form planetary systems with few planets, but the range depends on the type I migration rate considered.
Figure 2. In Fig. 2(a) C_migI = 1, figure 2(b) was obtained when C_migI = 0.1, and in figure 2(c) the value of C_migI is 0.01. The number of planetary systems with few terrestrial planets is larger when C_migI is 1, and it decreases when the type I migration rate slows down. On the contrary, the number of planetary systems with few gas giants is smaller with C_migI = 1, and it increases with small values of C_migI.
We also note a marked absence of planetary systems with an intermediate number of planets (N_planets > 30); this is because of the rapid migration rate, which quickly moves the cores to the inner disc limit, so they accrete the other cores in their path, forming planetary systems with few planets. When C_migI is smaller (Fig. 2(b) with C_migI = 0.1 and 2(c) with C_migI = 0.01), the population of planetary systems with few gas giant planets increases, meaning that the formation of gas giants is favoured with slower type I migration rates. On the other hand, the number of giant planets with a low percentage of gas remains low; this population is not affected by the rate of migration. We also note that the number of planetary systems with an intermediate number of planets increases, so when migration slows down the final amount of planets per disc tends to follow a uniform distribution. Mass and semimajor axis distribution To investigate the mass and semimajor axis of extrasolar planets we perform a series of numerical simulations considering different prescriptions for the gas accretion rate and different retardation constants for type I migration. Results are displayed in Figs. 3(a,b,c,d) and 4(a,b,c,d). The first ones were found by considering the gas accretion rates obtained by fitting the results of Fortier et al. (2007), and the second ones were obtained using the gas accretion rates given by IL04. In Figs. 3(a) and 4(a) the factor considered for delaying the type I migration rate is C_migI = 1, in Figs. 3(b) and 4(b) C_migI = 0.1, Figs. 3(c) and 4(c) are those obtained with C_migI = 0.01, and finally Figs. 3(d) and 4(d) were found without considering the effects of type I migration (C_migI = 0). As Figs. 3 and 4 show, the planetary distribution of mass and semimajor axis is strongly dependent on the gas accretion model considered (Paper I), as well as on the type I migration rate used. When C_migI = 1 most of the planets migrate to the inner disc radius. The main difference between the distribution found with our model and the one obtained with IL04's gas accretion rate (Figs. 3(a) and 4(a), respectively) is the population of planets with masses between 1 and 10 M⊕, which is larger in the first one. In Figs. 3(b) and 4(b) (C_migI = 0.1), the type I migration is slower and there are fewer planets that reach the inner edge of the disc. We can see a larger population of planets with 1-10 M⊕ and a ∼ 1 au. The population of giant planets is also increased, but this is larger in Fig. 4(b) because with IL04's gas accretion rate the runaway of gas is reached sooner, and so is M_gap. This allows the planets to start the type II migration, which is slower than type I.
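The planet classification used above for Fig. 2 can be written down directly; the sketch below follows the thresholds quoted in the text (a 7 M⊕ total-mass cut and a 15% gas-mass fraction), with the labels chosen for illustration.

```python
def classify(m_core_me, m_gas_me):
    """Classify a planet from its core and envelope masses (in Earth masses)."""
    m_total = m_core_me + m_gas_me
    gas_fraction = m_gas_me / m_total if m_total > 0 else 0.0
    if m_total < 7.0:
        return "terrestrial"
    if gas_fraction < 0.15:
        return "giant with low percentage of gas"
    return "gas giant"

print(classify(1.0, 0.0))      # terrestrial
print(classify(10.0, 1.0))     # giant with low percentage of gas (~9% gas)
print(classify(30.0, 270.0))   # gas giant
```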
Type I migration is the slowest when C_migI = 0.01, and as seen in Figs. 3(c), 4(c), 3(d) and 4(d), the distribution found when C_migI = 0.01 is very similar to those found with type II migration only. When C_migI = 0.01 the population of planets with masses less than 10 M⊕ is larger than in the other cases and there are more giant planets, especially when a ∼ 1 au. The differences between 3(c) and 4(c) are similar to those found when C_migI = 0.1, which is in agreement with the results found in Paper I. The "planetary desert", which is a region with a deficit of planets, was found to be between 100-1000 M⊕ (Paper I) with our model. Here we find that the effect of planetary migration, and especially type I migration, is to enlarge the desert, allowing the planets with masses between 10 and 100 M⊕ to reach the inner limit of the disc and empty the area with semimajor axis between ≈ 0.2 and ≈ 3 au.
Figure 3. Figure 3(a) shows the data obtained with C_1 = 1, in 3(b) C_1 = 0.1, in 3(c) C_1 = 0.01, and the effect of type I migration is not considered in 3(d). We find a larger population of terrestrial planets and a lower population of gas giants beyond the ice condensation line.
Another important result is that we found a larger population of habitable planets with our model than with IL04's gas accretion rates. In order to compare the different populations of terrestrial planets found with different gas accretion rates, we adopt a criterion where a habitable planet is a planet with a mass between ≈ 0.3 and ≈ 7 M⊕ (Ikoma & Genda 2006). The upper limit to the mass was obtained by Ikoma & Genda (2006) with a different time-scale for gas accretion; with our time-scale this value is probably larger, but we consider 7 M⊕ as a nominal value for comparison. Table 1 shows the percentage of final planets that are habitable according to Ikoma & Genda (2006)'s definition. In the table, model I is the one considering our gas accretion rate, and model II considers IL04's gas accretion rate. With our model the runaway gas accretion process is reached by the cores at a larger mass (Paper I), so there are terrestrial planets that never reach this "crossover mass", and for this reason we found more planets with masses less than 7 M⊕. So a slower gas accretion rate leads to a larger population of habitable planets. CONCLUSIONS In our previous work, we had developed a very simple model for computing planetary formation, which had allowed us to show the strong dependence between the gas accretion model considered and the mass distribution of extrasolar planets. In order to get a better understanding of the processes involved in planetary formation, we have improved our previous model, including significant effects that had not been taken into account in Paper I, while maintaining the computational speed. We have presented the results of the simulations performed with our improved model, considering the formation of several planets per disc and taking into account the collisions among them as a source of potential growth. The formation of several cores simultaneously in the disc has a strong influence on the dynamics of the planetesimal disc. The evolution due to this effect and due to the gas drag effect was also considered in this work. Finally, we have analysed the interaction between the protoplanets and the disc, which leads to planetary migration (type I and II). The migration of a giant planet towards the central star could perturb the cores placed in its path, causing the ejection or the accretion of the core by the giant planet.
The another possibility is the survival of the core to the passage of the giant planet. We have performed numerical simulations taking into account that in a close encounter a terrestrial planet has the 71 percent chance of surviving the passage of a giant, but these simulations have not produced significant changes in the results due to the fact that in a real system several protogiant planets cross the region of terrestrial planets. The final number of planets formed per disc is a direct result of the planetary system evolution. We showed that the distribution is strongly dependent on the type I migration rate considered, noting an absence of planetary systems with an intermediate number of planets when the fastest migration rate is considered and seeing that it becomes more uniform when the migration slowed down. The final number of planets per planetary system also depends on the initial mass of the disc, those discs with low mass will form planetary systems with a large number of planets and a higher mass disc is necessary to form planetary systems with a few ones, but the range of disc masses corresponding to the final number of planets depends on the type I migration rate considered. For a rapid migration rate low mass discs are able to form planetary systems with few planets, but when the migration slows down only high mass discs could form this kind of planetary systems. However we note that many of the systems with a large number of planets will be unstable and undergo further evolution since the orbital spacing is small, and eccentricities will increase after the gas is gone. The final number of planets will be substantially smaller in some systems, but we do not consider this fact in this work. We also have analysed the kind of planets formed per disc, and found that planetary systems with a large number of planets have only terrestrial ones, but when we looked at those planetary systems with few planets, the kind of planets formed varies according to the type I migration rate considered. When we used the fastest rate, most of the planets are terrestrial planets, but when the rate is slower the amount of terrestrial planets decreases and most are now giants. In conclusion we found that the gas giant formation is favored at slower type I migration rate. On the other hand the giant planets with few gas reminds little in all the simulations, this population is not affected by the rate of migration. When we analyse the mass and semimajor axis distribution we consider different gas accretion rates, and the results show that this distribution is strongly dependent on the gas accretion model and also on the rate of migration, mainly due to type I migration effects. The boundaries of the planetary desert are enlarged due to the rapid migration of the embryos that did not reach the necessary mass to open up a gap. But we note that the planetary desert would be a bit different, since the actual observational sample shows that the number of observed planets at distances smaller than 0.1 AU from the central star is not as large as that found through numerical simulations, which could mean that most planets do not stop their migration at the inner edge of the disc. Finally we found that a lower gas accretion rate leads to a larger population of habitable planets, and lower population of gas giants beyond the ice condensation line.
2008-10-09T20:53:28.000Z
2008-06-11T00:00:00.000
{ "year": 2008, "sha1": "8adfe1a2eccfb9152166d7f9ca234faaee922e9b", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/387/1/463/3218666/mnras0387-0463.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "8adfe1a2eccfb9152166d7f9ca234faaee922e9b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
235772464
pes2o/s2orc
v3-fos-license
Relation Between Cancer and Vasospastic Angina Introduction Patients with cancer have an increased risk of cardiovascular disease including ischemic heart disease and vice versa. Anticancer drugs and radiotherapy are known to contribute to endothelial injury and vasospasm. However, the relations between vasospastic angina (VSA) and cancer or its treatment are poorly investigated. Methods A total of 786 patients underwent intracoronary acetylcholine (ACh) provocation tests to diagnose VSA. The positive ACh provocation test was defined as angiographic coronary artery spasm accompanied by chest pain and/or ischemic electrocardiographic changes. Patients were divided into active cancer, a history of cancer, and no cancer according to the status of malignancy. The impact of types of cancer, anticancer drugs, and radiotherapy on VSA was evaluated. Results Of 786 patients, 38 (4.8%) and 84 (10.7%) had active cancer and a history of cancer, respectively, and 401 (51.0%) were diagnosed as VSA. There was no significant difference in rates of positive ACh test among patients with active cancer, a history of cancer, and no cancer (39.5% vs. 57.1% vs. 50.9%, p = 0.20). Types of cancer and cancer treatment also had no impact on positive ACh provocation test. Conclusions In this cross-sectional observational study, we did not find an association of active and a history of cancer with the diagnosis of VSA. Anticancer treatment including chemotherapy and radiotherapy was not significantly associated with positive ACh provocation test. Supplementary Information The online version contains supplementary material available at 10.1007/s12325-021-01854-z. INTRODUCTION Vasospastic angina (VSA) is an important cardiac disorder that is associated with the deterioration of quality of life and can induce myocardial infarction and sudden cardiac death [1]. Endothelial dysfunction with deficiency of nitric oxide plays a key role in the mechanism of VSA [2], as well as inflammation, autonomic nerve system, and rho-kinase activity [3,4]. Heart disease and cancer are the leading causes of death worldwide especially in developed countries [5], and patients with cancer have an increased risk for cardiovascular events and mortality from shared lifestyles, risk factors, and underlying mechanisms such as inflammation, endothelial dysfunction, and oxidative stress [6][7][8][9][10][11]. Toxicities of cancer treatment, including chemotherapy and radiotherapy, also contribute to endothelial injury and vasospasm, as described in the European Society of Cardiology position paper [12]. In this context, a significant association of anticancer drugs with VSA has been indicated. A case report demonstrated coronary spasm induced by 5-fluorouracil (5-FU) [13], and sorafenib, an inhibitor of the vascular endothelial growth factor (VEGF) receptors, was related to VSA with the potential mechanism of upregulation of rho-kinase activity [14,15]. Thus, the presence or a history of cancer and chemotherapy and/or radiotherapy may be associated with coronary vasospasm. The aim of this study was to investigate the impact of cancer and its treatment on VSA. Study Population Between Intracoronary Acetylcholine Provocation Tests Intracoronary ACh provocation tests were performed to diagnose VSA based on the guidelines [16,17], as previously reported [4,[18][19][20][21][22][23]. 
In brief, all vasodilators including calcium channel blockers and long-acting nitrates were discontinued at least 48 h before the examination in elective cases, except for sublingual nitroglycerin as needed. The radial artery and brachial vein were mainly used as approach sites [24]. After control angiography, a temporary pacing electrode was inserted in the right ventricle. Intracoronary ACh was administered in incremental doses of 20, 50, and 100 μg into the left coronary artery initially, and 20 and 50 μg into the right coronary artery subsequently, over a period of 20 s. One minute after the start of each injection, coronary angiography was performed to evaluate coronary vasospasm. After ACh provocation testing, 1-2 mg of isosorbide dinitrate was administered into the right and left coronary arteries, and coronary angiography was performed. Obstructive epicardial coronary artery disease (CAD) was defined as ≥50% stenosis on coronary angiography after administration of intracoronary isosorbide dinitrate. The positive ACh provocation test was defined as angiographic coronary artery spasm, a total or subtotal occlusion by the ACh administration, accompanied by chest pain and/or ischemic electrocardiographic changes. It was evaluated by two experienced cardiologists who were blinded to patients' clinical characteristics. Endpoint and Statistical Analysis The primary endpoint of the present study was a positive ACh provocation test in patients with different cancer statuses. Relations between VSA and cancer type or treatment were also evaluated. All statistical analyses were conducted using JMP Pro 15.0.0 (SAS Institute, Cary, NC, USA). Data are expressed as median [interquartile range] or frequency (%). Continuous variables were compared using Kruskal-Wallis and Mann-Whitney U tests. Normal distribution was tested using the Shapiro-Wilk test. Categorical variables were compared with Fisher's exact test. Separate logistic regression analyses were performed to identify univariable predictors of a positive ACh provocation test. The associated variables in univariable analyses (p < 0.20) and age, sex, and active cancer (irrespective of p value) were included in the model of multivariable logistic regression analysis. Because anticancer drug was highly correlated with active cancer, the two factors were not included in the multivariable model simultaneously. A value of p < 0.05 was considered statistically significant. RESULTS Of 786 patients, 38 (4.8%) and 84 (10.7%) had active cancer and a history of cancer, respectively, and 401 (51.0%) were diagnosed as VSA with a positive ACh provocation test. Patients with cancer were older and had a lower body mass index among the three groups (Table 1). In terms of ACh provocation test findings, the number of provoked angiographic coronary artery spasms was significantly lower in patients with active cancer, while the rate of positive ACh provocation test was not different among the three groups (Table 2). Figure 1 shows relations between cancer type and VSA in patients with active and a history of cancer. The rates of positive ACh provocation test ranged from 33% to 100% among types of cancer with no significant differences. When focusing on only patients with active cancer, no significant differences were observed (Fig. 2). Among patients with active cancer, 16 were receiving anticancer drugs (4 molecularly targeted, 4 cytotoxic, and 9 hormonal drugs) and 4 underwent ongoing radiotherapy. Cancer treatment also had no impact on the positive ACh provocation test (Fig. 3).
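A sketch of the variable-selection logic described above, using statsmodels on simulated data; the column names and the synthetic dataframe are hypothetical stand-ins for the study variables, while the p < 0.20 univariable screen and the forced inclusion of age, sex, and active cancer follow the text.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch of the univariable screen (p < 0.20) feeding a multivariable logistic
# model with age, sex and active cancer forced in.  Data and names are
# hypothetical stand-ins, not the study dataset.

rng = np.random.default_rng(0)
n = 786
df = pd.DataFrame({
    "positive_ach": rng.integers(0, 2, n),
    "age": rng.normal(65, 10, n),
    "male": rng.integers(0, 2, n),
    "active_cancer": rng.integers(0, 2, n),
    "current_smoking": rng.integers(0, 2, n),
    "obstructive_cad": rng.integers(0, 2, n),
})

forced = ["age", "male", "active_cancer"]
candidates = ["current_smoking", "obstructive_cad"]

selected = list(forced)
for var in candidates:
    X = sm.add_constant(df[[var]])
    p = sm.Logit(df["positive_ach"], X).fit(disp=0).pvalues[var]
    if p < 0.20:                       # univariable screening threshold from the text
        selected.append(var)

result = sm.Logit(df["positive_ach"], sm.add_constant(df[selected])).fit(disp=0)
print(result.summary2().tables[1][["Coef.", "P>|z|"]])
```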
3). Multivariable analysis identified current smoking and obstructive epicardial CAD as factors associated with VSA, whereas cancer-related variables were not associated (Table 3). DISCUSSION The present study included 5% and 11% of patients with active cancer and a history of cancer, respectively, in a cohort of patients with suspected VSA undergoing intracoronary ACh provocation testing. Overall, 51% were diagnosed as VSA, but cancer status was not significantly associated with a positive ACh provocation test. In addition, we did not find an impact of cancer type or anticancer treatment, including chemotherapy and radiotherapy, on a positive ACh test. Multivariable analysis reinforced these findings. To the best of our knowledge, this is the first study investigating relations between VSA and cancer or its treatment. Cancer and Ischemic Heart Disease Patients with cancer have an increased risk of cardiovascular disease including ischemic heart disease and vice versa [8][9][10][11]. A large-scale national registry including > 6.5 million participants showed that among patients with acute myocardial infarction, 2.8% had active cancer and 6.2% had a history of cancer [25]. In this registry, patients with active cancer had higher mortality as expected, but they were also at a higher risk for major adverse cardiovascular and cerebrovascular events compared with patients with no cancer [25]. VSA, a part of ischemic heart disease, reportedly accounts for approximately 40% of all angina in East Asian populations [16]. Patients with cancer and patients with ischemic heart disease including VSA share lifestyle and risk factors (e.g., smoking and metabolic syndrome) and underlying mechanisms (e.g., inflammation, endothelial dysfunction, and oxidative stress) [9,10]. Vasospastic Angina in Cancer In a position paper from the European Society of Cardiology, chemotherapy including 5-FU and VEGF inhibitors, as well as radiotherapy, are listed as having effects on endothelial injury, vasospasm, and coronary atherosclerosis [12] (Fig. 2 Positive rates of acetylcholine provocation test across types of cancer in patients with active cancer; Fig. 3 Positive rates of acetylcholine test across types of anticancer treatment. Active cancer treatment is further divided into three groups: molecularly targeted, cytotoxic, and hormonal drugs). 5-FU is used to treat patients with gastrointestinal and other malignancies and is reported to induce myocardial ischemia through vasospasm and endothelial injury in up to 10% of cases [12]. Indeed, patients treated with 5-FU experience chest pain in 1-18% of cases [26,27]. VEGF inhibitors are another class of drugs associated with angina, at an incidence of 1-15%. Inhibition of the VEGF receptor impairs stimulation of endothelial nitric oxide activity, increases oxidative stress, and upregulates rho-kinase activity, contributing to coronary vasospasm [26]. In fact, several case reports have demonstrated coronary spasm induced by 5-FU and VEGF inhibitors [13][14][15]. However, it is difficult to determine in clinical practice whether chest pain in patients treated with anticancer drugs is induced by VSA. A recent retrospective single-center study demonstrated that 87 out of 4019 (2.2%) patients treated with 5-FU were adjudicated as having VSA according to the presence of "typical chest pain" without any other diagnostic criteria [27].
In the present study, VSA was diagnosed by invasive intracoronary ACh provocation test, which has good diagnostic ability [16], and no significant relations between VSA and cancer or its treatment were shown. Even in patients actively treated with anticancer drugs, no impact was found, with a positive ACh provocation rate of 31%. Although patients who were receiving molecularly targeted drugs had a numerically higher rate of positive ACh test than those with cytotoxic and hormonal drugs (50% vs. 25% vs. 22%), all percentages were lower than the overall rate of positive ACh test in the present study (51%). These findings appear robust given the consistent rates of positive ACh test across cancer types and the multivariable analysis findings. Since patients with cancer often experience chest pain from numerous etiologies including myocardial ischemia [26], the pretest probability of having VSA might have been low in cancer patients in the present study. Further studies are needed to elucidate the impact of cancer and its treatment on coronary vasospasm. Beyond the relation between VSA and cancer, multivariable analysis showed current smoking and obstructive epicardial CAD as factors associated with coronary vasospasm in the present study. Smoking is a well-recognized risk factor for coronary spasm [16,28], and a close relationship between coronary atherosclerosis and VSA has been shown in previous reports [29]. Study Limitations The present study has several limitations. This was a single-center, retrospective, cross-sectional study. The overall sample size was not small, but the subgroups of cancer and treatment included only small numbers of patients. Thus, larger, prospective studies are warranted. In patients with a history of cancer, data on the duration and dosage of chemotherapy and radiotherapy were not available. In addition, patient characteristics were different among the three groups. Even though a case report showed that intracoronary ACh provocation test confirmed the diagnosis of VSA in a patient with sorafenib-induced coronary artery spasm [14], whether the ACh provocation test can identify patients with chemotherapy-induced coronary vasospasm as reliably as it identifies VSA patients whose spasm is not associated with anticancer therapies remains unclear. CONCLUSIONS Neither active cancer nor a history of cancer was significantly associated with the diagnosis of VSA. We did not find an association of anticancer treatment including chemotherapy and radiotherapy with a positive ACh provocation test in this cross-sectional observational study. Funding. No funding or sponsorship was received for this study or publication of this article. Authorship. All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published. Compliance with Ethics Guidelines. Informed consent was obtained from all participants, and the study was reviewed and approved by the ethics committee of Chiba University Graduate School of Medicine. The study was conducted in accordance with this approval and adhered to the tenets of the Declaration of Helsinki revised in 2013. Data Availability. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. Open Access.
This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
2021-07-09T13:47:10.979Z
2021-07-09T00:00:00.000
{ "year": 2021, "sha1": "5fbc4a2fc2dd2f019498f31e2238d4eaab47bdd1", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12325-021-01854-z.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5fbc4a2fc2dd2f019498f31e2238d4eaab47bdd1", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
266472235
pes2o/s2orc
v3-fos-license
Identification and validation of key biomarkers associated with macrophages in nonalcoholic fatty liver disease based on hdWGCNA and machine learning Background: NAFLD has attracted increasing attention because of its high prevalence and risk of progression to cirrhosis or even hepatocellular carcinoma. Therefore, research into the root causes and molecular indicators of NAFLD is crucial. Methods: We analyzed scRNA-seq data and RNA-seq data from normal and NAFLD liver samples. We utilized hdWGCNA to find module-related genes associated with the phenotype. Multiple machine learning algorithms were used to validate the model diagnostics and further screen for genes that are characteristic of NAFLD. The NAFLD mouse model was constructed using the MCD diet to validate the diagnostic effect of the genes. Results: We identified a specific macrophage population called NASH-macrophages by single-cell sequencing analysis. Cell communication analysis and Pseudo-time trajectory analysis revealed the specific role and temporal distribution of NASH-macrophages in NAFLD. The hdWGCNA screening yielded 30 genes associated with NASH-macrophages, and machine learning algorithms screened and obtained two genes characterizing NAFLD. The immune infiltration indicated that these genes were highly associated with macrophages. Notably, we verified by RT-qPCR, IHC, and WB that MAFB and CX3CR1 are highly expressed in the MCD mouse model and may play important roles. Conclusions: Our study revealed a macrophage population that is closely associated with NAFLD. Using hdWGCNA analysis and multiple machine learning algorithms, we identified two NAFLD signature genes that are highly correlated with macrophages. Our findings may provide potential feature markers and therapeutic targets for NAFLD. INTRODUCTION Non-alcoholic fatty liver disease (NAFLD) is a clinicopathologic syndrome characterized by the occurrence of steatosis in more than 5% of hepatocytes which is not caused by alcohol and other well-defined hepatic injury factors [1].It includes simple steatosis, nonalcoholic steatohepatitis (NASH), cirrhosis, and hepatocellular carcinoma (HCC) [2].With the rising number of obese people, NAFLD has become one of the leading causes of liver disease in the world, with about 100 million people worldwide suffering from the disease according to recent statistics [3].If not diagnosed and treated in time, it will not only lead to liver disease disability, but also has a strong connection to the elevated incidence of metabolic syndrome, type 2 diabetes mellitus, atherosclerotic cardiovascular disease, and colorectal tumors [4].In addition, there are no specific drugs that can reverse NAFLD, and the only treatment for end-stage liver disease is a liver transplant [5].Thus, there is an urgent need to elucidate the intrinsic molecular mechanisms of NAFLD pathogenesis and to search for specific biomarkers to develop effective preventive and therapeutic approaches. 
The pathology of NAFLD is a complex network of mechanisms involving multiple factors. It is significantly influenced by the increased infiltration of immune cell subsets, such as monocytes, T lymphocytes, and neutrophils, along with the activation of liver-resident cells, such as Kupffer Cells (KCs) or Hepatic Stellate Cells [6]. Macrophages can be categorized according to their origin into liver tissue resident macrophages (KCs) and monocyte-derived macrophages. Several studies have demonstrated that cytokines and chemokines released by KCs are critical in promoting chronic steatohepatitis [7,8]. Depletion of KCs by using gadolinium chloride or clodronate liposomes prevented the progression of diet-induced steatosis and hepatic insulin resistance in rats [9]. In addition, it has been shown that monocyte-derived macrophages that infiltrate the liver during the disease also play a vital role in NAFLD, and the reduction of infiltrating macrophages with specific drugs can inhibit hepatic steatosis and fibrosis [10]. Over the past few years, the rapid development of single-cell transcriptomics has revolutionized the high-resolution analysis of cellular composition and heterogeneous cellular states, which has considerably contributed to our understanding of the composition of immune cells in liver tissues in NAFLD [11]. Weighted gene co-expression network analysis (WGCNA), an unbiased systems biology analysis method, aims to explore co-expressed gene modules and identify core genes in the networks, whereas it can only be used for bulk RNA sequencing (RNA-seq) data [12]. Unlike WGCNA, high-dimensional weighted correlation network analysis (hdWGCNA) constitutes an integrated functional framework for co-expression networks based on single-cell RNA sequencing (scRNA-seq) data [13]. Previous studies have not delved into the characteristic markers of NAFLD at the single-cell level [14]. In our investigation, we coordinated scRNA-seq data and RNA-seq datasets to screen key signature genes contributing to NAFLD diagnosis by hdWGCNA and multiple machine learning algorithms, which may contribute to the early diagnosis and treatment of NAFLD (Figure 1). Preprocessing of single-cell RNA sequencing data Using the "Seurat" (4.1.0) package [22], we created Seurat objects based on the single-cell transcriptomic expression matrices of overall and individual cell types. We identified cells expressing over 200 but no more than 2500 RNA features. Additionally, 10% of mitochondrial RNA was set as a threshold for filtering the scRNA-seq data. The batch effect of the samples was eliminated by the "harmony" functions. Furthermore, we used the "ScaleData" and "RunPCA" functions to determine the number of principal components (PCs) based on the Seurat object. We adjusted the number of PCs to 12 to generate cell clusters and then visualized them using the "UMAP" plot. To annotate the cell clusters, we performed unsupervised clustering using the "FindClusters" and "FindNeighbors" functions. The clustering results were obtained with the clearest resolution when the resolution was set to 0.5. Subsequently, we used the "SingleR" package (v 1.4.1) [23] for automated cell type annotation based on the marker genes of each cluster. Finally, we selected macrophages for principal component analysis (PCA) to identify distinct macrophage subtypes.
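A minimal R sketch of the preprocessing steps described above (QC thresholds of 200-2500 features and 10% mitochondrial reads, harmony batch correction, 12 PCs, resolution 0.5); the input matrix "counts" and the sample column "orig.ident" are placeholders rather than objects from the study:

library(Seurat)
library(harmony)

# "counts" is assumed to be a genes-by-cells UMI matrix combined across samples
seu <- CreateSeuratObject(counts = counts, min.cells = 3, min.features = 200)
seu[["percent.mt"]] <- PercentageFeatureSet(seu, pattern = "^mt-")

# Keep cells with 200-2500 detected features and < 10% mitochondrial reads
seu <- subset(seu, subset = nFeature_RNA > 200 & nFeature_RNA < 2500 & percent.mt < 10)

# Normalisation, variable features, scaling and PCA
seu <- NormalizeData(seu)
seu <- FindVariableFeatures(seu)
seu <- ScaleData(seu)
seu <- RunPCA(seu)

# Remove the sample batch effect with harmony, then cluster on 12 components
seu <- RunHarmony(seu, group.by.vars = "orig.ident")
seu <- FindNeighbors(seu, reduction = "harmony", dims = 1:12)
seu <- FindClusters(seu, resolution = 0.5)
seu <- RunUMAP(seu, reduction = "harmony", dims = 1:12)

# Cluster annotation (e.g. with SingleR against a mouse reference) would follow here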
High-dimensional weighted correlation network analysis We employed high-dimensional weighted gene co-expression network analysis (hdWGCNA) to construct a co-expression network based on single-cell level data using the "hdWGCNA" package [13]. First, we input the genes expressed in at least 5% of the cells and used the "MetacellsByGroups" function to construct the metacell gene expression matrix. Then, the "TestSoftPowers" function was used to determine the soft power. The "ConstructNetwork" function was used to build the co-expression network. All analyses were performed according to the official standard procedure as described in https://smorabit.github.io/hdWGCNA/articles/basic_tutorial.html. Cell-cell communication analysis The "CellChat" package provides an effective analysis tool for studying the interactions and communications between cells [24]. We used the "CellChat" package to infer important biological interactions between cells in the liver, and calculated the probability values and significance of these interactions. Circle plots and bubble plots were used to visualize the relationships and importance between cells. Pseudo-time trajectory and SCENIC analysis The Monocle2 algorithm (version 2.22.0) [25] can infer the temporal development and differentiation trajectories of cells, as well as explore the transition relationships between cell states. In our study, we employed monocle2 (v2.18.0) for trajectory analysis to further investigate the differentiation process of macrophages in the liver. Additionally, single-cell regulatory network inference and clustering (SCENIC, version 1.2.4) was employed on all single cells to unveil the regulatory relationships between transcription factors (TFs) and target genes [26]. The "limma" package was utilized to identify significantly differentially expressed regulators, with a statistical significance level set at p < 0.05. Functional enrichment analysis and GSVA analysis To test gene expression levels in the RNA-seq datasets, we employed the "limma" package in R to compare the expression differences of the feature genes between NAFLD samples and control samples. To investigate the functional abundance of the potential feature genes, we used the "clusterProfiler" package (v4.0) for Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis. Additionally, the gene set variation analysis (GSVA) algorithm was applied to explore the activity variations of KEGG pathways in the optimal feature genes. The statistical significance level was set at p < 0.05.
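The hdWGCNA steps described at the start of this section can be sketched in R roughly as follows (gene fraction 0.05, soft power 8); the Seurat object "seu" and the metadata columns "cell_type" and "orig.ident" are assumed placeholders, and the exact arguments should be checked against the hdWGCNA tutorial linked above:

library(Seurat)
library(WGCNA)
library(hdWGCNA)

# Use genes detected in at least 5% of cells
seu <- SetupForWGCNA(seu, gene_select = "fraction", fraction = 0.05,
                     wgcna_name = "macrophage")

# Build metacells within each cell type / sample combination
seu <- MetacellsByGroups(seu, group.by = c("cell_type", "orig.ident"),
                         k = 25, ident.group = "cell_type")
seu <- NormalizeMetacells(seu)

# Restrict the expression matrix to macrophages and choose the soft-threshold power
seu <- SetDatExpr(seu, group_name = "Macrophage", group.by = "cell_type")
seu <- TestSoftPowers(seu)

# Construct the co-expression network, then derive module eigengenes and hub genes
seu <- ConstructNetwork(seu, soft_power = 8, overwrite_tom = TRUE)
seu <- ModuleEigengenes(seu)
seu <- ModuleConnectivity(seu)
hub_genes <- GetHubGenes(seu, n_hubs = 10)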
Building the machine learning model To assess the diagnostic capability of the potential feature genes identified by hdWGCNA, we employed seven machine learning algorithms to build models using the mlr3verse (version 0.2.7) package in R (https://CRAN.R-project.org/package=mlr3verse). The predictive performance of the seven models was evaluated using the AUC values obtained from ROC analysis in the training set and validation set. To further select the optimal feature genes, we applied three machine learning algorithms (LASSO, SVM-RFE, and RF) to predict disease status and identify important prognostic variables. For RF analysis, we used the "randomForest" package and the "caret" package in R [27] to determine gene importance, with a threshold set at an importance score greater than 2 [28]. We utilized the "glmnet" package in R [29] to perform LASSO logistic regression analysis with the value of lambda.min. An SVM classifier was created using the "e1071" package in R [30]. Next, the effectiveness of the optimal feature genes was thoroughly evaluated in the training and validation sets. The expression levels of the optimal feature genes in NAFLD tissues and control tissues were compared with the Wilcoxon rank-sum test. The predictive ability of the optimal feature genes was evaluated using receiver operating characteristic (ROC) analysis, and the area under the ROC curve (AUC) values were assessed. The statistical significance level was set at p < 0.05. Immune infiltration analysis We utilized the CIBERSORT analysis technique [31] to evaluate the immune infiltration patterns in NAFLD samples and normal samples. In this analysis, the parameter "PERM" was set to 1000 and a significance threshold of p < 0.05 was applied. The "pheatmap" package in R was employed to generate a heatmap that displays the 22 immune cell types, while the "vioplot" package was used to create boxplots illustrating their abundance. To assess the differences in immune cell proportions, we conducted Wilcoxon rank-sum tests, considering p < 0.05 as statistically significant. Single-sample gene set enrichment analysis (ssGSEA) and gene set enrichment analysis (GSEA) In order to gain a more comprehensive understanding of the activation status of the gene sets under investigation, we utilized the single-sample Gene Set Enrichment Analysis (ssGSEA) algorithm [21] to assess the relative levels of 50 hallmark gene sets (h.all.v7.5.1.symbols.gmt) in control and NAFLD samples. Moreover, we conducted Spearman correlation analysis to determine the associations between these 50 hallmark gene sets and the top feature genes. Additionally, to delve into the biological significance of the top feature genes, we performed GSEA using the "c2.cp.kegg.v11.0.symbols" gene set from the Molecular Signatures Database.
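As a rough R illustration of the three-algorithm feature selection described above, assuming "expr" is a samples-by-genes data frame restricted to the 30 candidate genes and "status" is a 0/1 NAFLD label (both placeholders, not the study's objects):

library(glmnet)
library(randomForest)
library(caret)

set.seed(42)

# LASSO logistic regression: keep genes with non-zero coefficients at lambda.min
cv_fit <- cv.glmnet(as.matrix(expr), status, family = "binomial", alpha = 1)
coefs  <- coef(cv_fit, s = "lambda.min")
nz     <- which(as.numeric(coefs) != 0)
lasso_genes <- setdiff(rownames(coefs)[nz], "(Intercept)")

# Random forest: keep genes with MeanDecreaseGini importance > 2
rf_fit   <- randomForest(x = expr, y = factor(status), importance = TRUE)
rf_imp   <- importance(rf_fit)
rf_genes <- rownames(rf_imp)[rf_imp[, "MeanDecreaseGini"] > 2]

# SVM-based recursive feature elimination via caret (radial-kernel SVM)
ctrl      <- rfeControl(functions = caretFuncs, method = "cv", number = 5)
svm_rfe   <- rfe(x = expr, y = factor(status), sizes = c(5, 10, 15, 20, 25),
                 rfeControl = ctrl, method = "svmRadial")
svm_genes <- predictors(svm_rfe)

# Final signature: genes retained by all three algorithms
Reduce(intersect, list(lasso_genes, rf_genes, svm_genes))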
Animal model construction We purchased Ten C57BL/6 mice (males) aged 6 weeks from GemPharmatech (Nanjing, China).All mice were fed at room temperature and under standard light conditions.After 7 days of acclimatization feeding, the mice were randomly divided into 2 groups.The control group was fed chow diet, and the experimental group was fed a methionine-choline deficient (MCD) diet without any additional intervention.After four weeks, peripheral blood of the mice was collected and centrifuged to obtain serum.The levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) in serum were determined by Roche Cobas test.Fresh livers were exercised and weighed, and portions of the livers were taken for triglyceride assay.After being fixed in 10% formalin, partial liver samples were dehydrated in a series of progressively stronger alcohol washing solutions.Following xylene cleaning, tissues were embedded in paraffin.Sections were roughly 3μm thick and stained with hematoxylin and eosin (H&E).For Oil Red O staining, the sections were initially incubated in 60% isopropanol, followed by staining with an Oil Red O staining solution and a second incubation in 60% isopropanol.All procedures were conducted in accordance with the guidelines of the Institute for the Study of Animals. Single-cell RNA sequencing quality control and cell annotation We initially obtained a total of 6888 cells from a scRNA-seq data set containing the livers of two NASH mice and four normal mice.Following quality control and removal of batch effects, a total of 6875 cells were used for single-cell clustering analysis (Figure 2A, 2B and Supplementary Figure 1A-1D).After cell annotation, we could observe 8 distinct cell types on the UMAP plot, including hepatocytes, epithelial cells, endothelial cells, T cells, NK cells, macrophages, granulocytes, and B cells (Figure 2C).We then analyzed the proportions of different cell types in NASH and normal mouse livers (Figure 2D, 2E).Interestingly, macrophages, B cells, and NK cells showed increased proportions in NASH compared to the normal group (Figure 2F).Given the crucial role of macrophages in the liver, we selected macrophages for further analysis.After clustering the macrophages, we identified seven subclusters of macrophages.Notably, subcluster 1 was significantly more prevalent in the NASH group, and subcluster 7 was specific to NASH (Figure 2G, 2H).Therefore, we labeled subcluster 1 and subcluster 7 as NASH-macrophages.Next, we performed cell-to-cell communication analysis between different cells in the liver.The results revealed that NASH-macrophages exhibited highly active signaling communication with other cells, while the othermacrophages had rarely communicated with other cells.(Figure 2I).Of interest, NASH-macrophages received more signals than other cells, and hepatocytes were identified as the strongest senders (Figure 2I).This indicates that NASH-macrophages may play a critical role in the progression of NAFLD. 
Screening for modules representing NASHmacrophages by hdWGCNA Cell communication detected a total of 9 significant pathways, including CCL, MIF, SPP1, GAS, GALECTIN, CXCL, MK, COMPLEMENT, and PARs (Figure 3A).Of note, the macrophage migration inhibitory factor (MIF) signaling pathway exhibited great strength in both incoming and outgoing signal patterns of NASH-macrophages (Figure 3A).Within the MIF signaling pathway, NASH-macrophages were found to be the strongest sender, receiver, mediator, and influencer (Figure 3B, 3C).Ligand-receptor analysis indicated that Gas6-Axl was significantly activated in the paracrine signaling from hepatocytes to NASHmacrophages (Figure 3D).In view of the critical role of macrophages, we explored the hdWGCNA analysis to identify potential markers of macrophages.After setting the soft threshold to eight, we identified six modules (Supplementary Figure 2).As shown in Figure 3E, six gene modules were obtained, and the top 10 most influential genes were listed according to the hdWGCNA.Of interest, we found that the turquoise and blue modules were greatly expressed in subcluster 1 and subcluster 7 macrophages (Figure 3F).Additionally, the blue module exhibited a strong positive correlation within turquoise module (Figure 3G).Moreover, UMAP plots illustrated the distribution of the turquoise and blue modules in macrophages, which extremely overlapped with subcluster 1 and subcluster 7 macrophages (Figure 3H).Therefore, we proposed that the turquoise and blue modules may represent characteristics of NASH-macrophages.The top 20 genes from each of the turquoise and blue modules were considered as potential feature biomarkers of NAFLD. Pseudo-time analysis and transcription factor prediction To determine the transcriptional features of macrophage development at different stages, we performed a pseudotime analysis.Cells with similar states are grouped, and branch points separate cells into different states.Notably, cluster 1 and cluster 7 macrophages were mainly located at the end of the pseudo-time trajectory (Figure 4A).We intersected genes from the turquoise and blue modules with human transcriptional genes, resulting in 30 potential feature genes.Besides, the changes of potential feature genes in the differentiation were detected based on the gene expression levels in different subclusters of macrophages (Figure 4B).In order to further explore the transcriptional regulatory network underlying NASH, we subsequently used SCENIC algorithm to infer the transcription factors (TFs) behind NASH disease.SCENIC analysis revealed that certain TFs exhibited distinct activation and deactivation patterns across different samples (Figure 4C).We then detected 3 upregulated TFs and 8 downregulated TFs in the NASH group (Figure 4D).Additionally, a regulon specificity score (RSS) was defined based on Jensen-Shannon divergence for each group [32].In the normal group, the transcription factor with the highest score was identified as Fos, while the RXRa ranked first in the NASH group (Figure 4E, 4F). In addition, transcription factors were closely related to potential feature genes (Figure 4G). 
Expression validation and functional enrichment of potential feature genes Next, we analyzed the expression levels of these 30 potential feature genes in the livers of normal individuals and NAFLD patients using RNA-seq data. We found that SIRPA, ATP2B1, RRBP1, SRRM2, SON, and RBM39 were significantly downregulated in NAFLD samples, while MAFB, CX3CR1, and DBI were significantly upregulated (Figure 5A). Among all the differentially expressed genes, CX3CR1, SIRPA, and MAFB showed the largest differences (Figure 5B). These 30 genes exhibited close associations with each other (Figure 5C). In terms of GO enrichment analysis, the potential feature genes were enriched in biological processes (BP) such as cytoplasmic translation, RNA splicing via transesterification reactions with bulged adenosine as nucleophile, and mRNA splicing. We also analyzed the overall relationship of the potential feature genes as a whole and their associations with immune cell infiltration. The results showed a significant association between the potential feature genes and M0, M1, and M2 macrophage infiltration, with a negative correlation with M0 and positive correlations with M2 and M1 (Figure 5G). Identifying optimal feature genes of NAFLD by machine learning Based on the potential feature genes, seven machine learning algorithms were applied to construct models and evaluate diagnostic performance (Figure 6A). It is worth noting that the support vector machine (SVM) algorithm showed the best performance in the training set (Figure 6B), with an AUC of 0.751 in the external validation set (Figure 6C). Next, we selected three machine learning algorithms to further screen for key feature genes of NAFLD. LASSO regression was performed using the aforementioned 30 genes as input, resulting in 5 genes (Figure 6D, 6E). We filtered 25 genes using the SVM algorithm (Figure 6F) and 5 genes using the Random Forest algorithm (Figure 6G, 6H). Finally, we took the intersection of these gene sets to identify the optimal diagnostic genes for NAFLD: MAFB and CX3CR1 (Figure 6I). To validate the performance of the two key genes, we split the RNA-seq dataset into a training and validation set. Both MAFB and CX3CR1 consistently showed higher expression in NAFLD liver samples in both sets (Figure 7A, 7C). We assessed the diagnostic performance of these two feature genes using ROC analyses. In the training set, MAFB and CX3CR1 had AUC values of 0.840 and 0.842, respectively (Figure 7B). In the validation set, the AUC values were 0.729 for MAFB and 0.687 for CX3CR1 (Figure 7D). Furthermore, the expression of MAFB was positively linked with that of CX3CR1 (Figure 7E). Additionally, we performed pathway analysis using GSVA to identify KEGG pathways potentially associated with these two genes. As shown in Figure 7F, 7G, a total of 22 pathways were significantly associated with these two genes, including Arachidonic acid metabolism, Histidine metabolism, Oxidative phosphorylation, Phenylalanine metabolism, Pyrimidine metabolism, and so on.
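A minimal sketch of the per-gene ROC evaluation reported above, assuming a data frame "train" with a 0/1 "status" column and expression columns "MAFB" and "CX3CR1" (placeholder names, not the study's objects):

library(pROC)

roc_mafb   <- roc(train$status, train$MAFB)
roc_cx3cr1 <- roc(train$status, train$CX3CR1)

# AUC values analogous to those reported for the training and validation sets
auc(roc_mafb)
auc(roc_cx3cr1)

# Overlay both curves
plot(roc_mafb, col = "steelblue", legacy.axes = TRUE)
plot(roc_cx3cr1, col = "firebrick", add = TRUE)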
Immune infiltration analysis In order to comprehensively assess immune cell profiles in normal and NAFLD samples, immune infiltration analysis was conducted.According to the results of CIBERSORT, there were significant differences in immune cell infiltration levels between NAFLD and normal samples (Figure 8A).NAFLD samples had a higher proportion of M1 macrophages, but relatively lower proportions of B cells, NK cells, and Dendritic cells than normal samples did (Figure 8B).The correlation heatmap demonstrated close associations between the potential feature genes and various immune cells (Figure 8C).We also performed separate analyses of the relationship between the optimal feature genes and immune cells (Figure 8D).The results showed that MAFB displayed a positive correlation with M2 macrophages, but negatively correlated with M0 and M1 macrophages (Figure 8E, Supplementary Figure 3A-3C).CX3CR1 showed a negative correlation with M0 and M1 macrophages (Figure 8F and Supplementary Figure 3D,3E).The ssGSEA and GSEA analysis We compared the abundance differences of 50 hallmark gene sets between the NAFLD group and the control group using the single-sample Gene Set Enrichment Analysis (ssGSEA) algorithm.In Figure 9A, we presented the distribution of these 50 gene sets in the NAFLD and control samples.We observed a significant upregulation of multiple gene sets in the NAFLD group compared to the control group.These upregulated gene sets include the peroxisome, bile acid metabolism, Heme metabolism, UV response up, P53 pathway, reactive oxygen species, glycolysis, oxidative phosphorylation, fat acid metabolism, xenobiotic metabolism, Myc targets v1, Myc targets v2, E2F targets, mTORC1 signaling, PI3K/AKT/mTOR Signaling, unfolded protein response, Interferon-alpha response, protein secretion, androgen response, estrogen response early, adipogenesis, apoptosis, G2M checkpoint, mitotic spindle, and cholesterol homeostasis.Additionally, we found that two top feature genes were positively correlated with the KRAS signaling up, interferon gamma response, interferon alpha response, inflammatory response, E2F targets and allograft rejection gene set (Figure 9B).For the single-gene GSEA analysis, the MAFB-activated pathway encompassed Cytosolic DNA-sensing pathway, Graft-versus-host disease, Oxidative phosphorylation, Phototransduction, and Viral protein interaction with cytokine and cytokine receptor (Figure 9C).The CX3CR1-activated pathway included Asthma, Nicotine Addiction, Olfactory transduction, Phototransduction, and Viral protein interaction with cytokine and cytokine receptor (Figure 9C).Then, we investigated the role of MAFB+ macrophages in cell communication analysis.The results revealed that MAFB+ macrophages exhibited highly active signaling communication with other cells, while the MAFB-macrophages had rarely communicated with other cells (Figure 9D, 9E).The MAFB+ macrophages took part in more outgoing and incoming pathways than MAFB-macrophages (Figure 9F).Of note, the MAFB+ macrophages exhibited great strength in both incoming and outgoing patterns in MIF signaling pathway (Figure 9G). 
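The hallmark-gene-set scoring used here can be sketched in R along these lines; "expr" (a genes-by-samples expression matrix), "hallmark_sets" (a named list parsed from h.all.v7.5.1.symbols.gmt, e.g. via GSEABase), and the sample factor "group" are assumed inputs, and newer GSVA releases wrap the same call in an ssgseaParam object:

library(GSVA)

# ssGSEA enrichment score for each hallmark gene set in each sample
ssgsea_scores <- gsva(expr, hallmark_sets, method = "ssgsea")

# Spearman correlation of each hallmark score with the two feature genes
cor_mafb   <- apply(ssgsea_scores, 1, function(s) cor(s, expr["MAFB", ], method = "spearman"))
cor_cx3cr1 <- apply(ssgsea_scores, 1, function(s) cor(s, expr["CX3CR1", ], method = "spearman"))

# Compare each hallmark score between NAFLD and control samples
pvals <- apply(ssgsea_scores, 1, function(s) wilcox.test(s ~ group)$p.value)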
Validation of optimal feature genes in mouse module To assess the diagnostic value of the best characterized genes for NAFLD, we constructed the NAFLD mouse model and a normal mouse model.After four weeks of MCD feeding, MCD mice had decreased liver weight/body weight (Figure 10A), elevated serum ALT and AST (Figure 10B), enlarged vacuolization in liver cells (Figure 10C), which suggested that MCD mice had hepatitis injury.Furthermore, Oil Red O staining showed accumulating lipids in liver sections from mice fed on the MCD-diet (Figure 10C).After that, we used RT-qPCR (Figure 10D), IHC (Figure 10E), and western blot (Figure 10F) to detect the levels of MAFB and CX3CR1 in liver tissues of MCD and normal dietary mice (Figure 10F), which showed that MAFB and CX3CR1 were significantly overexpressed in the livers of MCD mice. DISCUSSION The pathological course of NAFLD involves the role of genetic background and environmental factors and is associated with abnormalities in lipid metabolism, glucose metabolism, protein metabolism, and other aspects.This complexity increases the difficulty in understanding NAFLD [33].Although non-invasive techniques such as ultrasound and alanine aminotransferase (ALT) can assist in the diagnosis of NAFLD, there are currently no established molecular markers that serve as key indicators for NAFLD [34].According to prior studies, biomarkers at the genetic level can precisely determine the presence of disease and guide the development of clinical treatment strategies. Therefore, we utilized multiple bioinformatics methods to explore the characteristic markers of NAFLD.Firstly, we derived the scRNA-seq data and the RNA-seq transcriptomic datasets of NAFLD from the GEO database for analysis.After processing the single-cell data, we found that the subtype 1 and subtype 7 macrophages were distinct in NASH mice.Cell-cell interaction analysis revealed close communication between NASH-macrophages and different cells.Then, we used the hdWGCNA algorithm to identify the modules most associated with NASH-macrophages, resulting in 40 genes representing the functionality of NASH-macrophages.After cross-correlated them with human transcriptome data, resulting in 30 potential characteristic genes.Through immune infiltration analysis, we also found their close association with various immune cells, particularly macrophages.Then, we combined three machine learning algorithms, LASSO regression, SVM, and Random Forest, and finally identified two optimal feature genes (MAFB and CX3CR1) closely associated with the diagnosis or progression of the disease.In addition, the ROC results show that both CX3CR1 and MAFB genes have elevated diagnostic performance for NAFLD on both training and validation sets.Functional enrichment analysis using GSVA and single gene GSEA was performed on the optimal feature genes.Immune infiltration analysis revealed the optimal feature genes were statistically associated with macrophage infiltration, consistent with the module correlations obtained from hdWGCNA.Moreover, we constructed an NAFLD mouse model to further validate the expression of MAFB and CX3CR1. In conclusion, our study identified a NAFLD-associated macrophage subpopulation and the NAFLD feature gene. 
Previous studies have found that an increase in portal vein macrophages is one of the earliest changes observed in liver biopsy specimens of patients with fatty liver disease [35].In a high-fat diet-induced mouse model of fatty liver disease, the release of interleukin-1 beta (IL-1β) by KCs promotes hepatic steatosis by inhibiting peroxisome proliferator-activated receptor alpha (PPARα) activity in hepatocytes [7].It has been found that under hepatic lipotoxic conditions, the release of inflammatory factors by hepatocytes induces the infiltration of macrophages into the liver [36].Several studies have reported that macrophages can interact with hepatic stellate cells through cytokines and chemokines during the progression of NAFLD, promoting collagen deposition and fibrosis, ultimately leading to liver fibrosis and cirrhosis [37,38].Activated macrophages also participate in the regulation of fatty acid metabolism and lipid deposition in NAFLD [39].This is consistent with our findings of activation of lipid metabolism signaling pathways in ssGSEA.In summary, macrophages play a crucial role in the progression of NAFLD and warrant additional investigation. Our pseudo-temporal analysis describes macrophage development and indicates that NASH-macrophages are predominantly concentrated in the late stages of macrophage differentiation.Previous literature reports have shown that late-stage differentiated macrophages express additional surface molecules, including various tissue-specific surface markers and receptors [40].This is consistent with our analysis, as some receptor-related genes, such as CX3CR1, Gani2, and Son, are expressed in the late stages of macrophage differentiation.In the transcription factor prediction analysis, it is clear that the RXRa is greatly expressed in the NASH group.Retinoid X receptor alpha (RXRA) is a member of the nuclear receptor superfamily that participates in lipid, glucose, energy, and hormone metabolism.RXRA can accelerate lipid accumulation by regulating the transcription of target genes that promote lipid accumulation [41].Furthermore, RXRA may potentially increase people's risk of developing Alzheimer's disease by affecting brain cholesterol metabolism [42].These analyses suggest that NASH-macrophages and their associated genes play a particularly prominent role in the progression of NAFLD. Of the 30 genes that could be characterized, we identified the 2 most closely related to NAFLD.The function of the transcription factor V-maf musculoaponeurotic fibrosarcoma oncogene homologue B (MAFB) in the development of NAFLD has been extensively studied.As early as 2000, Kelly et al. indicated that overexpression of MAFB resulted in differentiation of chicken bone marrow primitive cells to macrophages [43].Several studies showed that MAFB function is indispensable in disease-associated macrophages [44,45].Recently, Cuevas VD et al. found that MAFB is essential to the acquisition of anti-inflammatory transcriptional and functional characteristics in human macrophages [46].This phenomenon was verified by Basile et al., who suggested that MAFB-mediated macrophage differentiation is involved in intrinsic repair after acute kidney injury [47].CX3CR1 is a gene that encodes a chemokine receptor involved in immune and inflammatory processes.Sutti et al. found that inflammatory dendritic cells expressing CX3CR1 cells promote the development of nonalcoholic steatohepatitis [48].The research conducted by Ni Y et al. 
demonstrated a significant upregulation of CX3CR1 expression in liver macrophages within NASH mice, as compared to their counterparts in normal mice [49].Hence, we contend that the optimal feature genes play a crucial role in the initiation of NAFLD. The methionine and choline deficiency (MCD) dietinduced NAFLD model is one of the most classical models.Its principle involves the deficiency of methionine and choline, which hinders the necessary processes of beta-oxidation and very low-density lipoprotein synthesis [50].Mice fed with the MCD diet exhibit characteristics such as weight loss, decreased levels of serum triglycerides (TG), and reduced liver weight-to-body weight ratio, which are contrary to the phenotype of human fatty liver disease [51].Pathological validation of the model commonly involves the use of HE and oil red O staining.Through validation using the MCD mouse model, we observed increased expression of MAFB and CX3CR1 at the RNA and protein levels in liver tissue, further confirming their accuracy in the diagnosis of NAFLD. All in all, we first identified a special cluster of macrophages playing an important role in NAFLD.Secondly, we reported for the first time that MAFB and CX3CR1 are characteristic genes of NAFLD.Immune infiltration analysis validated the relationship between these two genes and macrophages.Additionally, the diagnostic performance of these two genes was confirmed through the construction of an animal model.However, our study also has limitations.On one hand, we did not thoroughly explore the expression patterns of these two genes in macrophages.Furthermore, we lacked a large clinical cohort to explore the diagnostic value of these characteristic genes.In summary, our findings may bring new hope for the early diagnosis of NAFLD.Further research on the specific mechanisms and regulatory pathways of MAFB and CX3CR1 mediated by macrophages in NAFLD development will help enhance our understanding of the pathogenesis of NAFLD and potentially provide new targets for its treatment. CONCLUSIONS In conclusion, our study demonstrates a diagnostic model that can be applied to NAFLD.These findings will help to better reveal the role of macrophages in the progression of NAFLD.Meanwhile, the NAFLD characteristic genes identified in this study, especially MAFB and CX3CR1, may shed new light on the clinical development of effective diagnosis and treatment of NAFLD. Figure 1 . Figure 1.The flow chart of our analysis. Figure 2 . Figure 2. Single-cell analysis of cell proportion of NAFLD.(A) The features, counts, and percentages of mitochondrial genes in each of the analyzed samples after quality control.(B) The elimination of batch effect.(C) UMAP plot visualizes the distribution of eight cell types in control and NASH mouse livers.(D) Bar plot indicating the cell proportion of all eight cell types in liver of a normal chow diet mice.(E) Bar plot indicating the cell proportion of all eight cell types in liver of NASH mice.(F) Cell fraction distribution differences between NASH and normal.(G) UMAP plot showing the distribution of different clusters of macrophages in livers.(H) Bar plot indicating the proportion of seven macrophage clusters in liver of control and NASH mice.(I) Scatter plot indicating the incoming and outgoing interaction strength of the cells. Figure 3 . Figure 3. 
Identification of the crucial modules related to NASH-macrophages by hdWGCNA.(A) The dot plot showing the comparison of outgoing and incoming signaling patterns.(B) Heatmap showing the relative importance of each cell group in the MIF signaling network.(C) Circle plot showing the communication strength between interacting cells in the MIF signaling network.(D) Bubble plot showing the significant ligand-receptor pairs between cells.(E) Six gene modules were obtained and the top ten hub genes were presented according to the hdWGCNA pipeline.(F) Module activities in different macrophage clusters.(G) Correlation analysis between different models.(H) UMAP plots illustrating the distribution of each module. Figure 4 . Figure 4. Pseudo-Time trajectory and SCENIC analysis.(A) pseudo-time distribution of the different macrophage subtypes.(B) Heatmap showing the change of potential feature genes in pseudo-time developmental trajectories.(C) Heatmap of RAS activity of transcription factors (TFs) in each sample, with negative correlations in blue and positive correlations in red.(D) Heatmap of the area under the curve (AUC) scores of TFs in each group.(E, F) Ranking of TFs in NASH and normal samples calculated by the RSS specificity score.(G) The correlation between SCENIC-identified TFs and 30 potential feature genes. Figure 5 . Figure 5. Expression analysis and functional enrichment of potential feature genes.(A, B) Expression analysis of potential feature genes between NAFLD and normal samples.(C) Correlation analysis between potential feature genes.(D) Enrichment analysis of potential feature genes using Gene Ontology (GO).BP, biological process; CC, cellular component; MF, molecular function.(E, F) Enrichment analysis of potential feature genes using the Kyoto Encyclopedia of Genes and Genomes (KEGG).(G) Correlation analysis between potential feature genes and M0, M1, and M2 macrophages.*P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001. Figure 6 . Figure 6.Machine learning identifies optimal feature genes of NAFLD.(A) Seven machine learning algorithms were utilized for model construction.(B) The ROC values of all seven algorithms in the training group.(C) The ROC scores of the SVM model were presented in the test group.(D) Lasso algorithm for selection features.(E) Coefficient changes of the selected features using lasso algorithm.(F) The SVM algorithm was used to further candidate optimal feature genes with the highest accuracy (the lower) and lowest error (the upper) obtained in the curves.The x-axis represents the number of feature selections, and the y-axis indicates the prediction accuracy.(G) The impact of the number of decision trees on the error rate was examined.The x-axis represents the number of decision trees, while the y-axis indicates the error rate.(H) The relative importance of potential feature genes was calculated in random forest (Top 5 genes' importance > 2).(I) Venn diagram showing the overlap between the three algorithms. Figure 7 . Figure 7. 
Verification of expression and diagnostic efficacy for optimal feature genes.(A) MAFB and CX3CR1 mRNA expression in the training group.(B) ROC curves of MAFB and CX3CR1 in the training group.(C) MAFB and CX3CR1 mRNA expression in the testing group.(D) ROC curves of MAFB and CX3CR1 in the testing group.(E) Correlation analysis between MAFB and CX3CR1.(F) Heatmap showing the scores of KEGG pathways in the optimal feature genes as calculated by GSVA.(G) Heatmap showing the correlation between the gene pathway and optimal feature genes.*P < 0.05, **P < 0.01, ***P < 0.001. Figure 8 .Figure 9 . Figure 8.Immune cell infiltration analysis.(A) Heat map of the 22 immune cell subpopulations comparing NAFLD and normal samples.(B) Violin diagram illustrating the proportion of 22 different kinds of immune cells in NAFLD versus normal samples.(C) Heat map showing the correlation between 22 different kinds of immune cells and potential feature genes.The size of the colored squares indicates the connection's strength; red indicates a positive correlation, while blue indicates a negative correlation.(D) Correlation between immune cells and optimal feature genes.(E) Correlation between MAFB and infiltrating immune cells.(F) Correlation between CX3CR1 and infiltrating immune cells.Correlation strength is proportional to the size of the dots.The color of the dots indicates the P-value.*P < 0.05, **P < 0.01, ***P < 0.001, ns, no significant difference. Figure 10 . Figure 10.Validation of optimal feature genes in mouse module.(A) Fresh Livers and liver-to-body weight ratio in control and MCD mice.(B) The serum ALT and AST levels on control and MCD mice.(C) HE staining and Oil Red O staining of liver sections from mice fed on control or MCD-diet.(D) The relative expressions of MAFB and CX3CR1 were validated by RT-qPCR.(E) MAFB and CX3CR1 expression in liver tissues of control and MCD mice was detected by IHC.(F) MAFB and CX3CR1 expression in liver tissues of control and MCD mice was detected by WB. ***P < 0.001, ****P < 0.0001. Total RNA was isolated from mouse liver tissue and cDNA was synthesized using HiScript II Reverse Transcriptase (Vazyme, Nanjing, China).Real-time quantitative polymerase chain reaction (RT-qPCR) analysis was performed using SYBR Green mixture (Vazyme Biotech, Q711).We set actin as the reference gene for each sample.All primers were purchased from Sangon Biotech (Shanghai, China).The primers are given in Supplementary Table1.For western blot (WB) RT-qPCR, WB, and IHC
2023-12-23T16:15:25.899Z
2023-12-21T00:00:00.000
{ "year": 2023, "sha1": "0d54d6679eacd338619fd09faef7049d1f295e28", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "28257f147625dc7296ec374f7cc6476d55d09f52", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [] }
150898057
pes2o/s2orc
v3-fos-license
Improving Functioning, Quality of Life, and Well-being in Patients With Bipolar Disorder Abstract People with bipolar disorder frequently experience persistent residual symptoms, problems in psychosocial functioning, cognitive impairment, and poor quality of life. In the last decade, the treatment target in clinical and research settings has focused not only on clinical remission, but also on functional recovery and, more lately, in personal recovery, taking into account patients’ well-being and quality of life. Hence, the trend in psychiatry and psychology is to treat bipolar disorder in an integrative and holistic manner. This literature review offers an overview regarding psychosocial functioning in bipolar disorder. First, a brief summary is provided regarding the definition of psychosocial functioning and the tools to measure it. Then, the most reported variables influencing the functional outcome in patients with bipolar disorder are listed. Thereafter, we include a section discussing therapies with proven efficacy at enhancing functional outcomes. Other possible therapies that could be useful to prevent functional decline and improve functioning are presented in another section. Finally, in the last part of this review, different interventions directed to improve patients’ well-being, quality of life, and personal recovery are briefly described. Introduction Bipolar disorder (BD) is a recurrent and chronic disorder characterized by fluctuations in mood state and energy that affects around 2.4% of the global population (Merikangas et al., 2011). As a lifelong and recurrent illness, BD is associated with functional decline, cognitive impairment, and a reduction in quality of life (QoL) (Martínez-Arán et al., 2004;Michalak et al., 2005;Bonnín et al., 2012). Given the complexity of this illness and its consequences, researchers and clinicians are not only focused on clinical remission but also functional recovery and, more lately, well-being too . This emergent paradigm includes not only symptom recovery but also return to normal functioning and attainment of a meaningful life. In fact, in 1988, Dion and colleagues already pointed out that factors other than symptoms were related to functioning of patients with BD and that treatment should target symptom amelioration as well as reduce a patient's disability (Dion et al., 1988). It is known that even after the first manic episode, only 1 out of 3 patients regains psychosocial functioning at 1 year follow-up , suggesting that functional outcomes in BD are undoubtedly impaired from the very beginning and should become a priority in therapeutic interventions. In the last decade, many efforts have been made to improve functioning and well-being in BD; hence, this review aims at providing a brief overview of both issues. First, the definition and how to measure functioning is discussed. Then, a brief review of the variables influencing psychosocial functioning is performed. The following sections present some treatments that have proven to be effective at enhancing functional outcomes and other promising treatments that might also be useful at targeting functional impairment and prevent functional decline. Finally, a brief overview of therapies directed to improve well-being and QoL is also presented. Definition of Psychosocial Functioning and How to Measure It Despite the importance of psychosocial functioning in BD there is not a clear consensus regarding its definition. 
In the Task Force for the International Society for Bipolar Disorders conducted by Tohen and colleagues in 2009, different definitions of psychosocial functioning were examined but without reaching a consensus. The experts highlighted the definition provided by the International Classification of Functioning, Disability and Health (ICF), in which functioning comprises 3 different components: body structures and functions; activities and participation; and personal and environmental factors. Moreover, the authors of these guidelines underlined that this construct was complex to measure and that, besides the ICF, the Functioning Assessment Short Test (FAST) scale (Rosa et al., 2007) might also constitute a good approach to measure functioning (Tohen et al., 2009). Before these guidelines, there were other attempts to define psychosocial functioning. For instance, in 2000, Zarate and colleagues suggested that the assessment of psychosocial functioning should involve different behavioral domains such as the individuals' ability to function socially or occupationally, to live independently, and to engage in a romantic life, with functional recovery typically being defined as the restoration of normal role functioning in the domains under scrutiny. This definition represented a breakthrough in the field because at that time, psychosocial functioning was measured by means of the Global Assessment of Functioning (GAF) scale, endorsed by several consecutive editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM). The GAF provides a single score without differentiating between the behavioral domains pointed out by Zarate and colleagues. Despite this, the GAF is still the most commonly used clinician rating scale to measure disability, at least in the United States (Von Korff et al., 2011). In 2007, Rosa and colleagues developed a tool to measure functioning, the already mentioned FAST scale. It was specifically created to measure the most common difficulties experienced by patients with BD. The rationale behind this scale is in line with the definition of functioning proposed by Zarate and colleagues in 2000, mostly focused on the assessment of different behavioral domains. More specifically, the FAST targets the following areas: autonomy, occupational functioning, cognitive functioning, financial issues, interpersonal functioning, and leisure time. In this regard, the FAST represented several advantages over the GAF, mainly that it assesses different behavioral domains, it does not rate the symptomatology, and it is specific for BD. Currently, the DSM-5 no longer encourages the use of the GAF. Instead, the use of the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0) (Üstün et al., 2010) is recommended. The WHODAS 2.0 allows the assessment of functioning and disability irrespective of diagnosis; that is, it can reflect difficulties due to any medical or psychiatric illness. In contrast, both the GAF and the FAST are limited to the impact of the psychiatric disease on functioning, excluding the medical or environmental limitations. The GAF, FAST, WHODAS 2.0, or ICF core sets specific for BD (Ayuso-Mateos et al., 2013) are clinical tools, either rater-administered (GAF, FAST, ICF core sets) or self-administered (WHODAS 2.0), but other approaches exist.
For instance, the UCSD Performance-based skills Assessment (UPSA) (Patterson et al., 2001) is based on task performance and measures functional capacity, assessing the skills involved in community tasks such as comprehension and planning, finance, communication, mobility, and house management. Figure 1 represents an overview of some different scales available to measure functioning in BD during the last 40 years, starting in 1980, when the GAF was first endorsed by the DSM-III until the present. The scales presented in Figure 1 are just a little part of the big picture of the measurement of psychosocial functioning in BD. Nevertheless, it fairly represents the great variability that exists. It is likely that the way the researcher or clinician defines psychosocial functioning will determine the tool to measure it, but the reverse is true as well: the use of one tool or another implies how the concept of psychosocial functioning is understood. To overcome this bias, it would be ideal that psychosocial functioning could be measured taking into account 3 different perspectives: (1) a subjective view using a self-administered scale, such as the Sheehan Disability Scale for BD (SDS) (Arbuckle et al., 2009) or the WHODAS 2.0; (2) a semi-objective scale, using the FAST, GAF, or LIFE-RIFT (Leon et al., 1999), which are interviewer rated based on patients' answers; and finally (3) an objective scale, like the UPSA, which is performance based and measures functional capacity. Combining these 3 different approaches might help to disentangle all the variables associated with functional impairment observed in BD. Variables Influencing Functional Outcome in BD Many variables have been associated with functional outcome in BD, including demographic, clinical, and neurocognitive factors. The brief summary presented below includes findings reported in some studies that use different scales, including the GAF, FAST, The Multidimensional Scale of Independent Functioning , and SDS among others. As mentioned above, there is a great variability not only in the assessment of functioning but also in the variables reported to influencing it. Despite this, the next paragraphs are useful to reveal the magnitude of the complex construct that researchers and clinicians are trying to predict. Concerning the sociodemographic factors, it seems that male patients (Tohen et al., 1990;Sanchez-Moreno et al., 2018) as well as older patients (Sanchez-Moreno et al., 2018) show poorer functional outcomes. On the other hand, being married could represent a protective factor against functional impairment (Kupfer et al., 2002;Wingo et al., 2010). Higher socioeconomic status, based on education and employment, has also been associated with better functional outcomes (Keck et al., 1998;Wingo et al., 2010). Finally, regarding neurocognitive variables, verbal memory has been found to be a good predictor of functional outcome in several studies (Martinez-Aran et al., 2007;Bonnín et al., 2010, Torres et al., 2011Jiménez-López et al., 2018), However, variables related to other neurocognitive areas have also been reported, including executive functions, processing speed, and attention Mur et al., 2009;Wingo et al., 2010). It might be hypothesized that the neurocognitive variables influencing functional outcome in BD may vary depending on illness progression. 
For instance, patients in early stages of the disease seem to present a more selective profile of cognitive impairment, with some domains capable of improving 1 year after the first manic episode, including improvements in processing speed and executive functions. In this line, at least 2 studies have found that first-episode patients who did not relapse during 1-year follow-up could improve their neurocognitive functioning (Demmo et al., 2018); hence, preserving neurocognition from the very beginning of the illness might guarantee better functional outcomes. Pharmacological Interventions Research on pharmacological and nonpharmacological treatments to restore functioning in BD is still immature. As previously mentioned, the link between functional outcomes and neurocognition is well recognized, which is why in recent years many efforts have been made to improve cognition, including both pharmacological and psychological treatments. In fact, new trends in pharmacological treatments include focusing on restoring cognitive functioning rather than psychosocial functioning. Among the most promising medical treatments to improve cognition in BD are mifepristone (Watson et al., 2012), lurasidone, and erythropoietin (Miskowiak et al., 2014). Given the link between neurocognition and psychosocial functioning, it is likely that the efforts directed to improve neurocognition will also improve functional outcome; however, so far, no studies on pharmacological treatments have addressed both issues at the same time. It is worth mentioning that the methodological recommendations for cognition trials by the Cognition Task Force from the International Society for Bipolar Disorders encourage the inclusion of a functional measure as a key secondary outcome. In this regard, a tool to measure functional improvement that allows researchers and clinicians to classify patients into different categories of functional performance could be useful to assess the efficacy of these treatments (Bonnín et al., 2018a). Psychological Therapies In contrast to the area of pharmacological treatments, in the field of psychological interventions several efforts have been made lately to design therapies to restore psychosocial functioning in BD. The first attempt was an open trial using a program named Cognitive Rehabilitation (Deckersbach et al., 2010). The authors included a total of 18 patients with subsyndromal depressive symptoms, and after 14 sessions of cognitive rehabilitation, patients improved cognitive performance and functional outcome. More interestingly, the findings showed that changes in executive function accounted, in part, for the improvements in occupational functioning. The first randomized controlled trial (RCT) implementing a similar therapy was conducted in 2013 by Torrent and colleagues. The efficacy of functional remediation (FR) was proved in terms of improving functional outcomes in euthymic patients with moderate to severe functional impairment at baseline. [Figure 1 (timeline of functioning measures): FAST scale (Rosa et al., 2007); ICF core sets for BD; validation of the MSIF for BD; validation of the SDS for BD (Arbuckle et al., 2009); present: a measurement of functioning combining three perspectives is recommended, namely a subjective assessment (self-administered scale), a semi-objective assessment (interviewer-rated scales), and an objective assessment (performance-based tools).] Moreover, improvement in psychosocial functioning was maintained after 6 months' follow-up. However, the impact of the intervention was low in terms of cognition.
Contrary to others therapies labeled as "cognitive remediation," FR is specially centered on functional recovery, focusing on the training of neurocognitive skills that are useful for daily functioning. Hence, this approach might be suitable especially for patients in late stages of the illness and who present moderate to severe functional impairment. Another preliminary study conducted in the Netherlands included 12 patients and replicated the positive results in functional outcome after receiving a shorter FR program (Zyto et al., 2016). However, not all the interventions targeting cognitive rehabilitation were found to improve functional outcome. For instance, another RCT conducted by Demant and colleagues (2015) found no improvement on either cognition or functional outcome after a 12-week intervention. It is worth mentioning that these negative results might be explained by some methodological limitations of the trial, including the length of the intervention (too short) or the fact that patients were subsyndromic at study enrolment. Another study leaded by Lewandowski and colleagues (2017) assessed the efficacy of an internet-based cognitive remediation program in patients with BD compared with an active control group both in neurocognition and community functioning. After treatment, patients who received the internet-based program improved cognitive performance in processing speed, visual learning and memory domains, and the composite score. These results were maintained over 6 months after finishing the intervention; however, the intervention was not associated with change in community functioning, although cognitive change was associated with functional change across the sample. There are other ongoing trials targeting cognition including action-based cognitive remediation programs in which computerized training is combined with practical in-session activities and cognitively challenging tasks between sessions. This novel approach may have greater effect at enhancing functional outcomes than traditional cognitive remediation programs Ott et al., 2018). It is difficult to measure the power of these current approaches in changing functioning, since very few studies have used psychosocial functioning as a primary outcome. In this regard, meta-analyses providing these data are urgently needed. How to Prevent Functional Decline: Promising Therapies So far, there is no strong evidence regarding the prevention of functional decline in BD. The following section includes some targets and treatments that could address this issue and deserve to be further explored. Addressing Subthreshold Depressive Symptoms A considerable portion of the patients with BD (more than 50%) experience inter-episode residual depressive symptoms (De Dios et al., 2010;Gitlin et al., 1995), preventing them from living to the fullest. In this regard, subthreshold depressive symptoms together with neurocognitive impairment might be one of the strongest predictors of functional outcome (Bonnín et al., 2010(Bonnín et al., , 2012(Bonnín et al., , 2014Reinares et al., 2013;Martinez-Aran and Vieta, 2015;Samalin et al., 2017). However, the relationship between functional outcome and subthreshold depressive symptoms might not be linear and unidirectional; instead, they seem to influence one another (Gitlin and Miklowitz, 2017;Weinstock and Miller, 2008). 
Besides the implications in functional outcome, residual depressive symptoms are also a major cause of relapse (Vieta and Garriga, 2016;Radua et al., 2017), consequently affecting psychosocial functioning and QoL (Bonnín et al., 2012;Xiang et al., 2014). The treatment of residual depressive symptoms during euthymia is an unmet need, but fortunately, clinical research has begun to investigate how to tackle them. One recent RCT proved that adjunctive extended-release quetiapine at a dose of 300 mg daily was significantly more effective than placebo in the treatment of subthreshold depressive symptoms (Garriga et al., 2017), but no significant improvement was detected in functional outcome. One possible explanation is that the sample size was not powered enough to detect significant changes in this secondary outcome. Regarding psychological interventions, a limited number of therapies have addressed subthreshold depressive symptoms as a primary outcome. To the best of our knowledge, only one pilot RCT study assessed the effect of Eye Movement Desensitization and Reprocessing therapy on this type of symptomatology. Specifically, patients in the treatment group showed a statistically significant improvement in depressive and hypomanic symptoms when compared with treatment as usual at 12-month follow-up; however, psychosocial functioning was not assessed (Novo et al., 2014). Another multicenter study of Eye Movement Desensitization and Reprocessing with a bigger sample is underway with the objective to reduce symptoms and relapses and improve psychosocial functioning (Moreno-Alcázar et al., 2017). Regarding FR, secondary analyses showed that patients with subsyndromal symptoms could also improve psychosocial functioning after the therapy (Sanchez-Moreno et al., 2017). Other therapies include an approach testing the long-term efficacy of an intervention that combined cognitive behavior therapy (CBT) and psychoeducation, which has also been described to be effective in terms of symptoms and socialoccupational functioning improvement (González- Isasi et al., 2014). Positive results in social functioning were also found with CBT (Lam et al., 2003). Inder and colleagues (2015) randomized a group of patients with BD to Interpersonal and Social Rhythm Therapy or specialist supportive care, and both groups improved in depressive/manic symptoms and social functioning. Finally, an intensive psychotherapy (family-focused treatment [FFT], Interpersonal and Social Rhythm Therapy, or CBT) in patients with BD during an acute depressive episode also showed beneficial functional outcomes (Miklowitz et al., 2007a). Finally, positive results have also been reported on anxious and depressive symptoms using mindfulness-based cognitive therapy (Williams et al., 2008;Ives-Delipery et al., 2013;Perich et al., 2013). Although more research is needed, it might be hypothesized that treating subthreshold depressive symptoms could be an indirect pathway to improve psychosocial functioning. Enhancing Cognitive Reserve Cognitive reserve (CR) is the capacity of the adult brain to endure neuropathology, minimizing clinical manifestations and allowing a successful accomplishment of cognitive tasks (Stern, 2009). Genetics determine, to some extent, CR; however, environmental factors such as an active lifestyle, education, and brain stimulation (mental activities) can also influence it. In BD the most common ways to measure CR include years of education, premorbid Intelligence Quotient, and leisure activities. 
So far, no interventions have tested whether improving CR enhances functioning, but some studies suggest that CR is a good predictor of both cognitive and psychosocial outcome in euthymic patients with BD (Anaya et al., 2012;Forcada et al., 2015). Further, it could also play an important role in patients with first psychotic episode since CR has shown to predict psychosocial functioning 2 years after the first episode (Amoretti et al., 2016). Hence, given the role of CR both in chronic patients and at early stages, this might constitute an area to explore and enhance to prevent functional decline (Vieta, 2015). In this regard, there is another ongoing trial by Torrent and colleagues (NCT03722082) that aims to enhance CR in child, adolescent, and young adult offspring of patients diagnosed with schizophrenia or BD; however, so far, no preliminary results are available. Diet and Physical Exercise Nutrition and physical exercise play a critical role in both the mental and physical health of patients with BD. Physical inactivity and poor diet habits can contribute to obesity, diabetes, hypertension, and dyslipidemia, which, in turn, increase the risk for cardiovascular disease (Soreca et al., 2008). At any rate, these risk factors should be targeted since it has been shown that obesity can also impact cognitive functioning (Mora et al., 2017), and in turn, cognitive impairment could be a predictor of weight gain (Bond et al., 2017). Hence, it seems that weight increase and cognitive impairment can influence one another. Moreover, another study has found that increased body mass index (BMI) was associated with a more chronic course of the disease, longer duration of illness, and lower psychosocial functioning (Calkin et al., 2009). In line with this, Bond and colleagues (2010) found that those patients who suffered a clinically significant weight gain (defined as gaining ≥7% of baseline weight over 12 months) had significantly poorer functional outcomes at 12-month follow-up, and, interestingly, functional impairment was independent from current mood symptoms. Poor dietary habits and a sedentary lifestyle can increase physical and psychiatric morbidity, worsen psychosocial and cognitive functioning, and predict a poor pharmacological response. That is why clinicians treating individuals with BD face a dual challenge of treating not only patients' brains but also their bodies. Interventions targeting healthy habits (including nutrition and exercise) are expected to benefit patients with BD. One RCT examined the effects of a 20-week CBT intervention (NEW tx) for BD consisting of 3 modules: nutrition, exercise, and wellness (Sylvia et al., 2013); patients who underwent the treatment showed improvements in nutritional habits, exercise, depressive symptoms, and overall functioning. Hence, this study provides preliminary evidence that improving nutrition and promoting an active lifestyle is associated with functional improvement and mood symptoms in patients with BD. Another previous study showed the efficacy of an intervention on healthy lifestyle, nutrition, and physical exercise on muscle mass index, particularly in women (Gillhoff et al., 2010). These lifestyle interventions are promising since they demonstrate that people with BD can engage and be successful in these types of therapies. 
Therapeutic mechanisms of action are still unknown but might include different pathways, for example, by reducing morbidity (i.e., depressive symptoms), which in turn would improve functional outcome (Ernst et al., 2006), or by enhancing treatment effects, including the synergistic effects of exercise in combination with other treatments. For instance, in schizophrenia there is some preliminary evidence suggesting that cognitive remediation efficacy can be enhanced by aerobic exercise-induced BDNF upregulation (Nuechterlein et al., 2016;Campos et al., 2017). Multicomponent Programs One advantage of this type of intervention is to tackle different areas to be improved at the same time, hence, allowing a holistic treatment of patients, taking into account not only education on the illness but also how to improve healthy lifestyles and functional outcomes. Following the premise that no single psychosocial intervention might be sufficient to address the morbidity, the functional impairment and the consequences associated with severe mental illnesses (Kern et al., 2009), multicomponent programs, and care packages are being developed for patients with BD. An example of this kind of treatment that has proven to be effective in BD is the Integrated Risk Reduction Intervention developed by Frank and colleagues (2015). More specifically, this program consists of 17 sessions grouped in different modules, including psychoeducation, training to improve sleep/wake patterns and social rhythm regularity, nutrition, physical activity, and healthy habits (smoking cessation). Results from this study showed that patients who followed the intervention significantly reduce their BMI. Moreover, 3 variables (C-reactive protein, total cholesterol, and instability of total sleep time) contributed to a combined moderator of faster decrease in BMI with Integrated Risk Reduction Intervention treatment. Recently, the Bipolar Disorder and Depression Unit in Barcelona has developed an integrative approach consisting of therapeutic components of broader programs that the Barcelona Bipolar Disorders Program had previously developed and whose effectiveness had been proven separately, such as psychoeducation for patients (Colom et al., 2003), psychoeducation for family members (Reinares et al., 2008), and FR . In addition, an important emphasis is given to the promotion of a healthy lifestyle, and a module focused on mindfulness-based cognitive therapy has also been included. Therefore, some contents of psychoeducation for patients have been combined with a session for family members and complemented with aspects related to health promotion, mindfulness training, and strategies for cognitive and functional enhancement, always as adjunctive to pharmacological treatment. This integrative approach combines the main components of different treatments to cover broader therapeutic objectives, to improve the prognosis of the disease in both clinical and functional aspects, as well as the well-being and QoL of those who suffer from BD (Reinares, Martínez-Arán and Vieta, in press). Due to the characteristics of the intervention (12 sessions of 90 minutes each), in case it shows its efficacy, it could be easily implemented in routine clinical care. Personal Recovery: Well-being and QoL Subjective assessments and patient-reported outcomes are gaining ground in the field of BD (Morton et al., 2017;Bonnín et al., 2018b). 
As in psychosocial functioning, the problem with subjective measures is the variability in the definitions and in the instruments to assess the subjective experience of these patients (Morton et al., 2017). It is common that terms such as QoL, well-being or life satisfaction are used as synonyms and interchangeable terms (Morton et al., 2017). Moreover, the current lack of consensus between these construct definitions add uncertainty and complication to select an appropriate instrument to measure this dimension. Despite all, the subjective experience should always be taken into account since it can also impact on the course of the illness. Some studies indicate that the improvement in well-being provides a protective effect against recurrence (Keyes et al., 2010), and it has also been found that low levels in QoL are associated with an increase in oxidative stress (Nunes et al., 2018). For this reason, it is important to evaluate not only objective outcomes (symptoms and functioning) but also to assess patients' subjective experience, since they can provide valuable information and might be an essential part to ensure better outcomes in BD. Rajagopalan et al. (2016) tested the effects of lurasidone as monotherapy or as adjunctive to lithium/valproate on healthrelated QoL (HRQoL). They found that patients in both conditions increased HRQoL. However, this improvement was not independent of changes in depression, indicating that the effect of lurasidone on improving patient HRQoL may act through a reduction in depressive symptoms associated with BD. Similarly, Gonda and colleagues (2016) found that patients enhanced both their work functional outcome and QoL after receiving prophylactic lamotrigine therapy at 6-months follow-up. In young patients (10-17 years old) with an acute episode of bipolar depression, it was found that those who received olanzapine/ fluoxetine combination presented better QoL scores compared with those receiving placebo (Walker et al., 2017). Psychological Interventions Even though physical activity is not a psychological intervention itself, it is well-known for increasing well-being and QoL; however, the impact of this kind of interventions has been less studied in the field of BD. Vancampfort and colleagues (2017) proved the effect of 150 min/wk of physical activity on physical, psychological, social, and environmental QoL; those patients who did not meet the established minimum (150 minutes) showed lower QoL outcomes. Involving the family, O'Donnell and colleagues (2017) tested the effect of 2 psychological interventions on QoL scores in a sample of adolescents with BD. They compared the efficacy of a FFTplus pharmacotherapy vs brief psychoeducation plus pharmacotherapy on self-related QoL over 2 years. They found the 2 groups did not differ in overall QoL scores at 24 months follow-up. However, adolescents who received the FFT had greater improvements in quality of family relationships and physical well-being compared with the brief psychoeducation program. Besides, internet-based approaches using smartphones are gaining traction (Lauder et al., 2015;Hidalgo-Mazzei et al., 2018), representing a useful and attractive tool especially for the young population with BD (Bauer et al., 2018). So far, some preliminary studies using a mobile application (SIMPLe) have reported an improvement of biological rhythms (Hidalgo-Mazzei et al., 2017) and increased QoL and well-being (Hidalgo-Mazzei et al., 2018). 
There is much room for improvement in the field of subjective well-being and QoL. These above-mentioned interventions may shed some light regarding the path to follow. Nevertheless, it is important to keep in mind that those patients who suffer from more depressive symptoms, irritability, and psychiatric comorbid conditions present lower QoL and functional outcomes (IsHak et al., 2012;Sylvia et al., 2017); hence, all the strategies directed to reduce medical and psychiatric burdens might also be useful to increase patients' well-being and QoL. It is also worth mentioning that some authors defend that QoL depends not only on clinical remission but also relies on functional recovery . In this line, poor QoL is also associated with poor occupational outcome, reduced academic attainment (Marwaha et al., 2013), and difficulties in activities of daily life (Träger et al., 2017). Future studies should include subjective measures (such as QoL, well-being) to better understand the relationship with these clinical variables. Figure 2 represents a brief summary of the therapies and strategies that have been presented in this review. Conclusions Because the construct of psychosocial functioning is complex and difficult to measure, it is therefore recommended to assess it based on the combination of 3 different approaches: (1) a subjective assessment that involves a self-administered measure (SDS, WHODAS 2.0, etc.), (2) a semi-objective measure including an interviewer-rated assessment (FAST, LIFE-RIFT, GAF, etc.), and (3) an objective assessment based on performance-based measurements (i.e., UPSA). Taking into account these different approaches might help to better disentangle the variables associated with the functional outcomes in BD, which are often heterogeneous and influenced by demographic, clinical, and neurocognitive factors. Regardless of the great variability in the assessment of psychosocial functioning, many efforts have successfully improved functional outcomes in BD. But where are we now? At the present moment, the interventions that have proven to be effective at enhancing functioning and/or QoL include lurasidone, lamotrigine, FR, some programs of cognitive remediation, ISPRT, FFT, and NEW tx, among others. These therapies have set the stage for developing further interventions to prevent functional decline and ensure well-being, because this is where we go. Ideally, future therapies should focus not only on restoring functional outcomes but also preventing functional decline and enhancing QoL and well-being. In this regard, those programs that target cognitive enhancement and promote healthy lifestyles (including healthy nutrition patterns and physical activity) are urgently needed, since they constitute a preventive tool for cognitive and functional decline. Although more studies are still needed, multicomponent therapies might be also a good option since they include different approaches to cover several areas at a time (symptoms, functioning, cognition, well-being, etc.). Finally, it is likely that the future will also include personalized treatments focusing on tailored interventions that may differ from one patient to another (Salagre et al., 2018); in this sense, the type and duration of interventions might differ from patients recently diagnosed and patients with a complex course of the illness who might take advantage of restorative therapies such as cognitive and FR .
2019-05-13T13:05:56.450Z
2019-04-19T00:00:00.000
{ "year": 2019, "sha1": "aeee11a5662cec7c0cd3a3dbf078343e8acd38bf", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/ijnp/article-pdf/22/8/467/29027815/pyz018.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aeee11a5662cec7c0cd3a3dbf078343e8acd38bf", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233440403
pes2o/s2orc
v3-fos-license
Autonomous Tracking by an Adaptable Scaled KCF Algorithm A multicopter can be equipped with a passive tracking device to follow a specified target; however, if the target is not a controlled (cooperating) one, passive tracking fails. We propose a vision-based tracking system for multicopters that uses computer-vision methods to track an arbitrary target without additional tracking devices. In this study, we propose scale candidate graphs and a scale table to improve KCF so that it remains stable when the target scale changes. In the proposed adaptable scaled KCF algorithm, when KCF tracking fails, a feature-based matching detector is used to re-detect the target. Several experiments on various scenes were conducted and evaluated, and stable tracking results were obtained, showing the feasibility of the proposed system. I. INTRODUCTION Aerial photography is one of the most popular applications of multicopters [1]. In general, the flight path of a multicopter is controlled by a remote controller, with the operator manually steering the multicopter to follow a moving target; alternatively, the multicopter can be equipped with a passive tracking device to follow a specified target. To track targets conveniently, a multicopter is often equipped with such a passive tracking device. However, if we want to track a non-controlled target, the passive tracking device fails. We therefore propose a vision-based tracking system for multicopters that uses computer-vision methods to track any target without additional tracking devices. The proposed system is divided into three parts: generating reaction graphs, feature extraction, and target matching. To perform tracking, we must first be able to capture the target object in the image sequence. For generating reaction graphs, Zhang et al. [2] used a sliding window with a radius centered at the position tracked in the previous frame. Kalal et al. [3] proposed a single-target long-term tracking algorithm, Tracking-Learning-Detection (TLD), in which, after the target is selected in the first frame, a sliding window is used at the beginning of each frame. In addition to generating candidate regions by exhaustive search, one can exploit the fact that the target does not move much between consecutive frames and use the Lucas-Kanade optical flow [4], the Kalman filter [5], or the Particle Filter algorithm to predict the target location in the next frame as the candidate area. Feature tracking captures various kinds of information about the target in an image as feature information. Comaniciu and Meer [6] proposed a tracking method using mean shift, which computes the color distribution in the target region as the target feature model and then uses this color feature model to match each possible target region. Background subtraction [7], [8] usually requires background-model training; the trained background model is used as a feature to detect foreground regions in subsequent inputs. Kim et al. [7] proposed the CB algorithm, which adopts a quantization/clustering technique, classifies pixels as background using color distortion and brightness distortion, and establishes a multimode background model. This method can encode moving or repeatedly changing backgrounds and copes with local and global illumination changes. Image features are important for the comparison of candidate images and target images.
General image features are Features from accelerated segment test (FAST) [9], Harris Corner [10], Binary Robust Independent Elementary Features (BRIEF) [11], Scale invariant feature transform (SIFT) [12], Speeded-Up Robust Features (SURF) [13] and color histogram, the quality of the feature extraction affects the entire tracking result. Cheng et al. [14] proposed a tracking method combining Particle Swarm optimization (PSO) [15] and SIFT, using PSO algorithm to find candidate regions, and then integrating SIFT features into PSO results to obtain more accurate tracking results. Miao et al. [16] proposed an adaptive classifier tracking method to match the feature points between successive frames, using SURF as a feature point, with an adaptive online boosting [17] classifier, and a sample weighting mechanism to establish robust feature descriptions and reliable feature point matching for tracking. Leichter et al. [18] proposed an extension of the mean shift tracking, used a color distribution map obtained from multiple different frames to enhance the mean shift tracking. Make a convex hull on the color map and use it in the target model, this maintains the original convergence and speed of the mean shift tracking and can be successfully tracked when the target color changes dramatically. Zhang et al. [2] proposed a real-time tracking algorithm based on compressed sensing. In order to achieve scale invariance, each sample convolved with a multi-scale rectangular filter to obtain high-dimensional multi-scale features, but high-dimensional multi-scale features can cause excessive computation, while sparse perceptual theory can compress images and preserve original features. Therefore, using sparse random matrices to reduce multi-scale image features, this can speed up feature capture and achieve real-time tracking. Kalal et al. [19] proposed a forward-backward error method, use optical flow to find the possible position of the target in each frame. The features used by Kalal et al. [3] for TLD are similar to the local binary pattern (LBP) [20] and use the trained classifier to find out the probability that this feature may be the target. Bolme et al. [21] proposed the minimum output sum of squared error (MOSSE), using the correlation filter (CF) template as the target feature to measure the similarity for each input image and template image, the highest similarity is the target. After the feature captured, the target founded from a plurality of candidate regions. In this step, the classifier often used to calculate the probability of the target in each candidate region, as Support Vector Machine (SVM) [22] and AdaBoost [23]. Lu et al. [24] used PSO to replace the traditional sliding window search for tracking, and put the RGB values of the corresponding window of the particle as features to AdaBoost to calculate the probability that the particle belongs to the target. Integrating all particles whose probability exceeds the threshold is the target. Zhu et al. [25] proposed a feature matching method. Use the extracted Harris corner to find the affine transformation of the image between frames, then use SVM to check and remove the unmatched feature points. Babenko et al. [26] combine online boosting with multiple instance learning (MIL) to train target models. In this paper, we use the Kernel Correlation Filter (KCF) proposed by Henriques et al. [27] for target tracking, but KCF cannot adapt to target scale changes, so we propose scale candidate graphs and scale tables to improve KCF. 
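As an illustrative aside on the feature detection and matching surveyed above, the sketch below detects FAST corners in two consecutive frames and matches them with binary descriptors in OpenCV. The paper pairs FAST with SURF; since SURF lives in the non-free contrib module, ORB descriptors are used here as a freely available stand-in, and the image paths are placeholders.

```python
# Hypothetical sketch: FAST keypoints described with ORB and matched between
# two frames, in the spirit of the feature-based matching surveyed above.
import cv2

def match_frames(img_a, img_b, max_matches=50):
    """Detect FAST corners in two grayscale frames, describe them with ORB,
    and return the best descriptor matches sorted by distance."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    orb = cv2.ORB_create()

    kp_a = fast.detect(img_a, None)
    kp_b = fast.detect(img_b, None)
    kp_a, des_a = orb.compute(img_a, kp_a)   # describe the FAST corners
    kp_b, des_b = orb.compute(img_b, kp_b)

    # Brute-force Hamming matching for binary descriptors; cross-checking
    # keeps only mutually best matches, a simple stand-in for match filtering.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches[:max_matches]

if __name__ == "__main__":
    a = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
    b = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
    kp_a, kp_b, matches = match_frames(a, b)
    print(f"kept {len(matches)} matches")
```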
The tracker also produces stable results when the scale changes. In the proposed adaptable scaled KCF algorithm, when KCF tracking fails, a feature-based matching detector is then used to re-detect the target. This paper is structured as follows: the details of the proposed techniques are presented in Section II, experimental results are included in Section III, followed by conclusions in Section IV. II. PROPOSED TECHNIQUES The tracking module of the adaptable scaled KCF algorithm is described in detail below. A. Correlation Filter The Minimum Output Sum of Squared Error filter (MOSSE) applies correlation filtering to target tracking, as shown in Fig. 1, by evaluating the correlation of two signals: if the two signals are similar, the correlation is high. Tracking requires a filter template on which the target generates the maximum response, and the position of the maximum response value is the target position. At tracking initialization, a filter template that maximizes the target's response must be generated, which can be written as g = f ⊗ h, (1) where g is the reaction graph, h is the filter template, and f is the input image. The reaction graph is designed to have a Gaussian shape, as shown in Fig. 2; given the reaction graph and the input image, the template must be solved for. The correlation filter can be computed at high speed because the Fast Fourier Transform (FFT) is used in the operation, F(g) = F(f) ⊙ F(h)*. (2) Let F(g) = G, F(f) = F, and F(h)* = H*; the template can then be described as H* = G ⊘ F, (3) where ⊙ and ⊘ denote element-wise multiplication and division. To make the template more robust, multiple training images are used, so that H* = (Σi Gi ⊙ Fi*) ⊘ (Σi Fi ⊙ Fi*). (4) Solving equation (4) gives the template H*. After the template is obtained, tracking can start: the input image is transformed with the FFT and correlated with the template to obtain the reaction graph, G = F ⊙ H*, (5) and the Inverse Fast Fourier Transform (IFFT) is then used to find the peak position. The position of the peak of the reaction graph is the position of the target. To keep the tracking results good, the template must be updated as the target changes; if the template remained the same forever, tracking would fail once the target changed beyond a certain extent. The update method is ht = η h + (1 − η) ht−1, (6) where ht is the filter template at time t, ht−1 is the filter template at time t − 1, h is the template computed from the current frame, and η is an empirical constant. B. Generate Candidate Graphs The original KCF does not have the ability to adapt to target scale changes: the target cannot be correctly selected when the scale changes, so either background is included or the entire target is not covered, as shown in Fig. 3. Therefore, we propose scale candidate graphs and scale tables to address this problem. Because the template size of the KCF algorithm cannot be changed, we instead change the size of the input image to approximate the template. The principle is analogous to a camera shooting objects at different distances: the photosensitive element of the camera (our template) cannot be resized, so the focal length of the lens is adjusted for different distances so that the object just fills the photosensitive element; the lens zoom corresponds to our scale table, as shown in Fig. 4. Using the scale table, as shown in Fig. 5, candidate graphs are generated at seven different scales of the input image, as shown in Fig. 6.
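As an aside, the correlation-filter formulation in Eqs. (1)-(6) above can be sketched in a few lines of NumPy: train H* so that the template's response to the target is a Gaussian peak, locate the peak of the reaction graph, and blend the template over time. This is a minimal sketch of the MOSSE-style filter only; preprocessing steps of the full tracker (cosine windowing, log transform, affine jitter of the training crops) are omitted, and the parameter values are illustrative.

```python
# Minimal NumPy sketch of the MOSSE-style correlation filter of Eqs. (1)-(6).
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired reaction graph g: a Gaussian peak at the patch centre (Eq. 1)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def train_filter(patches, g, eps=1e-5):
    """Eq. (4): H* = sum_i G o conj(F_i) / sum_i F_i o conj(F_i), elementwise."""
    G = np.fft.fft2(g)
    num = np.zeros_like(G)
    den = np.full_like(G, eps)             # small eps avoids division by zero
    for f in patches:                      # several crops of the target
        F = np.fft.fft2(f)
        num += G * np.conj(F)
        den += F * np.conj(F)
    return num / den                       # H*

def respond(H_conj, patch):
    """Eq. (5): reaction graph g = IFFT(F o H*); its peak locates the target."""
    F = np.fft.fft2(patch)
    g = np.real(np.fft.ifft2(F * H_conj))
    dy, dx = np.unravel_index(np.argmax(g), g.shape)
    return g, (dy, dx)

def update_filter(H_prev, H_new, eta=0.125):
    """Eq. (6): h_t = eta*h + (1-eta)*h_{t-1}, applied in the Fourier domain."""
    return eta * H_new + (1.0 - eta) * H_prev
```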
Let the target size in the input image be close to the template size, and then perform KCF tracking on the seven candidate images. Among the input images at different scales, the one in which the target size is closest to the template size yields the highest similarity, as shown in Fig. 7; therefore, the tracking result with the highest similarity is taken as the target. The candidate graph and the template are correlated to obtain a reaction graph, and the template correlation of the reaction graph is calculated with equation (7). The template correlation is the similarity between the candidate graph and the template and is calculated as Tcorrelation = (gmax − ḡ) / σg, (7) a peak-to-sidelobe-style measure of the reaction graph. Seven template correlations are obtained, and the highest value is taken as the best template correlation. If the optimal template correlation is higher than the threshold, the tracking is successful, and the peak position of the reaction graph corresponding to that correlation is the target position; if the optimal template correlation is lower than the threshold, the tracking fails. The criterion for the tracking result is therefore success if Tcorrelation > Cthreshold, and failure otherwise. (8) In addition, a target set is used to store the last 20 high-correlation target images, as shown in Fig. 8. When the peak of the reaction graph is higher than the update threshold, the tracking result is placed into the target set to replace the oldest target, and the scale table is updated; the target update criterion is gmax > β, (9) where Tcorrelation is the template correlation, α is an empirical constant of the template correlation, gmax is the peak of the reaction graph, ḡ is its average value, σg is its standard deviation, Cthreshold is the template correlation threshold, and β is the target update threshold. C. Re-detection of Target Disappearance Our feature point detection uses FAST, and the feature description uses SURF. D. Feature Matching Feature matching first needs to detect the target with FAST, as shown in Fig. 9, as well as the feature points of the input image, as shown in Fig. 10. Then, the SURF descriptor of each feature point is calculated, and an array of matching points is generated, as shown in Fig. 11, using the Euclidean distance between SURF descriptors for matching. To make the matching result more stable, the matching points must be filtered, as shown in Fig. 12. E. Target Position and Scale Re-detection The target displacement and scale are calculated using the filtered matching points: the mode of the displacement vectors between matching points is taken as the displacement vector of the target, and the scale at time t is estimated from the ratio of the summed distances between the feature points at time t to those at time t − 1, st = st−1 · (Dt / Dt−1), (10) where Dt−1 is the sum of the distances between the feature points at time t − 1, Dt is the sum of the distances between the feature points at time t, st is the target scale at time t, and st−1 is the target scale at time t − 1. Finally, the KCF scale and displacement are updated, as shown in Fig. 13. III. EXPERIMENTAL RESULTS In this section, we show and compare experimental results of our proposed algorithm, detectable KCF (dKCF), against the original KCF, Tracking-Learning-Detection (TLD), and Compressive Tracking (CT) methods, examining whether each can adapt to scaling and occlusion: when the target size changes, whether each method can accurately frame the target, and when the target is obscured, whether each method can retrieve the target again. In the scaling scene, our algorithm can adjust the selection and frame the target correctly when the target size changes, as shown in Fig. 14.
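Referring back to the scale-candidate search and the template-correlation criterion of Eqs. (7)-(8) above, a rough sketch of that search is given below. It reuses respond() from the earlier MOSSE sketch, scores each rescaled candidate with a PSR-style correlation, and accepts the best scale only if it clears a threshold; the scale factors and threshold values are illustrative, not the paper's actual scale table.

```python
# Sketch of the scale-candidate search with a PSR-style template correlation.
import numpy as np
import cv2

SCALE_FACTORS = (0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3)   # seven candidates

def psr(g):
    """Peak-to-sidelobe-style template correlation of a reaction graph (Eq. 7)."""
    return (g.max() - g.mean()) / (g.std() + 1e-5)

def crop(frame, center, size):
    """Clamped crop of a roughly (size x size) window around center=(cy, cx)."""
    cy, cx = center
    half = size // 2
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    return frame[y0:y0 + size, x0:x0 + size]

def search_scales(frame, center, H_conj, template_hw, base_size,
                  c_threshold=8.0):
    """Correlate seven rescaled candidates with the fixed-size template and
    keep the best one; tracking fails if no candidate clears the threshold."""
    th, tw = template_hw
    best_score, best_size, best_peak = -np.inf, base_size, (0, 0)
    for s in SCALE_FACTORS:
        win = int(round(base_size * s))
        patch = crop(frame, center, win).astype(np.float32)
        patch = cv2.resize(patch, (tw, th))        # match the template size
        g, peak = respond(H_conj, patch)           # from the MOSSE sketch
        score = psr(g)
        if score > best_score:
            best_score, best_size, best_peak = score, win, peak
    success = best_score > c_threshold             # criterion of Eq. (8)
    return success, best_size, best_peak, best_score
```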
In the occlusion scene, when the target disappears, our algorithm can re-detect and continue tracking the target rather than failing, as shown in Fig. 15. In Fig. 14, when the target size changes, our proposed algorithm can accurately frame the target because we use the scale candidate graphs to generate input images at different sizes, so that the target size in the input image can be close to the template size and the similarity is highest; the input image at the best scale is then taken as the result. The target frame size of the CT and KCF algorithms always remains the same, so the target cannot be framed accurately: either the frame box is too large and background is included, or the frame box is too small and the target is not fully covered. If the target cannot be accurately framed (too much background information, or only part of the target image information), tracking eventually fails. In target tracking, the target may also be obscured or disappear. To solve this problem, we use the target set to record target images; when the target is obscured or disappears, feature matching is used to find whether there is an image similar to the target in the input image, and the most similar image is taken as the target. Of the other three algorithms (KCF, CT, and TLD), only TLD has the ability to handle the occlusion problem; the KCF and CT algorithms cannot retrieve the target after it is occluded. The experimental results can be divided into two cases: the target is occluded, and the target is scaled. There are a total of four test videos, each with 500 frames. The backgrounds of the videos include highways, general roads, trees, grasslands, shadow changes, etc. Tracking success is judged by manually selecting the target position in each frame of the video, selecting the target with the algorithm, and comparing the manual and algorithmic selections. To evaluate the ability of the different algorithms, tracking success is counted as positive and tracking failure as negative, and tracking success is based on area(gt ∩ bb) / area(gt) > 0.7 and area(gt ∩ bb) / area(bb) > 0.5, (13) where gt is the manually selected target per frame and bb is the target selected by the algorithm per frame; that is, if the manually selected position overlaps the algorithm-selected position by more than 70 percent and the overlap area accounts for more than 50 percent of the algorithm's selected area, the target is considered successfully tracked. The performance of a tracking method can be assessed by estimating the following parameters: (i) True Positive (TP) rate, (ii) False Negative (FN) rate, (iii) False Positive (FP) rate, and (iv) True Negative (TN) rate. From these, the accuracy rate, tracking rate, and false positive rate can be defined. The experimental assessment uses four videos, each 500 frames long; the accuracy rate, tracking rate, and false positive rate of the TLD, CT, KCF, and dKCF algorithms are shown in Table I, Table II, and Table III. The average accuracy rate is 92.75 percent, the average tracking rate is 93.25 percent, and the average false positive rate is 13.5 percent for our proposed algorithm. In addition to the tracking status evaluation, we also compare the overlap accuracy of the selected target; the overlap accuracy of each frame is calculated as overlap = area(gt ∩ bb) / area(gt ∪ bb), (14) where gt is the manually selected target per frame and bb is the target selected by the algorithm per frame.
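The re-detection update of Section II-E (mode of the match displacements plus the scale ratio of Eq. 10) and the box-overlap criteria of Eqs. (13)-(14) can both be sketched with plain NumPy, as below. Matched points are assumed to be two arrays of (x, y) coordinates in corresponding order, and boxes are (x, y, w, h); the bin size and thresholds are illustrative.

```python
# Sketch of the re-detection update (Eq. 10) and overlap criteria (Eqs. 13-14).
import numpy as np

def displacement_and_scale(pts_prev, pts_cur, prev_scale, bin_size=2.0):
    """Target shift = mode of the per-match displacement vectors; new scale =
    previous scale times the ratio of summed inter-point distances (Eq. 10)."""
    d = pts_cur - pts_prev
    bins = np.round(d / bin_size).astype(int)           # coarse 2-D binning
    uniq, counts = np.unique(bins, axis=0, return_counts=True)
    shift = uniq[np.argmax(counts)] * bin_size           # mode of displacements

    def spread(pts):                                     # summed pairwise distances
        diff = pts[:, None, :] - pts[None, :, :]
        return np.sqrt((diff ** 2).sum(-1)).sum()

    scale = prev_scale * spread(pts_cur) / max(spread(pts_prev), 1e-6)
    return shift, scale

def box_intersection(gt, bb):
    """Intersection area of two (x, y, w, h) boxes."""
    iw = max(0.0, min(gt[0] + gt[2], bb[0] + bb[2]) - max(gt[0], bb[0]))
    ih = max(0.0, min(gt[1] + gt[3], bb[1] + bb[3]) - max(gt[1], bb[1]))
    return iw * ih

def frame_success(gt, bb):
    """Eq. (13)-style check: most of the manual box is covered and the
    algorithm box is not dominated by background."""
    inter = box_intersection(gt, bb)
    return inter / (gt[2] * gt[3]) > 0.7 and inter / (bb[2] * bb[3]) > 0.5

def overlap_accuracy(gt, bb):
    """Eq. (14)-style per-frame overlap (intersection over union), zeroed when
    the 70 percent coverage condition is not met."""
    inter = box_intersection(gt, bb)
    union = gt[2] * gt[3] + bb[2] * bb[3] - inter
    iou = inter / union if union > 0 else 0.0
    return iou if inter / (gt[2] * gt[3]) > 0.7 else 0.0
```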
The overlap accuracy rate evaluates whether the algorithm can accurately frame the target by computing the areas selected by each algorithm and the manually selected area per frame. If the manual selection and the algorithm selection do not overlap, or the overlap area is less than 70 percent, the overlap accuracy is zero percent; conversely, if there is overlap and the area is higher than 70 percent, the overlap accuracy is the overlap ratio itself. Finally, the average overlap accuracy per frame is taken as the result; the average overlap accuracy of our proposed algorithm is 87.25 percent, as shown in Table IV. IV. CONCLUSION In this paper, we have proposed the dKCF algorithm, which gives multicopters the ability to track non-specific targets through computer-vision tracking. Because KCF is a template-based tracking method, there is no need to train on the target in advance to achieve the goal of tracking non-specific targets. In the tracking part, the scale candidate graphs improve KCF so that it can adapt to changes in target size and obtain correct target image information. In the re-detection part, FAST corner points are found on the input image, the SURF feature of each corner point is calculated, matching corner points are found as matching points, and the matching points are used to recover the target position and scale, successfully retrieving the target. In the matching-point filtering part, the mode of the displacements of the matching points is used as the matching criterion; such a standard is simpler and faster. In the experiments, the proposed algorithm runs at 26 fps, the tracking rate in the scaled cases reaches 88%, the tracking rate in the occluded cases reaches 98%, and the overlap rate reaches 87%. As a trade-off between speed and accuracy, we sacrifice some execution speed of the algorithm in exchange for more robust tracking. In the future, color could be added as an auxiliary feature, or a more robust target model could be created to compensate for the limitations of template updates and increase tracking effectiveness. CONFLICT OF INTEREST The authors declare no conflict of interest. AUTHOR CONTRIBUTIONS Din-Chang Tseng was in charge of overall direction and supervised the project; Chien-Hung Chen and Yi-Ming Chen carried out the experiments and conducted the research; Chien-Hung Chen wrote the paper; all authors discussed the results, contributed to the final manuscript, and approved the final version.
2021-04-28T12:45:34.396Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "84a6da1e997c7d12db9c006744f3ccb7bd38b104", "oa_license": "CCBY", "oa_url": "http://www.ijmlc.org/vol11/1013-CE2002.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "84a6da1e997c7d12db9c006744f3ccb7bd38b104", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
53525003
pes2o/s2orc
v3-fos-license
Outcomes of Including Fracture Level in Short-Segment Fixation for Thoracolumbar Fracture Dislocation Study Design This was a prospective study of 50 patients of thoracolumbar fracture dislocation treated at a single institution with short-segment fixation with the inclusion of fracture level. Purpose To assess the outcomes of including the fracture level in short-segment fixation for thoracolumbar fracture dislocation. Overview of Literature Traditionally, thoracolumbar fracture dislocation is treated with long-segment posterior fixation. However, to save motion segments, short-segment fixation has been used instead in many cases of thoracolumbar trauma. Methods In this study, 50 patients with thoracolumbar fracture dislocation were treated with short-segment fixation with inclusion of the fracture level; patients with pathological fractures or with a McCormack load-sharing score >6 were excluded. The 50 patients were prospectively followed for at least 1 year. The duration of surgery, blood loss, and complications were noted. The Visual Analog Scale (VAS) score was used to measure pain, and the American Spinal Injury Association (ASIA) scale was used to determine the neurological status at follow-up. Preoperative, immediate postoperative, and final follow-up X-rays were used to measure the kyphotic angle using Cobb’s method. Results The mean age of our patients was 33.4 years, and the male:female ratio was 1.9:1. The mean follow-up period was 18.4 months (range, 12–23 months). Injuries were mainly at the thoracolumbar junction area (T11–L2, 41 cases, 82%). The average duration of surgery was 94.6 minutes, and the average blood loss was 394.8 mL. Postoperative infection occurred in two cases and implant failure in one case. The kyphosis angle values were as follows: average preoperative, 26.80°±14.50°; immediate postoperative, 4.30°±8.70°; and final follow-up, 5.50°±110°. The ASIA scale and VAS score at final follow-up showed improvement. Conclusions Inclusion of the fracture level in short-segment fixation for thoracolumbar fracture dislocation (McCormack load-sharing score ≤6) gives good kyphosis correction and correction maintenance. It can also obviate the need for traditional long-segment fixation. Introduction Thoracolumbar fracture dislocation due to high-energy trauma is a major cause of disability in adults. The traditional treatment is long-segment posterior transpedicular fixation. However, to save motion segments, short-segment fixation has been used instead in many cases of thoracolumbar trauma [1][2][3][4]. A few studies have reported the treatment of thoracolumbar fracture dislocation with short-segment fixation, but none have discussed the inclusion of fracture level for fixation [5][6][7][8][9]. Other studies have shown that biomechanical stability of a construct increases by inserting screws at the fracture level [10]. Hence, we conducted this study to analyze the outcomes of including the fracture level in short-segment fixation for thoracolumbar fracture dislocation. Materials and Methods Approval from the ethical committee of Civil Hospital, Ahmedabad, India was taken for conducting the study (IRB approval no is 14-03-214). Between April 2014 and March 2016, we prospectively followed 50 patients with thoracolumbar fracture dislocation treated with shortsegment instrumentation and fusion with inclusion of fracture vertebra; patients with pathological fractures or a McCormack load-sharing score >6 were excluded. 
X-rays and magnetic resonance imaging were used for fracture evaluation and to measure the preoperative kyphotic angle using Cobb's method. Surgery was performed by a single senior spine surgeon, and the same instrumentation was used in all cases. Decompression was performed when indicated. Fusion was performed using locally harvested autologous bone grafts after meticulous creation of a fusion bed. All patients began rehabilitation from day 2 postoperatively and were provided braces for 3 months. The patients were followed for at least 1 year. The duration of surgery, blood loss, and complications were noted. The American Spinal Injury Association (ASIA) scale was used to record the neurological status of each patient at follow-up. The Visual Analog Scale (VAS) score was used to measure pain. Immediate postoperative and final follow-up X-rays were used to again measure the kyphotic angle using Cobb's method. During follow-up, additional X-rays were obtained, and investigations were performed in case of any complications. Results The mean age of the 50 patients was 33.4 years (range, 18-68 years), and the male:female ratio was 1.9:1. In the majority of cases (33 cases, 66%), the mode of injury was a fall from a height, whereas in the remaining cases (17 cases, 34%), it was a road traffic accident. With regard to neurological status, 34 patients (68%) were ASIA A, nine (18%) were ASIA B, six (12%) were ASIA C, and one (2%) was ASIA D. Injuries were mainly at the thoracolumbar junction area (T11-L2, 41 cases, 82%), and the remaining injuries (nine cases, 18%) were elsewhere in the thoracolumbar spine (Fig. 1). The average duration of surgery was 94.6 minutes, and the average blood loss was 394.8 mL. Postoperative infection occurred in two cases, and implant failure occurred in one case. Discussion Thoracolumbar fractures are the most common injuries of the spine; they constitute more than 50% of all traumatic spine cases [11]. However, fracture dislocations are rare, constituting <3% of cases [12]. Thoracolumbar fracture dislocations are a cause of significant morbidity and mortality in patients. As per the Thoracolumbar Injury Classification and Severity Score [13], injury to the posterior ligament complex is highly unstable, and the associated neurological injury makes surgery inevitable. The traditional treatment is long-segment fixation. However, this procedure causes loss of motion segments. Therefore, to save motion segments, short-segment fixation was tried, but it was associated with a high complication rate. Yu et al. [14] treated thoracolumbar fracture dislocations with short-segment fixation and had a complication rate of 15% for loss of reduction, 20% for implant failure, 10% for pseudoarthrosis, and 25% for poor initial postoperative reduction. Comparatively, our study had fewer complications (Table 1). We had one case of implant failure that was treated with extension of fixation, one case of superficial infection that was treated with antibiotics and dressings, and one case of deep infection that was treated with debridement and antibiotics. Eventually, all three cases achieved fusion. In this study, we used locally harvested bone grafts for fusion and consistently achieved good results; there was no requirement for other autografts, allografts, or bone graft substitutes.
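The kyphotic angles reported above were measured with Cobb's method, i.e., the angle between the superior endplate of the upper end vertebra and the inferior endplate of the lower end vertebra on a lateral radiograph. As an illustrative aside (not part of the study's workflow), that measurement reduces to the angle between two digitised endplate lines; the landmark coordinates in the sketch below are hypothetical.

```python
# Illustrative sketch: Cobb angle from two digitised endplate lines, each given
# by two landmark points (x, y) in image pixels on a lateral radiograph.
import math

def line_angle(p1, p2):
    """Angle of the line through p1 -> p2, in degrees."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def cobb_angle(upper_endplate, lower_endplate):
    """Absolute angle between the superior endplate of the upper end vertebra
    and the inferior endplate of the lower end vertebra."""
    a = abs(line_angle(*upper_endplate) - line_angle(*lower_endplate)) % 180.0
    return min(a, 180.0 - a)

# Hypothetical landmark coordinates:
upper = ((102.0, 240.0), (180.0, 228.0))
lower = ((110.0, 402.0), (186.0, 430.0))
print(f"Cobb angle ~ {cobb_angle(upper, lower):.1f} degrees")
```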
There were no cases of poor initial reduction or loss of reduction (Fig. 4). The better kyphosis correction and maintenance that was achieved may be because of the stronger three-point support provided by the inclusion of the fracture vertebra in the construct. Studies by Guven et al. [15] and Mahar et al. [10] have also concluded that fracture-level screw insertion increases stability, helps in fracture reduction and kyphosis correction, and decreases the chances of correction failure, especially with short-segment constructs. We included only patients with a McCormack load-sharing score of ≤6 because they needed no anterior reconstruction. Dobran et al. [16] have shown that including the fracture level in short-segment fixation to treat thoracolumbar junction fractures results in kyphosis correction and maintenance similar to that of long-segment fixation, with similar neurological outcomes. Besides the obvious advantages of shorter incisions, less dissection, fewer screws, lower cost, and a lower chance of infection, short-segment fixation saves motion segments. Our study has certain limitations. First, the number of patients was small, and they all were from the same institution. Second, computed tomography scans were not performed to check the fusion status because of the nonavailability of appointments at our high-volume institution. We plan to improve upon these shortcomings in future studies. Conclusions The results of our study suggest that including the fracture level in short-segment fixation for thoracolumbar fracture dislocation (with a McCormack load-sharing score ≤6) achieves good kyphosis correction and correction maintenance. It also obviates the need for long-segment fixation.
2018-11-01T20:38:12.718Z
2018-10-18T00:00:00.000
{ "year": 2018, "sha1": "894f8dee2d5afe5572731c414baded570e5fcfca", "oa_license": "CCBYNC", "oa_url": "https://www.asianspinejournal.org/upload/pdf/asj-2018-0064.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "894f8dee2d5afe5572731c414baded570e5fcfca", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
204836742
pes2o/s2orc
v3-fos-license
Pregabalin: Potential for Addiction and a Possible Glutamatergic Mechanism Drug addiction remains a prevalent and fatal disease worldwide that carries significant social and economic impacts. Recent reports suggest that illicit pregabalin (Lyrica) use may be increasing among youth; however, the addictive potential of pregabalin has not been well established. Drug seeking behavior and chronic drug use are associated with deficits in glutamate clearance and activation of postsynaptic glutamatergic receptors. In the current study, we investigated the abuse potential of pregabalin using the conditioned place preference (CPP) paradigm. Different doses of pregabalin (30, 60, 90, and 120 mg/kg) were used to assess seeking behavior in mice. Glutamate homeostasis is maintained by glutamate transporter type-1 (GLT-1), which plays a vital role in clearing released glutamate from synapses and in drug seeking behavior. Therefore, we investigated the role of glutamate in pregabalin-seeking behavior with ceftriaxone (CEF), a potent GLT-1 upregulator. Mice treated with pregabalin at the 60 and 90 mg/kg doses demonstrated drug seeking-like behavior, which was significantly blocked by CEF pretreatment. These results suggest that pregabalin-induced CPP was successfully modulated by CEF, which could serve as a lead compound for developing a treatment for pregabalin abuse. Drugs. Pregabalin was generously provided by Jamjoom Pharmaceuticals (Jeddah, KSA). Ceftriaxone was gifted by King Abdulaziz Hospital at Taif. All the drugs used in this study were reconstituted in sterile saline solution (0.9% NaCl). Experimental design. The overall experimental design is presented in Fig. 1A. Experiment 1: Mice were randomly assigned into five groups. Group 1: Control group (n = 8), mice in this group were treated with vehicle for 8 days. Group 2: Preg-30 group (n = 8), mice were injected with pregabalin (30 mg/kg, i.p. x 4) and vehicle for eight days during the acquisition phase. Group 3: Preg-60 group (n = 8), mice were injected with pregabalin (60 mg/kg, i.p. x 4) and vehicle for eight days during the acquisition phase. Group 4: Preg-90 group (n = 8), mice were injected with pregabalin (90 mg/kg, i.p. x 4) and vehicle for eight days during the acquisition phase. Group 5: Preg-120 group (n = 8), mice were injected with pregabalin (120 mg/kg, i.p. x 4) and vehicle for eight days during the acquisition phase. Mice were then examined for place preference after completing the conditioning training. Experiment 2: Mice were randomly assigned into four groups. Group 1: C-C group (n = 8), mice in this group were treated with vehicle in the home cage as a 30-minute pretreatment and vehicle for eight days during the acquisition phase. Group 2: CEF-C group (n = 8), mice were injected with CEF (200 mg/kg, i.p.) in the home cage as a 30-minute pretreatment and then vehicle for eight days during the acquisition phase. Group 3: C-Preg group (n = 8), mice were treated with vehicle in the home cage as a 30-minute pretreatment and then pregabalin (60 mg/kg, i.p. x 4) as well as vehicle for eight days during the acquisition phase; the dose of pregabalin that induced CPP in the previous experiment was used in this experiment. Group 4: CEF-Preg group (n = 8), mice were injected with CEF (200 mg/kg, i.p.) in the home cage as a 30-minute pretreatment and then pregabalin (60 mg/kg, i.p. x 4) as well as vehicle for eight days during the acquisition phase.
Mice were then examined for place preference after completing the conditioning training. Conditioned place preference paradigm. A custom-made acrylic CPP apparatus (Fig. 1B) was used in this study as described in our previous work 46. Briefly, this apparatus consists of two equal-sized conditioning chambers (35 cm × 35 cm × 50 cm) and one start box (10 cm × 15 cm × 10 cm) located outside of the CPP apparatus. The two conditioning chambers are distinguished by both tactile and visual cues. The interior walls of the first chamber are white in color with horizontal black stripes and textured walls (chamber 1). The interior walls of the other chamber (chamber 2) are black in color with vertical white stripes and smooth walls. The floor of chamber 1 is perforated with round holes. The floor of the other chamber is perforated with rectangular holes. Habituation phase: The preconditioning day is considered as day one. On days one, two, and three, each mouse was placed in the start box with the door closed for 3 minutes. Then, the door was opened to let the mouse explore the conditioning chambers for 30 minutes. On day three, the animal's exploration of both conditioning chambers was recorded by a digital camera fixed on the top of the apparatus. The time spent in both chambers was calculated using ANY-maze software (Stoelting, USA). Conditioning phase: An unbiased CPP design was used. Therefore, in each treatment group, half of the animals were randomly assigned to receive pregabalin and were placed in chamber 1, while the other half received this drug and were placed in chamber 2 during the conditioning phase (days four to eleven). Each mouse received an intraperitoneal injection of the specified treatment and was then placed in the corresponding chamber with the door closed for a 30-minute session. On the following day, each mouse received vehicle and was placed in the other chamber with the door closed for 30 minutes. The process was repeated until the completion of the eight conditioning sessions. On day twelve, each mouse had free access to both chambers for 30 minutes. The time spent by the mice in both chambers was documented using a digital camera (post-conditioning test) and counted by the ANY-maze video tracking system. Statistical analysis. A two-way repeated-measures ANOVA (Phase × Treatment) was used to analyze time spent, at two different timepoints (pre-test and post-test), in the conditioning chambers in response to the selected dose of pregabalin or saline. This analysis was selected based on previously published work [46][47][48][49]. When significant main interactions or effects were found, Newman-Keuls multiple comparisons were performed. All data were statistically analyzed by GraphPad Prism, using a 0.05 level of significance. Discussion In the present study, we demonstrate for the first time, using a mouse model of drug addiction, that pregabalin can induce CPP. These findings are in contrast to previous reports; however, in those studies the maximum tested dose of pregabalin was 30 mg/kg, which did not induce rewarding effects and did not change place preference 15,16. Consistent with the previous findings of Andrews et al., the dose of 30 mg/kg did not change place preference in our study. However, when the dose was increased to 60 mg/kg, a significant place preference was induced by pregabalin. This suggests that the rewarding effects of pregabalin are dose dependent. 
Interestingly, this effect is supported by previous controlled clinical studies [17][18][19][20] showing that pregabalin can cause euphoric effects as a side effect in participants of these studies. The drug-seeking effects found for several drugs of abuse have been consistently reported to be mediated by a glutamatergic mechanism. GLT-1 is an astrocyte-specific excitatory amino acid transporter responsible for glutamate homeostasis in the brain 37. It has previously been demonstrated that downregulation of GLT-1 expression in the NAc was associated with continuous exposure to addicting drugs [50][51][52]. Interestingly, GLT-1 expression was found to be downregulated instantly in a cocaine self-administration model 53. Of note, glutamatergic transmission is amplified as a result of an increase in glutamate concentrations and a decrease in glutamate uptake in the synapses 54. Additionally, it has been observed that glutamate receptors such as mGlu-5 and N-Methyl-D-aspartate could be potentiated and activated by the spillover of glutamate, which enhances drug-seeking behavior 54. In the present study, our results suggest that pregabalin at higher doses (60 and 90 mg/kg) may induce addiction partly by downregulating GLT-1 expression and thereby decreasing glutamate uptake at the synaptic cleft. Treatment with CEF has been reported to prevent drug-seeking behavior caused, in part, by decreased GLT-1 expression in methamphetamine, cocaine, ethanol, nicotine, and heroin dependence 26,[41][42][43][44][45], with the drug seeking associated with glutamate spillover secondary to GLT-1 downregulation 25,[55][56][57]. Additionally, the normalization of GLT-1 expression by CEF treatment was associated with a decrease in drug-seeking behavior 58,59. Therefore, pregabalin seeking at the addictive doses of 60 and 90 mg/kg might be mediated by altered GLT-1 expression, as the drug-seeking effects of pregabalin were eliminated by CEF pretreatment in the present study. CEF has been demonstrated to have neuroprotective efficacy in many neurological disorders 60,61 and can offer neuroprotective effects in drug addiction associated with glutamate excitotoxicity 60,62,63. CEF can freely pass the blood brain barrier and enter the central nervous system to up-regulate GLT-1, making it an attractive potential therapeutic for future clinical use in antagonizing pregabalin-induced drug-seeking behavior 60,[62][63][64]. One limitation of the present study is that we did not demonstrate a mechanism of action for CEF in the antagonism of pregabalin-induced drug-seeking behavior. GLT-1 plays a central role in inflammatory mechanisms in the brain, which have previously been demonstrated to be associated with drug addiction [65][66][67]. It has been previously reported that central administration of some neurotoxicants causes significant impairment in motor functions, increased neuroinflammation and increased drug addiction [68][69][70]. Post-treatment with CEF (200 mg/kg) significantly antagonized motor impairment, attenuated lipid peroxidation, restored the endogenous antioxidant enzymes glutathione peroxidase and catalase, and decreased drug addiction 71,72. Taken together, CEF-mediated antagonism of pregabalin-induced drug seeking-like effects could promote restoration of glutamate homeostasis, and in this way modulate drug-seeking behavior. CEF could serve as a lead compound for developing treatment for pregabalin abuse, although other pharmacological effects of this antibiotic cannot be excluded. 
Future studies are needed to investigate a mechanistic role for neuroinflammation in pregabalin abuse, as well as sex- and age-related mechanisms in pregabalin-induced neurochemical changes.
2019-10-23T15:24:36.126Z
2019-10-22T00:00:00.000
{ "year": 2019, "sha1": "8fd9e90c49c9b43b8dd81f052958cf11e8e06820", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-51556-4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8fd9e90c49c9b43b8dd81f052958cf11e8e06820", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5073251
pes2o/s2orc
v3-fos-license
Neural Sentence Location Prediction for Summarization A competitive baseline in sentence-level extractive summarization of news articles is the Lead-3 heuristic, where only the first 3 sentences are extracted. The success of this method is due to the tendency for writers to implement progressive elaboration in their work by writing the most important content at the beginning. In this paper, we introduce the Lead-like Recognizer (LeadR) to show how the Lead heuristic can be extended to summarize multi-section documents where it would not usually work well. This is done by introducing a neural model which produces a probability distribution over positions for sentences, so that we can locate sentences with introduction-like qualities. To evaluate the performance of our model, we use the task of summarizing multi-section documents. LeadR outperforms several baselines on this task, including a simple extension of the Lead heuristic designed for the task. Our work suggests that predicted position is a strong feature to use when extracting summaries. INTRODUCTION Summarization is an important problem in natural language processing. With information being generated and written about at an increasing pace, more reading is required to keep up. Faced with the problem of no longer being able to read in entirety everything that may be interesting, summarization offers a potential solution for many applications. The goal of informative summaries is to allow users to read a small amount of text while being exposed to the main ideas in a document. Previous approaches cover the spectrum of techniques from expert knowledge [9], to simple word frequency based approaches [19], to supervised and unsupervised machine learning approaches [4,5,23,25,31], and even reinforcement learning [29]. These approaches employ an extractive or abstractive strategy. Extractive approaches aim to summarize using extracted spans of text, and abstractive approaches aim to accomplish this by generating novel phrases. Forming useful extractive summaries has proven to be the easier option, but an abstractive summary has the ability to be more concise, by combining the information in multiple sentences or by identifying important concepts not explicitly stated in a document. Excellent overviews of the wide array of summarization algorithms are provided by Nenkova et al. in [26,27] and Mani et al. in [20]. More recently, deep learning has been explored to perform automatic summarization. In [5] for example, a document is encoded with a convolutional neural network, then decoded with a recurrent neural network (RNN), where the inclusion of a sentence in the summary depends on the degree to which the decoder attends to that sentence. [4] uses a fully recurrent encoder-decoder framework, and the authors use it to directly generate abstractive summaries. 
To encourage topic coverage of summaries, they focus on adding "distraction" to the encoder and decoder instead of improving local attention mechanisms. Similar to [5], where sentences are classified for summary inclusion, are the approaches taken by [23] and [25], where sentences are sequentially read by an RNN and directly classified for summary inclusion. [31] describes an abstractive model aiming to solve common problems of abstractive summarizers, including generating incorrect or inaccurate details and not handling out-of-vocabulary words. This is done with an RNN capable of producing novel text as well as explicitly copying text from the source document. An abstractive model which also uses an RNN and encoder-decoder framework with attention is [29]. Using a novel attention mechanism and by training with both reinforcement and supervised learning, their model achieved state-of-the-art ROUGE scores on multiple datasets. Despite the complexity and expressive power of models used in recent approaches, position-based baselines have proven difficult to beat by more than a small margin on the popular CNN/Daily Mail news article dataset. The competitive performance of the Lead heuristic is due to the progressive elaboration present in the writing, so that important and general information is already found at the beginning. While most of the deep-learning-based summarizers use the CNN/Daily Mail data, partly due to the large volume of the dataset, it is obvious that not all documents are written by trained reporters or contain a single important story. Similarly, not all documents will have the most important information at the very beginning. An increasingly common typifying example is the 'listicle' - a list-like article containing many equally important sections. [Figure 1 caption: White is low probability, red is high probability, and position quantiles are ordered from left to right. The position model considers sentences 2-4, 11, and 12 to be more characteristic of introductory sentences than the actual first two sentences.] Many documents may also contain a catching but low-information story or adage at the very beginning. In such articles, applying the Lead heuristic may provide a summary indicating what type of information is present, but is unlikely to result in an informative summary of the total document. In documents where the boundaries between sections are not available or are vague, applying Lead to each of the sections individually is also not easily done. In this paper, we aim to further explore neural-network-based summarization to explicitly consider the guidance of position. In particular, the main contributions of this paper are as follows: (1) We create a neural model which produces a probability distribution over positions in the document for a given sentence. (2) We introduce a sentence embedding which can combine pretrained word embeddings to retain word order information while using an embedding dimension independent of sentence length. (3) We introduce the task of summarizing documents with multiple sections. (4) We propose the LeadR summarizer which can be applied to multi-section documents by locating sentences with introduction-like properties. (5) We compare LeadR to several baseline algorithms on the task of summarizing multi-section documents to 10% of their original length. On documents with more than one main topic, LeadR consistently outperforms all models tested against by a full point on ROUGE-2, and about 0.5 points on ROUGE-1 and -L scores. 
The remainder of the paper is organized as follows. Section 2 reviews related work on applying heuristics to summarization and the task of predicting sentence position in a document. Section 3 introduces our summarization algorithm, its primary components, and describes how they are combined to extract summaries. In Section 4 we discuss how the dataset was prepared, how our model was trained and optimized, and compare our summarizer to several baselines. This section also includes interesting observations made using the sentence position model. Section 5 concludes the paper and proposes directions for future work. RELATED WORK 2.1 Predicting sentence position Sentence position has been applied to understanding document structure and coherence. In [18], Logeswaran et al. approach two tasks related to sentence order. The first task is that of determining whether a given sequence of sentences is coherent (i.e. in the correct order) or not. In the second, more difficult problem, they take an unordered set of sentences from an abstract and reorder them. To solve these tasks, they use an end-to-end set-to-sequence RNN framework proposed by [35] and reorder sentences by iteratively choosing among all sentences the one best fit to follow the one last added. This work is in contrast to the more computationally efficient approach we will take where we predict the position information of all sentences in a document at once. True sentence position is often provided as a feature for sentence level extractive summarizers, but to our knowledge, no previous work has been done on explicitly using predicted sentence position in summary extraction. Phrase embedding Many options exist for creating sentence representations suitable for machine learning. Representing sentences as a bag of words (BoW) is a simple and common approach, but removes word order information of sentences, which is often important for understanding the meaning of sentences. Combining distributed word embeddings [1] is an approach which has been shown to work well for many applications. In [14], the authors explore different methods for combining word embeddings for the purpose of measuring semantic similarity between sentences. They compare combining word embeddings by either using a recursive auto-encoder [33] or simply adding the vectors together, and find that adding vectors consistently performs better. In addition to looking at how word embeddings are combined, they consider choices in the semantic similarity function. They find that a measure based on Euclidean distance performs better than using cosine similarity. For representing short texts such as tweets, [7] considers learning to weight the words when adding their embeddings, and shows that it outperforms other combination methods such as taking the mean or max of the word embeddings. Moving beyond addition, [22] considers combinations of multiplying and adding word embeddings to produce a phrase embedding. Evaluating on a sentence semantic similarity task, they show that multiplying, or a combination of multiplication and addition, outperforms the common addition approach. LEADR SUMMARIZER MODEL Our summarization model is inspired by the competitive performance of the Lead-3 heuristic on news articles. 
The reasoning behind the LeadR summarizer is as follows: We know that the Lead-3 heuristic works well for certain single-section documents, so if we can locate sentences in a multi-section document similar to those extracted by Lead in a single-section document, we may be able to form a good summary. To be able to handle very long documents, the summarizer is implemented in a pipeline where each step can be efficiently executed. In Section 3.2 we mention additional benefits to using this pipeline. At a high level, the LeadR summarizer executes the following steps: (1) First, we generate embeddings for each sentence in the document. Refer to Section 3.1. (2) We feed these embeddings into a position model to obtain a probability distribution over positions for each sentence. Refer to Section 3.2. (3) Next, we compare a window of target distributions over the sequence of sentence position distributions to obtain an intro score for each sentence. Refer to Section 3.3. (4) Finally, we iteratively construct a summary by choosing sentences with high intro scores and small overlaps with previously chosen sentences. Refer to Section 3.4. Sentence embedding In this paper, we develop a flexible sentence embedding technique which combines benefits of multiple methods. The new method has the property of producing fixed-dimension embeddings, similar to BoW on a fixed vocabulary or word embedding addition. The method also retains word order information, a property enjoyed by recurrent models. The intuition behind our embedding is that we are squeezing (or stretching) word embeddings of dimension d_f to fill a matrix of width R and height d_f. The word embeddings are blended together across the width to make the embedding less sensitive to exact word ordering and to allow every word to have an influence on the combined embedding even if R < |S|. Figure 2 provides a visualization of the embedding technique. Given the speed of using pretrained word embeddings and the excellent cross-task performance recently reported in [13], we choose to build upon these state-of-the-art fastText word embeddings. The word embeddings are created with a cbow model architecture described in [21], with the addition of a few tricks. These embeddings were shown to form good sentence representations when averaged together. As the first step in producing sentence embeddings, we apply a random sparse projection to the fastText word embeddings to reduce the dimension from 300 to d_f = 75. This value was chosen to balance performance with increased model training and summary extraction speed and reduced memory usage. In addition to the new sentence embedding we define in this section, our model requires the ability to generate simple fastText embeddings for words, sentences (lists of words), and documents (lists of sentences). We use the notation ftxt(token) to represent converting a textual token to its dimensionally reduced fastText embedding. Given a sentence S from a document D consisting of words [w_1, ..., w_|S|], we combine the word embeddings for the sentence to obtain X^a_S, the main part of the embedding. In this construction, R ∈ N is the spatial resolution of the sentence embedding and β ∈ R controls the spatial decay in word importance in the combined embedding. The amount of blending between words is inversely proportional to β, so that when β is large, very little blending occurs. When β = 0 and R = 1, the embedding is equivalent to adding together the fastText embeddings. 
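To make the construction above concrete, here is a minimal, hypothetical sketch of the position-sensitive sentence embedding. The exact blending formula is not reproduced in this extracted text, so the exponential kernel over normalized word and column positions below is only an assumed stand-in; it does, however, reproduce the stated limiting case that β = 0 and R = 1 collapse to summing the word vectors.

```python
# Hypothetical sketch of the position-sensitive sentence embedding X^a_S.
# The blending kernel is an assumption, not the paper's exact formula; with
# beta = 0 and R = 1 it reduces to summing the word vectors, matching the
# limiting case stated in the text.
import numpy as np

def sentence_embedding(word_vecs, R=6, beta=4.0):
    """word_vecs: array of shape (|S|, d_f); returns a vector of length d_f * R."""
    n, d_f = word_vecs.shape
    word_pos = (np.arange(n) + 0.5) / n       # normalized word positions in (0, 1)
    col_pos = (np.arange(R) + 0.5) / R        # normalized column positions in (0, 1)
    # blending weights: each of the R columns is a position-weighted mix of all words
    w = np.exp(-beta * np.abs(col_pos[:, None] - word_pos[None, :]))  # shape (R, |S|)
    X = w @ word_vecs                          # shape (R, d_f)
    return X.reshape(-1)                       # flattened, later concatenated with X^b_D

vecs = np.random.RandomState(1).randn(9, 75)   # e.g. 9 words, d_f = 75
print(sentence_embedding(vecs, R=6, beta=4.0).shape)   # (450,)
```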
In Section 4.4, we will see that controlling the spatial resolution and amount of word blending allows us to produce better summaries than if the word embeddings were simply averaged. The second part of the sentence embedding, X^b_D, shared by all sentences in D, is computed with ftxt(D_nostops), where D_nostops is the document with stopwords removed. X^a_S is then flattened into a vector and concatenated with X^b_D to obtain X_S. The purpose of X^b_D is to incorporate a global context, shown in [18] to improve performance at sentence ordering. Neural Position Model The purpose of the position model is to predict the position in a document of a given sentence. We will implement the position model with a fully connected neural network with a softmax output layer. Instead of predicting a single continuous value for the position of a sentence as the fraction of the way through a document, we frame sentence position prediction as a classification problem. The use of classification was initially motivated by the poor performance of regression models; since the task of position prediction is quite difficult, the models would consistently make predictions very close to 0.5 (middle of the document), so not much useful information was attained. To convert the task to a classification problem, we aim to determine which quantile of the document a sentence resides in. Notationally, we will refer to the number of quantiles as Q. We can interpret the class probabilities behind a prediction as a distribution over positions for a sentence, providing us with a predicted position distribution (PPD). When Q = 2, for example, we are predicting whether a sentence is in the first or last half of a document. When Q = 4, we are predicting which quarter of the document it is in. In Figure 1, we can see an example of the PPDs for sentences in an article when our model uses Q = 11. As we will see in Section 4.4.2, having fine-resolution distributions (i.e. Q > 2) is beneficial when locating sentences for a summary. This neural model will be trained to map the sentence embeddings to one-hot encodings of sentence quantile position. More implementation and evaluation details will be discussed in Section 4.3. Composing the novel sentence embedding method with the position model provides us with the PosDist function, which maps a sentence S to its predicted position distribution, a vector of dimension Q. While a summarization model which learns to predict the inclusion of a sentence in a summary given our novel word embedding without generating PPDs is possible, being able to generate and access these PPDs confers multiple benefits, including: Cross-task flexibility Being able to generate the PPDs for a document allows for upstream models to use them for a variety of tasks, akin to pretrained word embeddings. Coherence evaluation As we mention in Section 4.5, PPDs may be used for judging the coherence of a document. For the same reason, they may prove useful for the purpose of segmenting an otherwise unstructured document by topic and into coherent sections, similar to [32] or [8]. PPDs may also be useful for automatic grading of essays as a high-level indicator of coherence and structure. Insights In Section 4.5 we will use these PPDs to analyze human written summaries and show how they are unique from sentences typically found in documents. 
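A minimal Keras sketch of such a position classifier is given below. The hidden-layer sizes and Leaky ReLU activation follow the training details reported later (Section 4.3), while the input dimension, optimizer settings, and toy data are assumptions made only so the snippet runs; this is not the authors' released code.

```python
# Minimal sketch of the neural position model: a fully connected classifier
# mapping sentence embeddings to a distribution over Q position quantiles (PPD).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

Q = 11                      # number of position quantiles
input_dim = 6 * 75 + 75     # flattened X^a_S plus document context X^b_D (assumed sizes)

model = keras.Sequential([
    layers.Dense(200, input_shape=(input_dim,)),
    layers.LeakyReLU(),
    layers.Dense(50),
    layers.LeakyReLU(),
    layers.Dense(Q, activation="softmax"),   # predicted position distribution
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# toy training data: sentence embeddings and one-hot quantile labels
X = np.random.randn(1024, input_dim).astype("float32")
y = keras.utils.to_categorical(np.random.randint(0, Q, size=1024), num_classes=Q)
model.fit(X, y, batch_size=256, epochs=1, verbose=0)

ppd = model.predict(X[:3])   # each row is a PPD over the Q quantiles
print(ppd.shape)             # (3, 11)
```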
Intro Scores To try to solve the difficult problem of determining how characteristic a sentence is of an introductory sentence, we compare the predicted position distributions of the sentence and its successors to a sequence of target distributions. The purpose of the sequence of target distributions is to specify what we consider an archetypal introduction to look like. The sequence of sentences whose PPDs maximizes the cosine similarity with the target distributions should be the most introduction-like. Since we may care more about the similarity of sentences at the start of a sequence than ones at the end, we allow for placing more weight on matching the PPDs near the start of the target distribution. Equation 5 shows how these ideas are combined to produce the final sentence intro scores. Here, S_i is the i-th sentence in D, T_j is the j-th target distribution, and γ controls the weight decay of sentence similarities. To measure semantic similarity between a PPD and a target distribution, Sim (cosine similarity) is used. Summary extraction Instead of building up a summary one sentence at a time, we allow our model to add sentence spans of length sl. If sl = 2, for example, then sentences are added to the summary in contiguous pairs. Having a larger sl allows for more coherent summaries, and as we will see in Section 4.4.2, can also improve performance. In addition to considering intro scores when constructing a summary, we also consider the semantic overlap of sentence spans chosen by using Maximal Marginal Relevance (MMR) as described in [3]. MMR essentially provides a way of weighting the positive and negative aspects of adding a piece of text to the summary, given what has already been added. A linear combination of the two values is used, with the weight distribution controlled with λ ∈ [0, 1]. In our case, when λ = 1, the sentence spans are scored only according to their intro score. When λ = 0, spans are scored entirely on their semantic novelty (or added coverage) with respect to previously chosen spans. Carbonell et al. find that an intermediate value for λ is best for the task of generating query-directed summaries. Similar to their work, we use cosine similarity for semantic similarity, but we use averaged fastText embeddings to embed the text and compare sentence spans instead of single sentences. Algorithm 1 shows how we apply MMR for extracting a summary of n sentences for document D (a rough code sketch of the scoring and span selection is given below). EXPERIMENTS In this section, first we will discuss the dataset used and its preparation for the multi-section summarization task. Second, we will look at how our models are trained and tuned. Third, we will compare our LeadR model to several baselines. Finally, interesting results attainable using the neural sentence position model will be presented. All experiments were performed on a machine with an Intel Core i7-6700HQ CPU and 16 GB RAM. Neural networks were implemented with the Keras Python library [6]. Data preparation for multi-section summarization In this paper, we aim to show how neural models can be used to extend the Lead heuristic beyond its typical area of expertise on single-sectioned news articles. To construct a dataset for this task, we start by preparing the CNN/Daily Mail news article dataset constructed by [12], then simulate the more challenging situation of multi-section summarization by concatenating multiple news articles. 
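The scoring and extraction steps described above (Sections 3.3 and 3.4) can be sketched compactly. The snippet below is a rough, hypothetical reading of the text rather than the authors' Algorithm 1: the target distributions follow the later training-details description (linear interpolation between a one-hot and a uniform vector), the γ-decayed sum of cosine similarities stands in for Equation 5, which is not reproduced in this version of the text, and the MMR step follows the usual Carbonell-and-Goldstein formulation.

```python
# Rough sketch (not the paper's Algorithm 1) of intro scoring and MMR span selection.
import numpy as np

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def target_distributions(Q, seq_len=5):
    # Linear interpolation between a one-hot "first sentence" vector and the
    # uniform distribution 1/Q, as described in the training details (Section 4.3).
    start = np.zeros(Q); start[0] = 1.0
    end = np.full(Q, 1.0 / Q)
    return np.array([(1 - a) * start + a * end for a in np.linspace(0.0, 1.0, seq_len)])

def intro_scores(ppds, targets, gamma=0.5):
    # ppds: (num_sentences, Q) predicted position distributions.
    # Assumed stand-in for Equation 5: gamma-decayed sum of cosine similarities.
    n = len(ppds)
    scores = np.zeros(n)
    for i in range(n):
        for j in range(len(targets)):
            if i + j < n:
                scores[i] += (gamma ** j) * cos_sim(ppds[i + j], targets[j])
    return scores

def mmr_extract(scores, span_embs, n_spans, lam=0.7):
    # Greedy MMR: balance intro score against redundancy with already chosen spans.
    chosen = []
    while len(chosen) < n_spans:
        best, best_val = None, -np.inf
        for i in range(len(scores)):
            if i in chosen:
                continue
            redundancy = max((cos_sim(span_embs[i], span_embs[j]) for j in chosen), default=0.0)
            val = lam * scores[i] - (1 - lam) * redundancy
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen
```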
For features predictive of sentence location to be learned which may transfer to other datasets, many of the articles in the CNN/Daily Mail dataset require preprocessing to remove unwanted information. Publication meta-information appearing at the beginning of articles could easily be used as an indicator of sentence position without having to consider the remainder of the sentence. Table 1 contains examples of such cases, for instance: "LOS ANGELES, California (Reuters) - Massive dogs belonging to "Mission: Impossible" star Ving Rhames attacked and killed a live-in caretaker at the actor's Los Angeles home ...". Each news article also contains "highlights" which compose the human written summary of the article. These are extracted and will be used during evaluation of summarizers. Our model does not require entity anonymization to be performed on the news articles. After preprocessing, we end up with over 300,000 articles, with an average of almost 30 sentences/article, and the most common length being 17 sentences. 114 articles which contain zero sentences are not used at any point in the experiments. Human written summaries are an average of 3.8 sentences long and most commonly 4. To extend this set of articles for multi-section summarization, we simply concatenate random articles together. We refer to a document formed by concatenating M single articles as an MX-concatenated document. The task of summarizing 1X concatenated documents refers to the standard single-article summarization. For automated evaluation of summaries, we use ROUGE scores [17], which require ground-truth ("gold") summaries. Each article in the CNN/Daily Mail dataset is accompanied with a short summary, so to extend to the multi-section task we simply concatenate the M summaries together. This concatenated summary is considered the gold summary for the MX concatenated document. Next, we will discuss in more detail how summarization models will be evaluated. Evaluation of multi-section summarizers To evaluate the quality of summaries, we use three ROUGE evaluation metrics [17]: ROUGE-1, which measures the amount of unigram overlap between the gold summaries for an article and the proposed (automatically produced) summary, ROUGE-2, which measures the amount of bigram overlap, and ROUGE-L, based on the longest common subsequence in the gold and automated summary. While ROUGE metrics originally focused only on recall, precision is often taken into account in the form of F1 scores so that summaries which are both informative and sufficiently concise are rewarded. For evaluating newswire summaries, these metrics have been shown to correlate well with human assessments [16]. In this paper, we will use full-length F1 ROUGE variants. For training, validation, and testing, 30,000 articles are randomly selected from the CNN/Daily Mail dataset, with 80% used for training, 10% for validation, and 10% for testing. The volume of data used is largely limited by the time required to run full-length ROUGE for evaluating the extracted summaries. The evaluation time is especially noticeable in our experiments due to the use of concatenated articles. To train the neural sentence position model, the original single-section articles are used. Since we have one sample per sentence and 24,000 training articles, we will train on approximately 700,000 samples. 
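The MX-concatenation construction described above is straightforward to reproduce; the sketch below assumes articles and their gold summaries are already available as parallel lists of sentence strings (a hypothetical layout, not the dataset's actual on-disk format).

```python
# Sketch of building MX-concatenated documents and their gold summaries by
# concatenating randomly chosen single articles (data layout is hypothetical).
import random

def make_mx_dataset(articles, summaries, M, n_docs, seed=0):
    """articles, summaries: parallel lists of lists of sentences."""
    rng = random.Random(seed)
    docs, golds = [], []
    for _ in range(n_docs):
        idx = rng.sample(range(len(articles)), M)
        docs.append([s for i in idx for s in articles[i]])     # MX-concatenated document
        golds.append([s for i in idx for s in summaries[i]])   # concatenated gold summary
    return docs, golds
```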
For validation of the summarizer, the original 3,000 single-section validation articles are used, as well as 9,000 multi-section articles formed by concatenating 2, 3, and 4 random articles from the validation set. To evaluate models on the validation set, instead of aiming to maximize a single ROUGE metric, we combine the ROUGE-1, -2, and -L scores by aggregating ROUGE-type(model, MX concatenated docs) over the ROUGE types (Equation 6). For testing, a process similar to validation set construction is used, except 5X concatenated articles are also included, for a total of 15,000 articles, and the individual ROUGE scores will be reported. In the single-section (1X) summarization task, extractively summarizing articles to three sentences is most commonly performed. For the MX concatenated task, we will focus on summarizing documents to 3M sentences. Training details To determine our hyperparameters in a tractable way, we perform grid searches on subsets of the hyperparameters. The subsets are chosen to minimize search time and take into account those hyperparameters which might interact non-linearly to influence the final performance. To further reduce training time and decrease the potential for over-fitting, many values are fixed to reasonable settings. A summary of the fixed values is as follows: Neural network structure We use a fully connected feedforward structure with two hidden layers, Leaky ReLU activation [36] (with a = 10), and no regularization. Neural network training regime The number of epochs is set to 10. Batch size is set to 256. We use the Adam optimizer [15] with most default parameters as supplied by Keras [6]. The learning rate is tuned to avoid over- or underfitting on the training set. Training is performed to minimize the cross-entropy of the predicted and true labels. Target distributions We fix the sequence length of target distributions to 5. To calculate the target distributions for a given Q value, we linearly interpolate between a Q-length vector with a 1 in the first dimension and a vector with all values set to 1/Q. We expect this to be a good approximation of what the introductory PPDs of an archetypal article should be like. The six sets of tuned hyperparameters are shown in Table 2. Initial values are not applicable to the very first two hyperparameters optimized. The initial value is meant to be a best guess and is used when optimizing other hyperparameters before it itself undergoes optimization. After the hyperparameters in a set are optimized, they are fixed and the next set is optimized. An exploration into how a few hyperparameters affect the validation performance is given in Section 4.4.2. Comparison to baseline models. We look at the test performance of four versions of LeadR. The first is the full LeadR algorithm. Second is LeadR λ=1, which does not value maximizing coverage and only chooses sentence spans based on intro scores. Third is LeadR λ=0, which does not use the intro scores and only maximizes coverage. Fourth is LeadR avg, which uses word embedding averages instead of our novel position-sensitive sentence embedding. In this paper, we evaluate our models on the novel task of multi-section summarization, thus no previously reported results are directly comparable to this work. However, existing summarization algorithms can be applied to this task. We compare our models to the following baselines: Lead Sentences are extracted in their original order. On the 1X CNN/Daily Mail summarization task, this is a competitive heuristic, especially for 3 extracted sentences. 
Lead_multi This method is a naïve extension of Lead. If k sentences are required for the summary, it will split the document into ⌈k/3⌉ sections and take the first three sentences from each section until the sentence limit is reached. Luhn This method makes use of both word frequency statistics as well as position of words within sentences to rank them by importance [19]. SumBasic This is a method which exclusively makes use of word frequency information [28]. LSA This method uses singular value decomposition applied to the term-by-sentences matrix of a document to identify important concepts [34]. The sentences chosen for summarization are those which best represent these concepts. TextRank This graph based method ranks sentences using an algorithm similar to PageRank [2]. LexRank This graph based method aims to capture sentence importance with eigenvector centrality in a weighted edge graph induced by the sentences and their similarity [10]. KLSum This method greedily adds sentences to a summary to minimize the KL divergence between the document and summary unigram distributions [11]. Aside from the first two baselines, these models are all implemented in the sumy Python package (available online at https://pypi.python.org/pypi/sumy). Our LeadR model is clearly shown in Table 3 to consistently outperform all baselines tested against on the multi-section summarization tasks. On the 2X-5X concatenation tasks, our ROUGE-1, -2, and -L scores are an average of 0.5, 1, and 0.6 points above the next best scores respectively. One immediately evident detail is that the Lead heuristic still outperforms our model on the 1X task. The simple extension of Lead is able to improve upon the performance of Lead by several points on the 2X-5X tasks and even performs better than or similarly to LSA, TextRank, and KLSum. Of the LeadR variations, LeadR λ=0 and LeadR avg perform the poorest across all tasks and metrics, suggesting that the use of intro scores and the novel sentence embedding are the main contributors to its performance. Effects of parameter tuning. The effects of several of our model hyperparameters are displayed in Figure 4. In each of the plots, the performance shown is that on the validation set calculated with Equation 6. The performance curves for the hyperparameters are gathered during their tuning. This means, for example, that while R, β, and Q were being optimized, neural networks with hidden layers of sizes [200,50] were being used, while layers of size [100,25] were used when optimizing γ, sl, and λ. The following observations on the effects of various hyperparameters can be made: R and β When one of R (sentence embedding resolution) or β (inverse word blending) is low, average ROUGE score is low. After both reach about 4, performance seems to level off. Since R and β are optimized together, the curve values for R are averages across all values of β, and the curve values for β are averages across all values of R. Q Having a value for Q (number of position quantiles in PPDs) at or above 5 seems to be critical to performance. γ There seems to be a clear optimal value for γ (controls importance decay of sentence PPD similarity with target distributions). The best performance is achieved when the similarity of a sentence PPD to the corresponding target distribution is considered half as important as for the previous sentence. 
λ The optimal value for λ (balance of intro score and coverage importance) appears to be around 0.7, and going below that value is more detrimental than increasing it. This indicates that intro score for a sentence is more important than maximizing coverage, but a benefit is achieved by taking it into account. sl The span length used when extracting summaries has a sweet spot at 3. Going above or below it is detrimental. Interesting Observations Ideally, one would like an extractive summarizer to choose sentences most similar to those in a human written summary. However, applying the neural position model to gold summaries reveals why that may be difficult. In Figure 5, we see averaged PPDs for gold summaries of length 4, for the first 5 sentences of news articles, and for the last 5 sentences of news articles. Even though the Lead heuristic performs quite well, the predicted sentence positions at the start of articles are quite different from those in gold summaries. Interestingly, the very first sentence in a gold summary is strongly predicted to be either at the start or the end, with low probabilities in the middle. While the PPDs for gold summaries might seem more characteristic of conclusory sentences, applying a Last-3 heuristic on the 1X summarization task results in very poor performance (ROUGE-1, -2, and -L scores of 13.2, 2.7, and 12.3 respectively). Upon looking at the PPDs for many news articles (the PPDs for five articles are shown in Figure 3), we observe that sentences with high start and end position probabilities are quite rare. Another interesting observation made with PPDs is how easily the structure of an article may be observed. In the PPD sequence in the bottom left of Figure 3, there seem to be multiple sentence subsequences whose predicted positions shift relatively smoothly from the first to last quantile, akin to small articles embedded within the larger one. When reading these sentence subsequences, the characterization often seems appropriate. This observation suggests that the neural position model could be applied to automatically assess the flow and coherence of documents. CONCLUSIONS To extend the performance of the Lead-3 heuristic to extract summaries for multi-section documents, we propose the LeadR algorithm which locates sentences with introduction-like properties. This is performed by first using a neural model to predict position distributions of sentences, then choosing those sentences with predicted positions similar to that of introductory sentences. The strength of predicted sentence position as an indicator for summary quality is demonstrated on an augmented version of a common summarization dataset. The importance of the position model is demonstrated by adding over 4, 2, and 4 ROUGE-1, -2, and -L points respectively when compared to the same summarization pipeline which only maximizes coverage of the summary. We also employ a novel sentence embedding which encodes positional information of words while maintaining a constant dimensionality. We demonstrate that this embedding contributes a full ROUGE-2 point and over 2 ROUGE-1 and -L points when summarizing multi-section articles. This paper leaves many interesting areas for future work. As suggested by [14], optimizing the semantic similarity measures used could increase performance. Other heuristics to build upon could also be used. For example, instead of predicting the location of sentences, the presence of other signals such as key phrases could be predicted. 
Similar to training a model to predict sentence location, obvious evidence of the phrases would need to be removed before training. We also briefly mentioned applications of predicted position distributions of sentences including segmenting documents into subsections and estimating coherence for automatic evaluation of writing quality.
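For reference, the Lead and Lead_multi baselines compared against in Section 4.4 are simple enough to state in a few lines; the sketch below follows the descriptions given there and assumes documents are plain Python lists of sentence strings.

```python
# Sketches of the Lead and Lead_multi baselines as described in Section 4.4.
import math

def lead(sentences, k):
    # Extract the first k sentences in their original order.
    return sentences[:k]

def lead_multi(sentences, k):
    # Split the document into ceil(k/3) sections and take the first three
    # sentences of each section until the sentence limit is reached.
    n_sections = math.ceil(k / 3)
    section_len = math.ceil(len(sentences) / n_sections)
    summary = []
    for s in range(n_sections):
        summary.extend(sentences[s * section_len : s * section_len + 3])
        if len(summary) >= k:
            break
    return summary[:k]
```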
2018-04-22T01:02:23.000Z
2018-04-22T00:00:00.000
{ "year": 2018, "sha1": "d442889917cf546fb9f54a28dda6db8754af9296", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4e1760acc57b67b02dacc011e7c70439b881b13a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
8807407
pes2o/s2orc
v3-fos-license
Improvement of glucaric acid production in E. coli via dynamic control of metabolic fluxes D-glucaric acid can be used as a building block for biopolymers as well as in the formulation of detergents and corrosion inhibitors. A biosynthetic route for production in Escherichia coli has been developed (Moon et al., 2009), but previous work with the glucaric acid pathway has indicated that competition with endogenous metabolism may limit carbon flux into the pathway. Our group has recently developed an E. coli strain where phosphofructokinase (Pfk) activity can be dynamically controlled and demonstrated its use for improving yields and titers of the glucaric acid precursor myo-inositol on glucose minimal medium. In this work, we have explored the further applicability of this strain for glucaric acid production in a supplemented medium more relevant for scale-up studies, both under batch conditions and with glucose feeding via in situ enzymatic starch hydrolysis. It was found that glucaric acid titers could be improved by up to 42% with appropriately timed knockdown of Pfk activity during glucose feeding. The glucose feeding protocol could also be used for reduction of acetate production in the wild type and modified E. coli strains. Introduction D-glucaric acid was identified by the United States Department of Energy as a top value-added chemical for production from biomass (Werpy and Petersen, 2004). It has a number of potential applications including use in biopolymers (Kiely and Chen, 1994) and as a detergent builder and corrosion inhibitor (Smith et al., 2012). Glucaric acid can be produced through nitric acid oxidation of glucose (Mehltretter and Rist, 1953), but a biological route to glucaric acid production could potentially provide several advantages, including mild processing conditions and high selectivity for the product of interest. Production of D-glucaric acid in Escherichia coli was previously demonstrated by our group via expression of heterologous enzymes from three different organisms. Titers of 1.13 g/L glucaric acid were achieved in strain BL21(DE3) in LB medium supplemented with 10 g/L glucose. Following demonstration of the initial pathway, some increases in glucaric acid titers were achieved through improved strategies for expression of the myo-inositol oxygenase (MIOX) enzyme, one of the limiting factors in glucaric acid production in LB supplemented with glucose or myo-inositol (Moon et al., 2010; Shiue and Prather, 2014). However, competition for glucose-6-phosphate (G6P) between native E. coli enzymes (phosphoglucose isomerase and glucose-6-phosphate dehydrogenase) and the first enzyme in the glucaric acid pathway, myo-inositol-1-phosphate synthase (INO1), is also a concern. High level expression of INO1 is required for detectable myo-inositol and glucaric acid production, indicating it competes poorly with endogenous metabolism for substrate. Additionally, the second pathway enzyme, MIOX, appears to be stabilized by its substrate, myo-inositol, so more rapid accumulation of myo-inositol may help reduce limitations in MIOX activity as well (Moon et al., 2010). With this in mind, our group has explored strategies for development of strains capable of accumulating G6P and directing greater fluxes of this metabolite into production of glucaric acid and myo-inositol. 
By eliminating the pathways for glucose catabolism in the production strain, and feeding alternative carbon sources, higher yields of glucaric acid from glucose could be achieved (Shiue et al., 2015). However, the rate of glucose uptake in this K-12 host strain was quite slow, especially in minimal medium, and its use was limited to mixed sugar substrates. While gene knockouts provide a static solution for redirecting fluxes in the cell (Kogure et al., 2007; Shiue et al., 2015), under many conditions, it may be advantageous to develop cells where dynamic changes in enzyme levels can be used to switch between substrate consumption for biomass formation and substrate conversion into product. Dynamic control of key enzymes can be used to facilitate more rapid initial accumulation of biomass, overcoming potential reductions in growth rate, and can eliminate the need for supplementation of the medium or addition of secondary carbon sources required with some gene knockouts (Anesiadis et al., 2008; Gadkar et al., 2005). At the desired time, activity of the target enzyme(s) can be reduced through decreasing transcription (Scalcinati et al., 2012; Solomon et al., 2012; Soma et al., 2014) or translation (Williams et al., 2015) of the enzyme, or initiating rapid degradation (Torella et al., 2013). Coupling such controls with sensors capable of reporting on intracellular metabolite levels allows for the development of more complex systems capable of continuously adjusting enzyme levels to balance metabolite pools or maintain cellular state (Dahl et al., 2013; Farmer and Liao, 2000; Xu et al., 2014; Zhang et al., 2012). It was recently shown that by inducing degradation of phosphofructokinase I (Pfk-I) in the cell, the pools of G6P could be increased during growth on glucose minimal medium, along with the yields and titers of the glucaric acid precursor myo-inositol. In this work, we explore the expanded utility of this system for production of glucaric acid from glucose in a semi-defined medium under batch conditions and a fed-batch condition simulated by glucose release from in situ enzymatic starch hydrolysis. To explore the interplay of production conditions with metabolic intervention through Pfk-I degradation, initial screening runs were carried out in 48-well plates in a BioLector benchtop bioreactor. Follow-up experiments were then carried out at altered conditions or altered scale (shake flask) to understand the robustness of the results. Improvements in glucaric acid titer of up to 42% were achieved through appropriately timed induction of Pfk activity knockdown during the fermentation. Strains and plasmids E. coli strains and plasmids used in this study are listed in Table 1. Strains IB1863 and IB1379 were constructed by our group previously. To eliminate catabolism of glucaric acid and the pathway intermediate glucuronic acid in strain IB1863, knockouts of gudD and uxaC were carried out via sequential P1 transduction from Keio collection donor strains (Baba et al., 2006). The kanamycin resistance cassette was removed after each transduction via expression of FLP recombinase from pCP20 (Datsenko and Wanner, 2000). The λDE3 lysogen was integrated into this strain using a λDE3 Lysogenization Kit (Novagen, Darmstadt, Germany), generating strain IB1486. To generate the ΔpfkA control strain IB2255, serial P1 transductions were also carried out in IB1379 to knock out pfkA, gudD, and uxaC, and the λDE3 lysogen was integrated as described above. 
An additional control strain without any degradation tag on pfkA, IB2472, was generated using lambda red recombination combined with Cas9-based counterselection (Reisch and Prather, in press). Using this method, the tag sequence was removed from the pfkA locus without any addition of antibiotic resistance cassettes or FRT scars. Construction of plasmids for production of glucaric acid, pRSFD-IN-MI and pTrc-udh, was described previously (Yoon et al., 2009). Culture medium and conditions For plasmid preparation and genetic manipulations, strains were cultured in Luria-Bertani (LB) medium at either 30 °C or 37 °C. Temperature sensitive plasmids were cured at 42 °C. Glucaric acid production experiments were carried out in T12 medium containing 7.5 g/L yeast extract, 7.5 g/L soy peptone, 7 g/L Na2HPO4, 3 g/L KH2PO4, 0.5 g/L NaCl, 3 g/L (NH4)2SO4, 4 mM MgSO4, 100 μg/ml carbenicillin, 50 μg/ml kanamycin, and the indicated amount of glucose and/or soluble starch (Sigma-Aldrich S9765) plus amyloglucosidase (Sigma-Aldrich A7095). For experiments in the BioLector (m2p-labs, Baesweiler, Germany), starter cultures were incubated in culture tubes at 30 °C and 250 rpm overnight in T12 supplemented with 10 g/L glucose and diluted 1:100 into working cultures. Working cultures were incubated at 30 °C, 1200 rpm (3 mm orbit), and 80% relative humidity in the BioLector. A working volume of 1 ml was used in the BioLector 48-well flower plate, sealed with gas-permeable sealing film with an evaporation reduction layer (m2p-labs). To induce expression of enzymes required for glucaric acid production, 100 μM isopropyl β-D-1-thiogalactopyranoside (IPTG) was added to production cultures at inoculation. For induction of SspB in strain IB1486 to knock down Pfk activity, 100 ng/ml anhydrotetracycline (aTc) was added at the times indicated in the Results section. At the indicated time points, the contents of the sample well were removed for measurement of glucaric acid production and residual glucose levels. All conditions were tested in triplicate wells. For example, to generate the data set illustrated in Fig. 1, 18 wells of IB1486 were inoculated (3 with aTc added at inoculation, 3 with aTc added at 5 h, etc.). At the times where aTc addition was indicated, the BioLector was briefly stopped, the plate opened, and aTc was added to the 3 appropriate wells. After 48 h, all 18 wells were collected for analysis of glucaric acid production. For experiments in shake flasks, starter cultures were grown to mid-exponential phase (OD600 ≈ 5) in 250 ml baffled flasks containing 30 ml T12 + 10 g/L glucose and used to inoculate 30 ml working cultures to a starting OD600 = 0.05. Production cultures were incubated in 250 ml baffled shake flasks at 30 °C, 80% humidity, and 250 rpm. IPTG (100 μM) was added at inoculation and aTc (100 ng/ml) was added at the indicated time points. Flasks were sampled periodically for measurement of optical density, as well as for HPLC and biomass samples. Samples from all experiments were stored at −20 °C until analysis. Measurement of extracellular metabolites and starch Glucose, glucaric acid, acetate, and myo-inositol levels were quantified by high performance liquid chromatography (HPLC) on an Agilent 1100 or 1200 series instrument (Santa Clara, CA) with an Aminex HPX-87H column (300 mm by 7.8 mm; Bio-Rad Laboratories, Hercules, CA). Sulfuric acid (5 mM) at a flow rate of 0.6 mL/min was used as the mobile phase. 
Compounds were quantified from 10 μL sample injections using refractive index (glucose, myo-inositol, glucaric acid, acetate) and diode array detectors (glucaric acid, 210 nm). Column and refractive index detector temperatures were held at 65 °C and 35 °C respectively. To quantify the amount of starch hydrolysis in fed-batch samples, samples were split at collection. Half of the sample was centrifuged at 15,000 × g for 15 min and used for HPLC analysis as described above. The remaining portion of the sample was treated with 15 U/ml amyloglucosidase for 15 min at room temperature for full hydrolysis of remaining starch. After treatment, the sample was centrifuged for 5 min at 15,000 × g and the glucose concentration in the supernatant was measured using a YSI 2900 Biochemistry Analyzer (YSI Life Sciences, Yellow Springs, OH). The difference between the glucose content measured in the fully hydrolyzed sample and the glucose content measured via HPLC in the sample without an additional hydrolysis step was used to calculate the content of un-hydrolyzed starch. The maximum amount of glucose that could be liberated from starch in the medium was determined by full hydrolysis of the starting medium with amyloglucosidase. To calculate glucose utilized by the cell, the amount of free glucose and the amount of glucose generated from full hydrolysis of residual starch in a sample were subtracted from the maximum amount available in the medium. This value for consumed glucose was then used in the calculation of glucaric acid yield from glucose. Titers and yields are reported in the text as the average of triplicate measurements ± 1 STD, and error bars in figures also represent average values ± 1 STD. To test for statistically significant differences between conditions, an unpaired two-tailed Student's t test was applied assuming equal variance. The level of significance is indicated in figures by the following: *p < 0.05, **p < 0.005. At other points where significance is discussed, p values are indicated in the text. Phosphofructokinase activity measurements Phosphofructokinase activity assays were carried out as described previously. Results Glucaric acid production was screened in strain IB1486-GA. This strain was derived from a previously developed strain, IB1863, where Pfk activity can be dynamically controlled through addition of aTc. A modified SsrA tag was added to the coding sequence of pfkA in this strain, which results in slow degradation of the phosphofructokinase-I (Pfk-I) protein in the absence of the adapter protein SspB, but more rapid degradation of Pfk-I in the presence of SspB (McGinness et al., 2006). In IB1863, aTc addition induces SspB expression, resulting in rapid depletion of Pfk-I and buildup of intracellular G6P in glucose minimal medium. IB1486 contains the same modifications, along with additional knockouts of gudD and uxaC to prevent glucaric acid catabolism and the DE3 lysogen for expression of T7 RNA polymerase. In previous work with IB1863, it was shown that dynamic knockdown of Pfk activity could result in increased production of the glucaric acid precursor myo-inositol in glucose minimal medium, but that correct timing of aTc addition was required to achieve maximum yields and titers. Very early switching to "production mode" by aTc addition may result in insufficient time for protein expression and formation of biomass. However, very late switching results in more utilization of glucose for growth, and less remaining glucose to be redirected to product formation. 
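Before turning to the screening results in detail, note that the yield calculation laid out in the starch-measurement section above reduces to a few arithmetic steps; the sketch below uses made-up numbers purely for illustration and is not a reproduction of the study's data.

```python
# Illustrative glucose accounting for fed-batch yield calculations; all quantities
# in g/L and all numbers hypothetical, not measured values from the study.
def consumed_glucose(max_available, free_glucose, fully_hydrolyzed_glucose):
    # Un-hydrolyzed starch (as glucose equivalents) is the difference between the
    # fully hydrolyzed sample and the free glucose measured by HPLC.
    residual_starch = fully_hydrolyzed_glucose - free_glucose
    return max_available - free_glucose - residual_starch

def glucaric_acid_yield(glucaric_acid, glucose_consumed):
    return glucaric_acid / glucose_consumed   # g glucaric acid per g glucose consumed

consumed = consumed_glucose(max_available=15.0, free_glucose=1.2, fully_hydrolyzed_glucose=3.0)
print(f"consumed glucose: {consumed:.1f} g/L, yield: {glucaric_acid_yield(1.56, consumed):.1%}")
```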
In moving to glucaric acid production, screening for optimal aTc addition time was again required, due to changes in cellular growth rate from the burden of expression of the complete glucaric pathway and the change in medium composition compared to the glucose minimal medium previously tested. The modified MOPS glucose minimal medium previously used for myo-inositol production (Brockman and Prather, 2015) was initially explored for glucaric acid production, but lag times of approximately 48 hours were observed, likely due to the burden associated with expression of all three pathway proteins. In addition to glucose, the T12 medium used in this work for testing of glucaric acid production contains yeast extract and soy peptone, which provide supplemental carbon sources. While glucose would primarily be used as a feedstock for glucaric acid production, some additional carbon supplementation was desired in the medium to reduce batch time and simulate a potential semi-defined, scale-up medium. To facilitate rapid screening of a variety of aTc addition times in triplicate, cultures were grown in 48-well flower plates in a BioLector microbioreactor system. Glucaric acid production was screened in T12 medium in both batch conditions (15 g/L glucose at inoculation) and simulated fed-batch conditions, where 3-5 g/L glucose was added at inoculation, and additional glucose was released slowly from 10-12 g/L starch by addition of amyloglucosidase. Screening batch conditions for timing of Pfk knockdown For screening of IB1486-GA under batch conditions, working cultures consisted of T12 medium with 15 g/L glucose and with 100 μM IPTG added at inoculation for induction of the glucaric acid pathway enzymes. Additions of aTc were made at times varying from 0-30 h after inoculation. Fig. 1 illustrates the yields and titers of glucaric acid observed after 48 hours. Glucose was fully consumed in all cultures at 48 h, except the culture with aTc addition at 12 h, which still contained 6.6 ± 0.1 g/L glucose. The highest titers of glucaric acid, 1.35 g/L (9.7% yield from glucose), were achieved with aTc addition at 24 h, representing an 18% improvement in both yields and titers over the case of no Pfk switching. As expected, switching after 24 h resulted in somewhat lower titers, as more glucose had already been consumed for biomass and could not be redirected to glucaric acid production. Earlier switching resulted in lower titers either due to incomplete consumption of glucose, as in the case of 12 h aTc addition, or due to an "escape" phenotype. The escape phenotype correlates with rapid growth to higher cell densities (Fig. 2A) and an increase in Pfk activity (Fig. 2B). Studies of the system based on IB1863 have indicated that the increase in Pfk activity is likely linked to disruption of SspB expression, which is required for rapid Pfk-I degradation, through mutation or mobile element insertion (Supplementary methods and Supplementary Fig. 1). Very early addition of aTc results in high stress on the cell from a combination of limited glucose uptake and high protein expression for the glucaric acid pathway enzymes, which may result in more rapid selection for the escape phenotype. Screening fed-batch conditions for timing of Pfk knockdown Fed-batch conditions were initially screened in T12 medium with 3 g/L free glucose and 12 g/L starch, with 100 μM IPTG added at inoculation for induction of the glucaric acid pathway enzymes. Feeding was started at 12 hours by addition of 0.006 U/ml amyloglucosidase. 
As the starch hydrolysis rate declines with time, secondary additions of 0.006 U/ml and 0.012 U/ml amyloglucosidase were carried out at 36 and 48 hours, giving the glucose release profile shown in Fig. 3. At the conclusion of the experiment, in addition to the initial 3 g/L glucose, another 9.7 70.7 g/L free glucose had been released in the cultures on average. (For calculation of yield, unhydrolyzed starch was measured in individual wells via the method outlined in Section 2.3). In this system, maximum titers of 1.56 g/L could be achieved with aTc addition at either 24 or 32 h (Fig. 4A), a 42% improvement over no aTc addition. The yields of glucaric acid were also improved by up to 50% with aTc addition at 24 h, with a maximum yield of 12.4% on glucose (based on available glucose added and hydrolyzed from starch). This improvement was larger than the batch condition, likely due to differences in the amount of glucose still available for consumption after addition of aTc. Early aTc addition at 12 h did not result in improved titers, with the shape of the growth curve indicative of escape (Fig. 4B). Measurement of Pfk activity also showed activity recovery to levels at or above the case of no aTc addition for this condition. Shake flask studies under batch conditions Consistent improvements in glucaric acid titers could be observed by timed knockdown of Pfk activity under the conditions tested in the BioLector. To determine whether these improvements would be robust to moderate changes in culture conditions, a set of batch experiments was carried out in shake flasks, along with feeding experiments testing alternative starch hydrolysis conditions in both the BioLector and in shake flasks. Initially, glucaric acid production was tested in shake flasks under batch conditions in both IB1486-GA and in LG1458-GA, a wild-type MG1655 background with only gudD and uxaC knockouts. In shake flasks, batch testing resulted in high baseline variability in titers that made it difficult to validate improvements in the 20-40% range that were previously achieved through Pfk knockdown in IB1486-GA. However, the shake flask testing did provide some interesting insights into the metabolism of IB1486-GA versus the wild-type strain LG1458-GA, in the absence of any aTc addition. Significantly, under batch conditions, acetate production varied greatly between the two strains, with LG1458-GA producing much higher levels of acetate. The fill volume of flasks in batch testing appeared to have an effect on acetate production from IB1486-GA, potentially due to changes in aeration. In IB1486-GA, several cultures showed residual acetate at 48 hours in shake flasks with a 45 ml fill volume, while in previous testing in the BioLector, no acetate was observed at 48 h. Testing at a lower fill volume (30 ml) resulted in reduced acetate accumulation at both 24 and 48 h. A summary of observed acetate production in IB1486-GA and LG1458-GA in shake flasks is presented in Fig. 5A. While improved aeration could alleviate moderate acetate accumulation in IB1486-GA, acetate accumulation in LG1458-GA remained high. Pfk activity measurements at 24 h in IB1486-GA and LG1458-GA (Fig. 5B) showed that Pfk activity in LG1458-GA was significantly higher than in IB1486-GA, likely leading to metabolic overflow and greater acetate production. The low baseline activity of IB1486-GA in T12 medium was unexpected, given that the parent strain, IB1863, always showed Pfk activity higher than the wild type in MOPS minimal medium with glucose . 
In addition to exhibiting higher acetate accumulation, glucaric acid production was very poor for LG1458-GA under batch conditions, with titers below the limit of detection in T12 + 15 g/L glucose (30 ml fill volume). Fig. 4. Growth and glucaric acid production in IB1486-GA in T12 + 3 g/L glucose + 12 g/L starch. (A) Titers and yields of glucaric acid at 72 hours for aTc addition times from 12-48 h. (B) Growth of IB1486-GA in T12 + 3 g/L glucose + 12 g/L starch. Starch addition resulted in higher opacity of the medium at the start of fermentation, and changes in OD600 after amylase addition represent both cell growth and changes in opacity as starch was broken down. Error bars represent triplicate mean ± SD. *, p < 0.05; **, p < 0.005; between yield or titer values as indicated. LG1458-GA showed incomplete glucose consumption as well, with 3.0 ± 0.1 g/L remaining at 48 hours. Although titers in IB1486-GA showed high variability, glucaric acid production was clear in all samples in T12 + 15 g/L glucose, with measured titers of 0.9 ± 0.3 g/L glucaric acid in shake flasks with 30 ml fill volume. Glucose was also completely consumed in the cultures. Although a static condition of low Pfk activity can clearly be tolerated in T12 medium and can provide a benefit in glucaric acid production under batch conditions, the previous screening work in Section 3.1 indicated that there is a limit to what could be gained in this manner, as complete knockdown of Pfk activity by aTc addition at inoculation resulted in poor growth and eventual "escape" of the culture. A ΔpfkA ΔpfkB double knockout strain, IB2255, was tested to assess the productivity that could be expected in T12 medium under batch conditions with no Pfk activity. After 48 hours in T12 with 15 g/L glucose, 0.18 ± 0.01 g/L glucaric acid was produced, significantly lower than the 0.9 ± 0.3 g/L produced in IB1486-GA in shake flasks in batch (p < 0.05, Student's t test). To maximize productivity in a 48 h batch, at least some period of culture growth needs to be provided with higher Pfk activity. Validation of fed-batch results under altered feeding conditions: The high variability in batch shake flask experiments made it difficult to validate whether Pfk knockdown could be used to increase titers at that scale, as well as in the BioLector. Larger titer improvements were seen with fed-batch conditions, so a set of validation experiments was run under those conditions, testing both an alternate starch hydrolysis strategy in the BioLector and application at shake flask scale. As the highest fed-batch titers were achieved previously with Pfk knockdown at 24-32 h, it would be expected that growth up to that point could be carried out under either batch or fed-batch conditions without significant changes to the outcomes. An alternative feeding strategy was tested with the initial free glucose increased to 5 g/L and feeding started at 24 h, from a reservoir of 10 g/L starch. This was initially tested in 48-well plates in the BioLector microbioreactor, with amyloglucosidase additions of 0.006 U/ml at 24 h and 48 h. The highest titers of 1.17 g/L were achieved with aTc addition at 36 hours, a 27% improvement over no aTc addition (Fig. 6), validating that Pfk knockdown could be used to improve titers under alternative feeding conditions. A maximum glucaric acid yield of 12.5% from glucose was also achieved with aTc addition at 36 h (a 36% improvement over no aTc addition).
Although titers were lower than in the original fed-batch screening, maximum yields were similar, as un-hydrolyzed starch contents were higher under this feeding strategy, leaving only 9.7 g/L free glucose (initial dose and feeding) available for conversion to glucaric acid, versus 12.7 g/L in the previous test. The starch fed-batch strategy was then tested in shake flasks, both for IB1486-GA and for LG1458-GA. The cultures contained 5 g/L free glucose and 10 g/L starch. Free glucose was measured at 18 h, and since glucose was already found to be exhausted, amyloglucosidase addition was started at that time. Secondary additions were carried out at 40 and 48 h. Despite the extra amyloglucosidase addition, starch hydrolysis was again poorer in this condition, and resulted in 10.1 ± 0.5 g/L total free glucose available in the cultures on average. Baseline yields for IB1486-GA were still comparable with the previous tests. Addition of aTc for Pfk knockdown at 24 hours resulted in a 28% improvement in titers and a 32% improvement in yield over no aTc addition (Fig. 7). Variability was higher in shake flasks than in previous testing in the BioLector, and statistical significance at 95% confidence was not achieved for this result, although it would be significant at a 90% confidence level (0.05 < p < 0.1, Student's t test). The results of fed-batch testing in shake flasks also illustrated some potential benefits of slower glucose feeding for the wild-type strain, LG1458-GA. In contrast to the batch condition, under the starch hydrolysis (glucose feeding) condition shown in Fig. 7, glucaric acid titers of 0.75 ± 0.2 g/L were achieved in LG1458-GA, more comparable to the 0.85 ± 0.02 g/L produced in IB1486-GA without aTc addition, and any acetate produced was fully consumed by the conclusion of the experiment for both strains. Yields were also more comparable between the two strains under glucose feeding, with 7.6% for LG1458-GA versus 8.3% for IB1486-GA. One of the clear advantages of the fed-batch condition is the elimination of acetate buildup and carbon loss to acetate formation. Additional changes in metabolism, such as upregulation of genes associated with sugar transport (Franchini and Egli, 2006; Raman et al., 2005), could also be favorably changing relative metabolite pools. While a fed-batch production strategy provides a benefit for LG1458-GA, IB1486-GA does not benefit as strongly from slow glucose feeding, likely because acetate production is already much lower in this strain. Pfk activity may also be low enough in this strain that other changes in metabolism related to glucose limitation are not significant. Growth (Fig. 8A) and Pfk activity measurements (Fig. 8B) during fed-batch testing in shake flasks showed trends similar to previous tests. As under batch conditions, IB1486-GA had significantly lower baseline Pfk activity than LG1458-GA before aTc addition, with aTc addition resulting in an additional decrease in activity of about 50%. While the mismatch in baseline activity did not cause a significant difference in titers between IB1486-GA and LG1458-GA under fed-batch conditions (p > 0.4, Student's t test), this did affect batch performance strongly, as discussed previously in Section 3.3. An additional control experiment was carried out under fed-batch conditions to analyze potential effects of SspB expression on glucaric acid production in the absence of a modified ssrA-tagged degradation target.
The IB1486 strain background includes the deletion of the native copy of sspB, and it is possible that restoring its expression through aTc addition could normalize native protein degradation and affect glucaric acid production independent of Pfk knockdown. To control for this possible broader effect of SspB expression on production, strain IB2472 was constructed, which is identical to IB1486, except that the modified ssrA tag was removed from the sequence of pfkA. In this strain, Pfk is not degraded upon SspB expression, so the effect of SspB expression alone can be observed. This strain was tested under the same culture conditions used to generate Fig. 7. There was no significant difference in the titers or yields of glucaric acid in IB2472-GA between flasks with SspB expressed and those without (p 40.9, Supplementary Table 1). This is consistent with reports in the literature that deletion of sspB alone does not result in a large buildup of native ssrA tagged proteins in the cell, in contrast to deletions of the unfoldase ClpX or protease ClpP, which result in clear buildup of tagged proteins (Lies and Maurizi, 2008). The glucaric acid titers in IB2472-GA were lower than those observed in IB1486, likely due to higher baseline Pfk activity in this strain. The removal of the DAS þ4 tag in the control strain also eliminates the low level background degradation of the protein that takes place even in the absence of SspB, resulting in higher Pfk activity. Discussion Results with IB1486-GA indicate that dynamic control of Pfk activity can be utilized to improve titers of glucaric acid, a product requiring several enzymatic conversions starting from G6P. The system was applicable for use with a semi-defined medium under both batch and simulated fed-batch conditions. While gains in titer were consistent across multiple conditions, the maximum gains were smaller than those observed previously for myo-inositol production in glucose minimal medium . Previous work with myo-inositol production in glucose minimal medium showed that switching at low cell density was optimal for the largest gains in titer. In T12 medium, these earlier switching times resulted in more rapid escape and little time for conversion of glucose to glucaric acid, perhaps due to the greater expression burden of the complete glucaric acid pathway and the higher availability of nutrients in T12 that "escapers" could use to rapidly grow and overtake the population. The later switching times result in higher usage of glucose for biomass formation, so the amount of glucose processed after switching to production mode is relatively low. While genetic stability can be achieved out to at least 72 h for aTc addition at 24 h, future work may be needed to address genetic stability in longer fermentations and expand the usefulness of switching between growth and production modes. Activity of the downstream enzymes in the glucaric acid pathway is another potential limitation, but in this particular medium, it does not appear that the activity of MIOX, a bottleneck under some other conditions, was limiting overall pathway yield, as minimal buildup of myo-inositol was observed in the cultures. However, balancing of expression between the three pathway enzymes could be an issue, since high level INO1 expression is required for any myo-inositol to be produced for further conversion . 
Reductions in INO1 expression upon expression of other enzymes in the glucaric acid pathway would be expected to limit maximum fluxes into the pathway, also limiting the glucose that could be effectively redirected in a given time period. Importantly, IB1486-GA showed titers that were comparable with a wild-type control strain under fed-batch conditions and superior under batch conditions, indicating the genetic modifications required for control of Pfk activity were not detrimental to baseline glucaric acid production and could potentially be transferred into high-performing strains as well. Although the baseline Pfk activity was low in T12 medium, it was still sufficient for rapid growth without excessive overflow metabolism. More consistent success with chromosomal modifications in the K strains led to the initial construction of the Pfk-I control system in that background, but additional improvements in glucaric acid titer, both with and without Pfk knockdown, can likely be achieved by transferring the genetic modifications of IB1486 to an E. coli B strain. In previous work, wild-type BL21 has outperformed MG1655 containing the same pathway genes Raman et al., 2014;Shiue et al., 2015). Conclusions Glucaric acid titers and yields could be improved under multiple culture conditions through timed knockdown of Pfk activity, with maximum improvements of up to 42% observed. In the absence of aTc, the switchable strain IB1486 shows titers comparable to or above those observed with wild-type MG1655, indicating the genetic modifications in IB1486 do not result in degradation of baseline performance and could potentially be applied to highperforming strains for increases in titer. Optimization of strain background and pathway enzyme expression levels may lead to both higher baseline titers and to greater gains from dynamic control of Pfk activity. Conflict of Interest Statement N.C.C. and K.L.J.P. are founders of Kalion, Inc.
2016-03-14T22:51:50.573Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "d031e4410abc98239ef96a666fe697b01e0ff90a", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.meteno.2015.09.002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fb7049f247b0ab07318c777320ff0c4eed943e82", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
229375650
pes2o/s2orc
v3-fos-license
Learning Multimodal Word Representations by Explicitly Embedding Syntactic and Phonetic Information Word embedding (i.e., word representation) transforms words into computable mathematical expressions (usually vectors) according to semantics. Compared with human semantic representation, these purely text-based models are severely deficient because they lack perceptual information attached to the physical world. This observation promotes the development of multimodal word representation models. Multimodal models have been proven to outperform text-based models on learning semantic word representations, and almost all previous multimodal models only focus on introducing perceptual information. However, it is obvious that syntactic information can effectively improve the performance of multimodal models on downstream tasks. Therefore, this article proposes an effective multimodal word representation model that uses two gate mechanisms to explicitly embed syntactic and phonetic information into multimodal representations and uses supervised learning to train the model. We select Chinese and English as examples and evaluate the model using several downstream tasks. The results show that our approach outperforms the existing models. We have made the source code of the model available to encourage reproducible research. I. INTRODUCTION Word embedding is often used in natural language processing (NLP) tasks such as machine translation [59], text classification [1], and dialogue systems [50]. There are various word embedding models, such as word2vec [56], GloVe [20], etc. Well-performing word embedding should reflect semantics accurately. At present, most popular methods for learning word embeddings are based on the distributional hypothesis, which utilizes cooccurrence statistics from massive text datasets. However, the process of humans understanding semantics is known in psycholinguistics as language comprehension [16]. Humans are first stimulated by perceptual information (text, sound, etc.), then extract implicit syntactic information from the brain, and finally use the information from their brain to reprocess the perceptual information and understand semantics. Therefore, compared to human semantic representation, these purely text-based models are The associate editor coordinating the review of this manuscript and approving it for publication was Arianna Dulizia . severely deficient because they lack perceptual information attached to the physical world. This observation has led to the development of multimodal word representation models that utilize both linguistic (e.g., text) and perceptual information (e.g., images and audio). Such models can learn better semantic word representations than text-based models, as evidenced by a range of evaluations [8], [33]. A typical example is that the meaning of concrete words, such as ''bird'' and ''thunder'' are mostly learned from perceptual experiences of seeing, touching and listening. In contrast, more abstract words, such as ''obscure'' and ''lovely'', are less associated with perceptual modalities and act as relatively fixed parts in the sentence structure. According to different types of words, information from different modalities contributes differently to the meaning of words, which has been found in cognitive psychology [21], [22] and computational experiments [10]. However, the existing multimodal models focus on the processing of perceptual information and ignore the introduction of syntactic information. 
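As a minimal illustration of the distributional hypothesis mentioned above, the sketch below builds a word–word co-occurrence matrix from a toy corpus and reduces it with SVD to obtain dense vectors. This is only a stand-in for co-occurrence-based models such as GloVe or word2vec, not the method proposed in the article, and the toy corpus and window size are assumptions.

```python
import numpy as np

corpus = [["the", "bird", "sings"], ["the", "bird", "flies"], ["thunder", "rolls"]]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a fixed context window
cooc = np.zeros((len(vocab), len(vocab)))
window = 2
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                cooc[idx[w], idx[sent[j]]] += 1

# Dense "embeddings" from the leading singular directions of the count matrix
U, S, _ = np.linalg.svd(cooc)
embeddings = U[:, :2] * S[:2]
print({w: np.round(embeddings[idx[w]], 2) for w in vocab})
```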
Syntactic information, such as part of speech, refers to the results obtained by dividing the combinatorial relations between words in a sentence according to specific standards. Recently, Vashishth et al. [57] and Wang et al. [52] completely addressed the word cooccurrence information collected implicitly based on the distributional hypothesis as syntactic information. This approach relies heavily on continuous context, which is the integrity of the training corpus. For target words with multiple parts of speech, if the contexts of a specific part of speech are abundant but their occurrences in training data are low or they do not appear, then the corresponding semantics will not exist in the embedding of the obtained word representation. For example, if the word representation for ''break'' does not have the semantic corresponding to the noun but only the semantic corresponding to the verb, then it is clearly not ideal. For low frequency words, it is more difficult to obtain syntactic information through the distributional hypothesis. These factors inspire us to build a multimodal word representation model that can embed syntactic and perceptual information effectively, and the model is called MSP. To this end, two fusion mechanisms have been added to the MSP: a modality-specific gate and a language-specific gate. After constructing the perceptual and syntactic representations, the modality-specific gate uses the seq2seq neural network [2], [32], [35] to explicitly embed syntactic and phonetic information in word representations and train the model based on the supervised method. The second mechanism is a language-specific gate. It uses dynamic fusion methods [52] to assign fusion weights to each modality to increase the adaptability of MSP to different language groups. The reason is that in MSP, phonetic information acts as perceptual information while different languages have different emphases on phonetic information. For example, phonetic languages (such as English) are more dependent on phonetic information than ideographic languages (such as Chinese). In addition, extensive analysis was conducted to clarify the principles of the proposed method. In summary, we have two major contributions: • We propose the multimodal word representation model called MSP. Compared with the existing word embedding models, MSP explicitly embeds syntactic and phonetic information in the model, simulates multimodal information fusion through two gate mechanisms, and obtains a multimodal word representation model with excellent performance through supervised training. The core idea of this model is that it uses supervised training to learn a set of general language information fusion rules. • The use of syntactic information can significantly improve the performance of the multimodal word representation model. On various NLP tasks, we use multiple word representation models and pre-trained language models as baselines to compare the performance and set MSP-with no processing of syntactic information as a control. The task results confirm this conclusion. II. RELATED WORKS Researchers have been working on building multimodal representation models for many years, most of which can be divided into two types. A. JOINT TRAINING MODELS These models build multimodal representations with raw inputs of both linguistic and perceptual resources. The recently introduced work is an extension of the skip-gram model [56]. For instance, Hill et al. 
[10] propose a corpus fusion method that inserts the perceptual features of a word in the training corpus, which is then used to train the skip-gram model. Lazaridou et al. [31] proposed the MMSkip model, which injects visual information in the process of learning linguistic representations by adding a max-margin objective function to minimize the distance between linguistic vectors and visual vectors. The joint training methods implicitly propagate perceptual information to word representations and simultaneously learn multimodal representations. However, the abovementioned models do not introduce syntactic information. This weakens the effect of introducing perceptual information and consequently leads to only limited improvement. Vashishth et al. [53] incorporate syntactic and semantic information in word representations by using graph convolutional networks, and explicit embedded syntactic information effectively improves the performance of the model; however, this model does not introduce perceptual information. B. SEPARATE TRAINING MODELS These models independently learn linguistic and perceptual representations and integrate them afterwards. The simplest approach is concatenation, which fuses linguistic and visual vectors by concatenating them. Concatenation has been proven to be effective in learning multimodal models [8], [10], [11]. Variations of this method apply transformation and dimension reduction techniques, including the singular value decomposition (SVD) [8] and canonical correlation analysis (CCA), to the concatenation result [10]. In addition, Vashishth et al. [53] and Silberer et al. [54] use a stacked autoencoder to learn multimodal representations by embedding linguistic and visual inputs into a common space with the objective of reconstructing the individual inputs. However, the abovementioned methods can only generate multimodal representations of those words that have image information, thus drastically reducing the multimodal vocabulary. Wang et al. [52] build a multimodal model that can dynamically fuse semantic representations of different modalities according to different types of words. In the last two years, the research of constructing multimodal word representation using phonetic information has also been carried out. Zhu et al. [58] propose enhanced double-carrier word representation via phonetics and writing. It trained written embedding based on phonetic embedding and the final word representation fuses writing and phonetic embedding. Zhu et al. [63] use a synchronized way that adopts an VOLUME 8, 2020 FIGURE 1. The four numbers correspond to the four steps of the method. L w , P w and S w are the linguistic representation, perceptual representation and syntactic information of the target word w , respectively. In the fourth step, w and w are semantic relational word pairs. attention model to utilize both text and phonetic perceptual information in unsupervised learning tasks. In terms of the two types of models discussed in this section, MSP belongs to separate training model. Based on the existing researches, the above methods are all effective methods to generate multimodal word representation. However, no matter the joint training model or the separate training model, most of them only focus on the introduction of a class of modality information during the learning process. In contrast, MSP uses gate mechanisms to introduce perceptual information and syntactic information in the one model. Fig. 
1 shows the framework of our proposed MSP, which contains four stages: III. PROPOSED METHOD • Build the perceptual representation -Language comprehension begins with receiving perceptual stimuli. Most linguists believe that sound is the primary perceptual form of language, so the model processes the phonetic feature of words and treats the result as a perceptual representation. • Construct the syntactic information -Janda [27] have experimentally demonstrated that syntactic information plays an irreplaceable role in language comprehension. In MSP, for each word, we construct the probability distribution of the part of speech as the syntactic information. • Modality-specific gates and language-specific gates are used to explicitly embed syntactic information in training and fuse the linguistic representation and perceptual representation. We employ the GloVe and word2vec vectors as our linguistic representations, which are trained using global word cooccurrence statistics. • We design the objective function and train the MSP model using supervised learning. A. CONSTRUCT PERCEPTUAL REPRESENTATION The goal of this phase is to build the perceptual representation P w . According to linguistics, different perceptual information of the word considers different information on concepts. For example, image may include information such as shape and color. By contrast, voice contain the concept of information is less, but the phonetic context and the text context can't be regarded as duplicated, they are a complementary relationship that provides a richer semantic for each other. For example, in the case of disambiguation, ''minute'' has two meanings. When the pronunciation of ''minute'' is ['mınıt], it indicates a time unit, and when it is pronounced [maı'nju:t], it means tiny. For words with similar sounds and different meanings, the text can provide richer semantics for the model (such as ship and sheep), and the difference in their writing helps us distinguish the different meanings of the two words. Moreover, while every word has a corresponding pronunciation, images do not have this natural advantage. In this article, we choose sound, which is the primary perceptual stimulus, as the perceptual information; therefore, the model needs to obtain the phonetic representation of words. Specifically, the automatic segmentation of spoken words has been successfully trained and reported previously [3], [6]. The training audio corpus in the present work has been previously segmented into phonetic words. We use the Mel-scale Frequency Cepstral Coefficient (MFCC) method -a common approach to obtain the phonetic features of the audio -to convert the speech frames of words into vectors. Those vectors contain a considerable amount of noise, such as background noise and speaker characteristics; however, what we want to obtain is the phonetic structure [61], which is not changed by the environment or the speaker. To disentangle the phonetic structure and noise, we use an end-to-end approach to process phonetic vectors and obtain the results as perception representations [58]. B. CONSTRUCT SYNTACTIC REPRESENTATION MSP uses part of speech (POS) information to construct syntactic representations. Part of speech is the most common syntactic structure. It is the result of the classification of words based on grammatical features (including syntactic functions and morphological changes) and helps people to collocate and understand the meanings of words. 
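A rough sketch of obtaining a phonetic feature vector for one segmented spoken word with MFCCs, as described above. The file name, sampling rate, and mean-pooling over frames are assumptions for illustration; the article additionally trains an end-to-end network to separate phonetic structure from speaker and background noise, which is not reproduced here.

```python
import librosa
import numpy as np

def word_mfcc_vector(wav_path, n_mfcc=13):
    # Load the audio segment for one spoken word and compute its MFCC frames.
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    # Mean-pool over frames to obtain a fixed-length phonetic summary of the word.
    return mfcc.mean(axis=1)

vec = word_mfcc_vector("minute_speaker01.wav")  # hypothetical segmented word recording
print(vec.shape)  # (13,)
```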
Modern English words can be divided into fourteen parts of speech, but only five are used most often -nouns, verbs, prepositions, adverbs and adjectives. In this model, GCNW uses WordNet to structure syntactic information. WordNet is an English dictionary based on cognitive linguistics in which the relationship between words is human annotated [14]. It can label the POS tag of a word in each specific context. Handling polysemy is the key to constructing POS features. The problem of obtaining the POS tag can be formulated as p = F(w, c), where F is the mapping function that obtains the corresponding POS tag p based on the target word w and specific context c. First, we use WordNet to label the POS tag of each word in the corpus. Note that the same word may be labeled differently in different contexts. Next, for target word w in the corpus, we count the occurrence Occ w p of each POS p. Overview of the modality-specific gate, where L w , P w and S w represent the linguistic representation, perceptual representation and syntactic feature, respectively, of the target word w . In equation (1), m is the total number of times that word w has occurred in the corpus, Then, Occ w p is normalized; thus, the probability distribution of part of speech of the word w is obtained, (2) Finally, we treat the probability distribution of the POS as the syntactic information of word w and construct it into a feature vector that is used in the next phase. C. GENERATE REPRESENTATION IN MSP In this phase, the model explicitly uses two fusion mechanisms, fusing linguistic representation and perceptual representation, to introduce syntactic information in training. 1) MODALITY-SPECIFIC GATE To simulate the role of syntactic information in language comprehension, namely, the reprocessing of perceptual information, we add a modality-specific gate to the model. The modality-specific gate is basically a seq2seq model based on the attention mechanism [2], [55], which is a training method that transforms sequences in different domains. As shown in Fig. 2 [39] to decode c to obtain output sequence y. For the output sequence [y 1 , y 2 . . . , y i−1 ] and the current i th dimension input X , y i can be expressed as: For the M -dimension phonetic representation used as the input, y i is determined by three factors as g (y i−1 , s i , c i ) the hidden state s i at the i th dimension, the intermediate semantic vector c i , and the output y i−1 at the i-1 th dimension, where s i is related to the hidden state s i−1 , and c i is obtained by equation (4). In equation (4), e ij is the alignment model in the attention mechanism and is used to measure the influence of the j th dimension information of the input sequence on the i th dimension information of the output sequence. The encoder needs to initialize the parameters during training at which time the effect of the syntactic information is reflected. The model uses the syntactic feature vector of word w to initialize the parameters h i and ← h i in training. The network output y i of the end-to-end type is the probability distribution. Softmax calculations are performed on each dimension of the sequence [y 1 , y 2 ,. . . , y M ], and p whose dimension is equal to the input phonetic representation is obtained. Finally, a linguistic representation and p are concatenated to obtain Output ms . 2) LANGUAGE-SPECIFIC GATE In linguistics, languages can be divided into ideographic languages and phonological languages according to the dependence of text and sound. 
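A minimal sketch of the syntactic feature built in equations (1)–(2): count how often each word receives each POS tag across the corpus and normalise by the word's total number of occurrences. The tag set and toy tagged corpus are assumptions; in the article the per-context tags come from WordNet lookups.

```python
from collections import Counter, defaultdict

POS_TAGS = ["noun", "verb", "adjective", "adverb", "preposition"]

def pos_distribution(tagged_corpus):
    """tagged_corpus: iterable of (word, pos) pairs from a tagger / WordNet lookup."""
    counts, totals = defaultdict(Counter), Counter()
    for word, pos in tagged_corpus:
        counts[word][pos] += 1   # Occ_p for word w and tag p
        totals[word] += 1        # m, total occurrences of w
    return {w: [counts[w][p] / totals[w] for p in POS_TAGS] for w in totals}

toy = [("break", "verb"), ("break", "verb"), ("break", "noun"), ("bird", "noun")]
print(pos_distribution(toy)["break"])   # [0.33..., 0.66..., 0.0, 0.0, 0.0]
```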
Ideographic languages (Chinese, etc.) focus more on text than phonological languages (English, etc.). The use of neural networks to dynamically fuse different modalities has been proven to be effective [52]. Based on this observation, in order to improve the applicability of the model, we add the language-specific gate to assign weights for the linguistic representation and the perceptual representation. In the joint training phase, the model uses a neural network to simulate the current language's dependence on different modalities, and the weighted linguistic representation and weighted phonetic representation will be concatenated to obtain Output ls . 3) JOINT TRAINING In this phase, the model will integrate the outputs of the two gates. According to the literature [8], [52], dynamic weighted fusion is an effective method. Thus, we add a set of variable weights {w ms , w ls } to the network to weight the outputs and superimpose the results to generate Output MSP as equation (5). To train the model, WordNet is introduced as the training dataset. WordNet can search the synonym set corresponding to the target word according to semantic conditions, and VOLUME 8, 2020 the semantic similarity is also human annotated. In the joint training phase, according to equation (6), the model first calculates the mean cosine similarity between MSP representations corresponding to words in the synonym set. Then, according to the training objective, the model minimizes the loss, namely, the difference between the mean cosine similarity and the human-annotated similarity. The model performs iterative training, during which the MSP representations will be updated with the network. Suppose the dictionary contains M words, each word w corresponds to N synonyms w, and the human-annotated similarity between w and w is Sim(w, w). To train the model and learn the network parameters, we minimize the objective function as follows: Although WordNet provides a set of annotated synonyms for almost all words, this does not mean that all words can find a synonym set. For some unqualified words, the model deletes them before training. IV. TASK EVALUATION A. BASELINE ALGORITHMS Word2vec is the most common word representation model. It includes two training modes, CBOW and skip-gram. In the tasks, we compare MSP with word2vec implemented with the CBOW structure. GloVe [20] is another efficient word representation model that incorporates global word cooccurrence information. DFM [52] is a multimodal model that uses three novel dynamic fusion methods to assign importance weights to each modality, and the weights are learned under the weak supervision of word association pairs. DCWE [58] is enhanced double-carrier word representation model via phonetics and writing, and it trained written representation based on phonetic representation and the final word representation fuses text and phonetic embedding. DPWR [63] is trained in a synchronized way that adopts an attention model to utilize both linguistic and phonetic information in unsupervised learning tasks. SynGCN [57] incorporates syntactic and semantic information in word embeddings by using graph convolutional networks. GloVe-ph is a multiple information connection model that directly concatenates the linguistic representation and the perceptual representation. MSP is the multimodal word representation model generated by the method described in this article in which the linguistic representation is represented by GloVe. 
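A minimal sketch of the supervised objective outlined around equation (6), under the assumption of a squared-error form: for each word, the mean cosine similarity between its MSP vector and the vectors of its WordNet synonyms is pushed towards the human-annotated similarity. Tensor shapes and the random inputs are illustrative.

```python
import torch
import torch.nn.functional as F

def msp_loss(word_vecs, synonym_vecs, annotated_sims):
    # word_vecs: (M, d), synonym_vecs: (M, N, d), annotated_sims: (M,)
    w = F.normalize(word_vecs, dim=-1).unsqueeze(1)   # (M, 1, d)
    s = F.normalize(synonym_vecs, dim=-1)             # (M, N, d)
    mean_cos = (w * s).sum(dim=-1).mean(dim=1)        # mean cosine similarity per word
    return ((mean_cos - annotated_sims) ** 2).mean()  # gap to the human-annotated similarity

loss = msp_loss(torch.randn(8, 300), torch.randn(8, 5, 300), torch.rand(8))
print(loss.item())
```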
MSP-w2v changes the linguistic representation in MSP from GloVe to word2vec. MSPremoves the modality-specific gate in MSP to verify the effectiveness of the method described in this article. We also compare the pre-trained language models, including ELMo and BERT, on tasks; however, considering the constraints of the pre-trained language model on task types, they are only used for text classification task. ELMo [36] is a pretrained language model that trains a model with multiple BiLSTM layers, and the output of the model is a sentence representation. BERT [19] is a pretrained transformer network model. In the comparative experiment, the model consists of 12 layers, 768 hidden layers, 12 heads, and 110 M parameters. B. EXPERIMENTAL SETUP For the English linguistic representation, we use the 300-dimensional GloVe and word2vec, which are trained on the Common Crawl corpus consisting of 840 B tokens and a vocabulary of 2.2 M words. For the Chinese linguistic representation, we also use the 300-dimensional GloVe and word2vec, and those vectors are trained on the Wikipedia data set and web news corpus and use Jieba 1 for word segmentation. The dimension of the perceptual representation in the MSP is set to 100. To control the dimensions, other word representation models used for comparison are also retrained according to the dimensions of the MSP. The MSP model is implemented by using TensorFlow. We set the initial learning rate to 0.02 and the batch size to 100, and we randomly initialize the parameters of the model according to a normal distribution. We set the minimum word frequency to 5 by default. If a word appears in the document less than 5 times, it is discarded. The related data and code will be posted on GitHub for replication 2 . We use four intrinsic and two extrinsic evaluation methods to evaluate MSP. Intrinsic evaluation methods include concept categorization task, word similarity task, word analogy task and part of speech tagging task. Those methods focus on measuring lexical internal pattern information, such as semantic information. However, a language model that performs well in an intrinsic evaluation does not necessarily produce similar performance in an extrinsic evaluation. Therefore, this chapter added text classification task and text similarity task as extrinsic evaluation methods to verify the applicability of MSP to different types of tasks. C. CONCEPT CATEGORIZATION TASK 1) DATASET AND EVALUATION CRITERION Concept categorization involves grouping nominal concepts into natural categories. For instance, computers and phones should belong to the electronic products class. In our experiments, we evaluate the models on the AP (Almuhareb, 2006), Battig (Baroni and Lenci, 2010), BLESS (Baroni and Lenci, 2011), and ESSLI (Baroni et al., 2008) datasets. We calculate the classification accuracy σ % to evaluate the models, and a higher accuracy corresponds to a better model. Table 1 lists the results of the concept categorization task. Overall, we found that MSP is superior to existing word representation methods in all four data sets, and MSP-w2v also performs well. On average, we obtain an approximately 1.4% absolute increase in performance on the concept categorization task compared to the best performing baseline. The concept classification task needs to calculate the topic similarity (topically related words) between different words rather than the functional similarity (in place substitutable words). 
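A sketch of one way to score the concept categorization task described above: cluster the word vectors into as many clusters as there are gold categories and report cluster purity as the accuracy. The toy vectors and categories are assumptions; in the article the word–category pairs come from datasets such as AP, Battig, BLESS, and ESSLI.

```python
import numpy as np
from sklearn.cluster import KMeans

def categorization_purity(vectors, gold_labels, n_categories, seed=0):
    pred = KMeans(n_clusters=n_categories, n_init=10, random_state=seed).fit_predict(vectors)
    # For each cluster, credit the majority gold label; purity = correct / total
    correct = sum(np.bincount(gold_labels[pred == c]).max()
                  for c in range(n_categories) if np.any(pred == c))
    return correct / len(gold_labels)

rng = np.random.default_rng(1)
vecs = rng.normal(size=(40, 300))
gold = np.repeat(np.arange(4), 10)   # four toy categories, ten words each
print(f"accuracy = {categorization_purity(vecs, gold, 4):.2f}")
```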
The supervised learning method used by MSP in the training captures the topic similarity of words by utilizing the synonymous relationship between words, which provides advantages for the performance of the model on the task. Table 2 lists the information of those datasets. 2) RESULTS AND DISCUSSION The task uses the cosine similarity between a pair of word representations as the similarity of semantics and employs the Pearson correlation ρ to evaluate the relation between the human-annotated semantic similarity and the cosine similarity. A larger ρ indicates a higher correlation and a better model. 2) RESULTS AND DISCUSSION The results are listed in Table 3 and Table 4. For English, when the Pearson coefficient ρ is the evaluation criterion, MSP and MSP-w2v perform the best for all four datasets at 1.1∼5.9% higher than the state-of-the-art baseline models. For Chinese, MSP performs the best for both datasets. These results show that MSP generated better performances than the existing models. However, because the word similarity information is introduced into the objective function, the results of the word similarity task cannot be used alone to prove the good performance of MSP. The addition of the word similarity task is intended to validate the applicability of the model over different language sets. Further analysis shows that the task performances are much lower than those of the text-based models when the linguistic and perceptual representations are directly concatenated. This indicates that the direct concatenating representations increase the information of the word representation, but this approach is not applicable to the subsequent tasks. E. WORD ANALOGY TASK 1) DATASET AND EVALUATION CRITERION This task is to predict word b 2 given three words a 1 , a 2 , and b 1 such that the relation b 1 : b 2 is the same as the relation a 1 : a 2 . We compare models on SemEval-2012 (Jurgens et al., 2012) and MSR (Mikolov et al., 2013c) using the Pearson correlation. 2) RESULTS AND DISCUSSION The evaluation results on the word analogy task are summarized in Table 5. Overall, we find that MSP outperforms all the existing word representation models. Compared to the best performing baseline model, on average, MSP obtains an approximately 3.6% increase in performance. The results demonstrate that the learned VOLUME 8, 2020 representations from MSP more effectively capture the semantic and syntactic properties of words. F. PART-OF-SPEECH TAGGING TASK 1) DATASET AND EVALUATION CRITERION Part-of-speech (POS) tagging aims at associating with each word, a unique tag describing its syntactic role. For evaluating word representation models, we use Lee et al.'s LSTM model [64] on Treebank POS dataset (Marcus et al., 1994) and evaluate performance with tagging accuracy. Table 6 shows the experimental results of part-of-speech tagging task. Compared with the existing word representation models, MSP has a better performance -MSP gets an excellent result like grammar enhancement model SynGCN, which is 2.2% more accurate than the text-based word representation models and 1.5% more accurate than the multimodal models. The introduction of syntactic information effectively improves the performance of multimodal model. 2) RESULTS AND DISCUSSION Combining the results of other intrinsic evaluation tasks, it can be concluded that the word representation generated by the MSP model contain more semantic and syntactic information, and that such information can be used in relevant downstream tasks. G. 
TEXT CLASSIFICATION TASK 1) DATASET AND EVALUATION CRITERION We also perform a text classification task to check our method's applicability. The task is based on several public datasets, including scale, IMDB, and Yelp reviews. The scale v1.0 dataset, which we obtained from (Pang and Lee, 2005), is used as the evaluation dataset; and this dataset contains 5004 samples with review texts labeled with 1-4 stars. The IMDB data set contains 50,000 film reviews, including 25,000 opinion-filled reviews for training and 25,000 reviews for testing; and these data set can be used for classification. We also use Yelp reviews as a dataset, which we obtained from (Zhang et al., 2015). This dataset contains 1,569,264 samples of review texts labeled with 1-4 stars. For the text classification task, we use the mean of the word representations to represent a sentence or document. The text classifier was trained with LIBLINEAR 3 [65]. For the corpus that does not distinguish between the training and testing sets, 75% of the characters are selected as the training set, and the remaining 25% are used for testing. We calculate the classification accuracy σ % to evaluate the models 2) RESULTS AND DISCUSSION Table 7 and Table 8 list the results of the text classification task. Compared to other baseline word representation models, MSP performs the best for all datasets, which shows that MSP not only significantly improves the model performance, but it is also applicable to different downstream tasks. Moreover, other models with embedded syntactic information, such MSP-w2v and SynGCN, also perform well. This shows the effectiveness of the introduced syntactic information for this type of task. When compared to the pre-trained language models, the difference between other models' and MSP's performance on the text classification task is slight. However, BERT and other language models are only applicable to tasks with larger granularity, such as those at the sentence level; and they require extremely large numbers of parameters and training costs. Therefore, MSP has its own advantages in this application. H. TEXT SIMILARITY TASK 1) DATASET AND EVALUATION CRITERION The content of text similarity task is to calculate the similarity s 1 of a pair of sentences, and then measure the performance of the model by comparing the difference between the similarity s 1 and the similarity s 2 of manual annotation. We superimpose the word vectors in the sentence, express the average vector as the sentence representation, and take the cosine similarity between the two sentence vectors as the similarity s 1 . Pearson correlation coefficient is used to calculate the correlation between s 1 and the s 2 . We experimented with the SICK and STS datasets. The SICK data set contained 9,927 pairs of sentences (4,500 pairs of training sets /4,927 pairs of test sets /500 pairs of validation sets). The STS data set consists of 8,628 sentence pairs, divided into training sets (5,749 of training sets /1,500 of test sets /1,379 of verification sets). Table 9 list the results of the text similarity task. According to the results, MSP performs best across all data sets. Compared with text-based word representation and multimodal word representation without introduction of syntactic information, the results obtained by MSP are improved by 0.016 and 0.012 respectively. 
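A minimal sketch of the extrinsic setup described above: each document is represented by the mean of its word vectors and classified with a linear model (scikit-learn's liblinear-backed LogisticRegression stands in here for the LIBLINEAR classifier). The tokenisation, embeddings, and toy data are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def doc_vector(text, embeddings, dim=300):
    # Average the vectors of in-vocabulary words; zero vector if none are known.
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

rng = np.random.default_rng(3)
emb = {w: rng.normal(size=300) for w in ["great", "awful", "film", "food"]}
texts = ["great film", "awful food", "great food", "awful film"] * 10
labels = np.array([1, 0, 1, 0] * 10)

X = np.stack([doc_vector(t, emb) for t in texts])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = LogisticRegression(solver="liblinear").fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```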
2) RESULTS AND DISCUSSION Based on the results of the extrinsic evaluation methods in this chapter, it can be concluded that MSP not only performs well in the intrinsic evaluation method, but also gets similar results in the extrinsic evaluation, which indicates that MSP not only can effectively improve the internal mode information represented by words, but also has good applicability for different types of tasks. V. MODEL ANALYSIS Compared with the existing word embedding models, MSP achieves a great improvement. Its gate mechanisms effectively integrate multimodal information, which is reflected by its good performance. The MSP consistently performs better than the MSP-model on all task results; and when MSP removed the modality-specific gate, the performance of the model experienced a significant decrease but was still higher than that of GloVe-Ph. This suggests that after the removal of the modality-specific gate, the model loses the reinforcement effect of syntactic information. However, language-specific gates still play a role in adjusting the weights of the modality; and without this mechanism, MSP would completely degenerate to GloVe-ph. For the text classification task, when compared to other text-based models and multimodal models, MSP is still better than MSP-and has the best performances in three datasets. Moreover, the improvement effect is better than those for the other tasks, indicating that the introduced syntactic information plays a role in making MSP more suitable for tasks that utilized syntactic information. The applicability of MSP to different languages is also quantitatively analyzed. Table 10 presents the combination weights of the linguistic and perceptual representations learned in language-specific gates for English and Chinese. The ratio between the linguistic information and perceptual information was 0.8225:0.1775 for English and 0.8976:0.1024 for Chinese. Linguistic representation has a higher weight for both languages, which indicates that text is more important for carrying information. However, phonetic languages such as English have a stronger dependence on phonetic information than ideographic languages such as Chinese, which is in line with the linguistic viewpoint. The above results indicate the following: • MSP is a word embedding model with better comprehensive performance because the MSP includes extra multimodal information and uses effective mechanisms to process that information. This is demonstrated in a series of tasks. • Adding syntactic information can effectively improve the performance of the model. Similar to perceptual information, syntactic information is also needed for building multimodal representations and can effectively improve the performance of the model on downstream tasks. • MSP is applicable to different languages. The learned weights show clear differences between phonetic and ideographic languages. VI. CONCLUSION Based on the observation that almost all previous multimodal models only focus on introducing perceptual information VOLUME 8, 2020 and ignore syntactic information, we propose the new multimodal word representation model MSP. MSP uses two fusion mechanisms to embed explicit syntactic information and phonetic information and uses supervised training to learn performance-enhancing multimodal word representations. Experimental evaluations show that our proposed model achieves substantial gains on all benchmarks. Qualitative analysis further proved the validity and applicability of MSP. 
As one of the main research directions related to the development of language representations, the performance of multimodal models depends not only on the source of the perceptual information but also on the method used to incorporate that information. Such an incorporation method should not be limited to the incorporation of only two kinds of information and should also be capable of incorporating information from more than two modes. Future work includes exploring better representations of semantic words by combining information from other modalities. We believe that the multimodal model is of great significance in promoting the development of applications related to natural language processing. SHUANG LIU is currently pursuing the master's degree with Shanghai University. His main research fields include artificial intelligence, natural language processing, and machine learning. CHAOMING LIU is currently pursuing the master's degree with Shanghai University. His main research fields include artificial intelligence, natural language processing, and machine learning. XIAOYA YIN is currently pursuing the master's degree with Shanghai University. Her main research fields include artificial intelligence, natural language processing, and machine learning. XIAPING XV is currently pursuing the master's degree with Shanghai University. Her main research fields include artificial intelligence, natural language processing, and machine learning. VOLUME 8, 2020
2020-12-10T09:07:57.716Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "c07e326daf3068b61f08e82282d4f2d5555006d1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1109/access.2020.3042183", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "e39c536da2da410de9c61c0bcec72473329d3a76", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
261886638
pes2o/s2orc
v3-fos-license
Association between maternal breastmilk microbiota composition and rotavirus vaccine response in African, Asian, and European infants: a prospective cohort study. Background. Maternal breastmilk is a source of pre- and pro-biotics that impact neonatal gut microbiota colonisation. Since oral rotavirus vaccines (ORVs) are administered at a time when infants are often breastfed, breastmilk microbiota composition may have a direct or indirect influence on vaccine take and immunogenicity. Methods. Using standardised methods across sites, we compared breastmilk microbiota composition in relation to geographic location and ORV response in cohorts prospectively followed up from birth to 18 weeks of age in India (n = 307), Malawi (n = 119), and the UK (n = 60). Results. Breastmilk microbiota diversity was higher in India and Malawi than the UK across three longitudinal samples spanning weeks of life 1 to 13. Dominant taxa such as Streptococcus and Staphylococcus were consistent across cohorts; however, significant geographic differences were observed in the prevalence and abundance of common and rare genera throughout follow-up. No significant associations were identified between breastmilk microbiota composition and ORV outcomes including seroconversion, post-dose 1 vaccine shedding, and/or post-vaccination rotavirus-specific IgA level. Conclusions. Our findings suggest that breastmilk microbiota composition may not be a key factor in shaping trends in ORV response within or between countries. Given the additional amplification involved in library preparation for breastmilk samples, reads were frequently detected in extraction controls (n = 56 individual or pooled controls with >10,000 reads after the filtering steps above). Several additional filtering steps were therefore included. First, we retained RSVs if they were detectable at ≥0.1% abundance in ≥1% of breastmilk samples from at least one country. Second, we applied prevalence-based filtering using the decontam package with a p value threshold of 0.05 to exclude RSVs that were more common in extraction controls. Finally, we removed samples if their mean Bray-Curtis distance (based on either weighted or unweighted metrics) from breastmilk extraction controls was smaller than their mean distance from other breastmilk samples collected from the same country (Supplementary Figure 1). [...] were detected with a prevalence of >5% in at least one of the groups being compared. We supplemented cross-sectional analyses with longitudinal mixed-effects models of Shannon index and taxon abundances (zero-inflated negative binomial models of genus-level sequence counts), including week of life as a covariate and study ID as a random effect. Genera were included in longitudinal models if they were present in 20% of samples in a given country.
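A sketch of the final sample-exclusion rule described above: a breastmilk sample is dropped if its mean Bray–Curtis distance to the extraction controls is smaller than its mean distance to the other breastmilk samples from the same country. The count matrices below are simulated placeholders, and the prevalence-based decontam filtering (done in R) is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import braycurtis

def keep_sample(sample, same_country_samples, control_samples):
    d_ctrl = np.mean([braycurtis(sample, c) for c in control_samples])
    d_milk = np.mean([braycurtis(sample, m) for m in same_country_samples])
    return d_milk < d_ctrl   # keep only samples closer to real breastmilk than to controls

rng = np.random.default_rng(4)
milk = rng.poisson(5, size=(10, 200)).astype(float)      # 10 samples x 200 RSVs (simulated)
controls = rng.poisson(1, size=(5, 200)).astype(float)   # 5 extraction controls (simulated)
keep = [keep_sample(s, np.delete(milk, i, axis=0), controls) for i, s in enumerate(milk)]
print(keep)
```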
[...] Figure 5). Genera underlying the predictive accuracy (based on mean importance scores) were consistent with the discriminant taxa described above (Supplementary Table 1). We also assessed alpha and beta diversity of breastmilk samples in relation to individual-level variables measured in each cohort (Figure 2). [...] discriminant genera were identified with respect to secondary outcomes based on longitudinal models of genus abundance (Supplementary Table 2). Breastmilk is a key source of pre- and pro-biotics that shape infant gut microbiota configuration. This, in turn, plays a pivotal role in shaping immune development. We documented significant differences in breastmilk microbiota composition between Malawi, India, and the UK across the first 13 weeks of life. However, no consistent differences in breastmilk microbiota composition were observed with respect to ORV response. [...] breastmilk microbiota composition and ORV response in this cohort. Because of their low biomass, breastmilk samples were subjected to extra rounds of PCR amplification to attain adequate material for sequencing (35 cycles vs 25 used for stool), leading to amplification from extraction controls. We accounted for this via stringent abundance- and prevalence-based filtering of potential contaminants and excluded samples which clustered among extraction controls rather than other breastmilk samples. Nonetheless, the potential contribution of contamination and site-specific batch effects to the observed trends cannot be discounted. [...] microbiota sequencing work. Above all, we are grateful to the families involved in the study.
Research Dissemination Conference, Blantyre, Malawi, 24th-25th November 20
2022-11-12T02:02:42.898Z
2022-11-11T00:00:00.000
{ "year": 2022, "sha1": "1e3df310790e623117fe6d0dd8eec985b1c323b4", "oa_license": "CCBY", "oa_url": "https://archive.lstmed.ac.uk/22749/1/jiad234.pdf", "oa_status": "GREEN", "pdf_src": "MedRxiv", "pdf_hash": "1e3df310790e623117fe6d0dd8eec985b1c323b4", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1792024
pes2o/s2orc
v3-fos-license
Reduction of Leaching Impacts by Applying Biomass Bottom Ash and Recycled Mixed Aggregates in Structural Layers of Roads This research is focused on analyzing the environmental pollution potential of biomass bottom ashes as individual materials, as mixtures manufactured with biomass bottom ashes and granular construction aggregates, and these mixtures treated with cement. For the environmental assessment of all of the samples and materials mentioned, the following leaching procedures have been performed: the compliance batch test of UNE-EN 12457-3:2003 for aggregates and bottom ashes; the column test according to NEN 7343:1994 for the mixtures prepared in the laboratory; and the tank test by EA NEN 7375:2004 for analyzing the behavior of mixtures after their solidification/stabilization with 5% cement. After the discussion of the data, the reduction of the pollution load of the most hazardous biomass bottom ashes after their combination with different aggregates can be confirmed, which implies their possible application in civil infrastructures, such as filler embankments and road construction layers, without negatively impacting the environment. In addition, the positive effect of the stabilization/solidification of the cement-treated mixtures was proven, with a reduction of the heavy metals that were released at the highest levels, namely As, Hg, Cr, Ni, Cu, Se and Mo. Introduction In industrialized countries, it is expected that the future generation of electricity will be from the direct combustion of residues and wastes obtained from biomass [1]. Biomass boilers are a medium for efficiently combusting biomass. Boiler combustion processes are known to produce large amounts of bottom ash. For that reason, the primary concerns are ash storage, disposal and usage. The major inherent biomass bottom ash (BBA)-forming elements in biomass include Ca, Si, Al, Ti, Fe, Mg, Na, K, S and P [2,3]. Bottom ashes are composed of minerals that were either absorbed by the biomass or incorporated into the biomass during harvesting, and unburned organic matter. Although previous studies have demonstrated the mechanical aptitude of bottom ashes for minor constructive uses, their potential reuse in civil engineering works is determined by their chemical and physical properties [4]. Various researchers have investigated different processing methods to improve the mechanical properties of BBA for use in engineering applications. Due to the huge variety of biomass fuel sources with differing ash properties, the identification of the physical, chemical and environmental characteristics of ashes will provide valuable information as to the likely methods for optimizing their processing and reuse as secondary materials [5]. Currently, the construction industry uses fly ashes in different products, such as a cement replacement in concrete [6], for soil stabilization, as road base [7,8] and in cement production [9]. Additionally, bottom bed ashes from the combustion of forest biomass residues are used as substitutes. Oil cake is the most effective biofuel of all. It is a by-product of olive oil production consisting of seed particles and the fleshy parts of the olive [20]. It contains a large amount of organic substances, of which approximately 4% is oil, which gives oil cake a high potential for energy production [21,22].
This explains why oil cake and olive trees are the biomass components that are burned in higher volumes in all tested plants: approximately 72% of the burning in BA was oil cake, while the BB plant's combustion was 40% wood biomass (poplar, oak and pine) and BC had a lower amount of oil cake (29%). After the combustion of the biomass fuel in the high efficiency modern combustion equipment (such as boilers or stoves) of the power plants, mineral constituents of the fuel, in the form of oxides or salts, fall into two components: fly ash, very fine particles that are carried in the flue gases, and bottom ash, larger particles that fall through the grate during combustion [14]. The general characterization of the tested biomass bottom ashes is listed in Table 2 as a general summary of the physical and chemical properties. After the physical characterization of the BBA samples, it was observed that they were composed of extremely porous particles with rough surface textures. Additionally, the water absorption was measured; for construction materials, it is an important factor to consider because many of the physical parameters of bottom ash are altered in the presence of excess water [23,24]. Water absorption values ranged between 21.75% and 38.74%, notably higher than the absorption levels of aggregates. Recycled and Natural Aggregates As has been previously mentioned, the present study provides an environmental assessment of mixtures of BBA/construction aggregates that could be applied as secondary construction materials. For that reason, it is necessary to include the material properties of aggregates that were physically, chemically and environmentally (according to UNE-EN 12457-3 [19]) characterized prior to their environmental characterization. The recycled mixed aggregate (RMA) was manufactured in the treatment plant of Construction and Demolition Wastes (CDW) of the Gecorsa Company, located in Córdoba (Andalusia). The material was composed of 70%-90% concrete and 10%-30% ceramic particles. In this plant, prior to the treatment of the recycled aggregate, the blocks were previously subjected to a cleaning process. Additionally, manual and mechanical selections were performed to separate different compounds from the waste, such as wood, plastic or iron. The production control was performed according to the Standard. The natural aggregate (NA) was extracted from a material accumulation close to road construction work that was chosen due to its high plasticity. Table 3 shows the information concerning the general properties of both aggregates. From granulometric analysis, it can be stated that the grain size distribution of NA presented a higher fineness than RMA (<0.063 mm). Compared with the BBA, it is observed that the NA and RMA presented higher density values. However, BBA and RMA did not present plasticity, while the natural material NA did present high plasticity values, as indicated in previous paragraphs. Mixtures of Aggregates and Bottom Ashes Because the biomass bottom ashes have been demonstrated as unsuitable for use in material construction with bearing capacity, the present research analyzes the behavior of the BBA/construction aggregate mixtures. The dosages of materials in the mixtures come from previous experiments [35] in which it was shown that these dosage percentages are feasible for the use of such mixtures in civil engineering. 
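As a rough illustration of how such a dosage is handled on a dry-weight basis, the sketch below computes component masses for an 85/15 blend and the water retained by absorption; the absorption values are placeholders chosen within the ranges reported above, not measured data.

```python
def mix_batch(total_dry_kg: float, bba_fraction: float = 0.15,
              absorption_bba: float = 0.30, absorption_agg: float = 0.05):
    """Dry-mass dosage of a BBA/aggregate blend (dosage expressed on dry weight,
    as in the mixtures studied here) and an estimate of the water taken up by
    absorption. Absorption values are illustrative placeholders."""
    m_bba = total_dry_kg * bba_fraction
    m_agg = total_dry_kg * (1.0 - bba_fraction)
    absorbed_water = m_bba * absorption_bba + m_agg * absorption_agg
    return {"BBA_kg": m_bba, "aggregate_kg": m_agg, "absorbed_water_kg": absorbed_water}

# Example: a 10 kg laboratory batch of the 85/15 blend
print(mix_batch(10.0))   # {'BBA_kg': 1.5, 'aggregate_kg': 8.5, 'absorbed_water_kg': 0.875}
```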
The experimental procedure consists of studying the leaching behavior of three BBA samples (representative of each combustion plant) that exhibited the greatest release levels of heavy metals. The leaching characterization of the mixtures BBA/NA and BBA/RMA, considered to be granular materials that could potentially be used in civil work, such as roads, was conducted using the percolation test NEN 7343:1994 [36]. According to the previous discussion, the dosage used was 15% BBA and 85% NA or RMA, expressed as the dry weight of the aggregates. The last experimental stage of the present research was the solidification/stabilization of both mixtures with 5% cement. These monolithic samples were environmentally characterized according to the Dutch diffusion test EA NEN 7375:2004 [37], and the most significant findings are included in the present work. To facilitate understanding of the experimental design developed by the present work, Figure 1 provides a graphical diagram regarding the environmental assessment process performed. Compliance Test UNE-EN 12457-3:2004 [19] Compliance testing was conducted to check whether the 30 BBAs satisfy European regulations. To classify those materials according to the EU Landfill Directive, not only heavy metals but also inorganic anions were measured.
The UNE-EN 12457-3:2004 [19] procedure consists of a two-step batch leaching test that uses a solution of 175 g of a dry sample of the material, two liquid/solid (L/S) ratios (an L/S of 2 and an L/S of 10) and deionized water as a leaching liquid. This method involves stirring the solution in two steps. In the first step, the solution is shaken for 6 ± 0.5 h with an L/S of 2, and the second step uses the same fraction with stirring of the solution for an additional 18 ± 0.5 h, after adding water to obtain an L/S ratio of 10. In both stages, the samples are left to decant, and the pH, conductivity and temperature are measured. The solution is filtered using a membrane filter (0.45 µm), and a subsample of the leachate is taken for each material for further analysis. By this procedure, 30 samples of BBA from the combustion of pine, poplar, oak, eucalyptus, olive and oil cake (composition shown in Table 1) were tested according to the compliance test with the following sample identification code (Table 4). In addition, the natural aggregate (NA) and the recycled aggregate (RMA) were analyzed by the compliance test, as both materials were mixed with BBA to study how the pollution load of the BBA was affected. Percolation Test NEN 7343:1994 [36] The column test described by the standard NEN 7343 is thought to simulate the leaching behavior of a waste material by relating the accumulated released amount of a contaminant, expressed as mg/kg leached, to the liquid/solid ratio. In each column, the leachates were collected at L/S ratios of 0.1, 0.2, 0.5, 1, 2, 5, and 10 L/kg. The translation of the time scale makes it possible to quantify the retention in the matrix, simulating the release progress of a contaminant during the second life cycle of the material [38]. However, the main limitation of this procedure is that laboratory results do not translate directly to field conditions because of factors such as temperature, channeling, the degree and duration of contact with water, ageing effects (carbonation) and others [39,40]. The columns were designed with an inner diameter of 5 cm and a length of 20 cm. The columns were closed with flanges that were sealed. Depending on the material, between 0.5 and 0.7 L was needed to fill the volume of the column. The leachant was deionized water acidified with nitric acid of analytically pure quality to pH = 4 ± 0.1. The pH is not controlled during the test. Therefore, the waste dictates the chemical conditions in the pore-solution. All columns were operated concurrently using multi-channel peristaltic pumps. To evaluate the environmental risk of BBAs when they are mixed with other aggregates, nine samples of mixed ashes were tested according to the column test with the following sample identification code (Table 5). It is well known that the leaching of a monolithic material under normal conditions of exposure is essentially governed by diffusion [41,42]. Consequently, the most suitable test for laboratory simulation of the material leaching behavior on site is the tank test. The tank test, as defined by NEN 7375:2004 [37], used concrete cylinders (Ø 10 cm × 12.5 cm) that were cured for 28 days at 20 ± 2 °C and >90% relative humidity (RH), then immersed in a given volume of demineralized water (liquid/solid = 4) and kept in static conditions at a temperature of 20 ± 2 °C.
To evaluate the environmental risk of BBAs when they are mixed with other aggregates and cement treated, six samples of mixed-cemented ashes were tested by the diffusion test with the following sample identification code (Table 6): NA-BA-C, NA-BB-C and NA-BC-C denote mixtures of 85% natural aggregate + 15% BBA cemented with 5% CEM II, and RMA-BA-C, RMA-BB-C and RMA-BC-C denote mixtures of 85% recycled aggregate + 15% BBA cemented with 5% CEM II. Chemical Analysis of Eluates Once each leaching test was completed, filtered and stored leachates corresponding to each step were analyzed by inductively coupled plasma mass spectrometry (ICP-MS) using a Perkin Elmer ELAN DRC-e spectrometer (Perkin Elmer, Waltham, MA, USA). Despite the wide range of elements measured by the ICP (83 elements), the present study is focused on the 12 heavy metals regulated by the Landfill Directive 2003/33 of the European Council regarding the legal criteria and procedures for the acceptance of waste at landfills, the only environmental regulation that is currently accepted by the Spanish Government. Thus, the study includes data concerning the following elements: arsenic (As), lead (Pb), cadmium (Cd), chromium (Cr), copper (Cu), mercury (Hg), nickel (Ni), zinc (Zn), barium (Ba), molybdenum (Mo), selenium (Se) and antimony (Sb). Test results (mg of leached element per liter of leachate, mg/L) were transformed into accumulated emissions (mg of leached element per kg of aggregate, mg/kg) to compare these values with the established limit values, according to the following expression [43]: C_x (mg X/kg aggregate) = (mg X/L extracting solution) × (L extracting solution/kg aggregate) (1), where C_x is the concentration of constituent X. Classification of Materials as a Function of Their Pollutant Potential Leachate concentrations according to the compliance test are shown in Tables 7-10. Thirty-two materials (30 BBAs, a natural aggregate (NA) and a recycled aggregate (RMA)) have been classified according to the limit values regulated by the Landfill Directive 2003/33/EC. Green represents inert materials, inert value limits that are exceeded are given in bold and yellow, while non-hazardous limits that are exceeded are underlined and in red. As and Hg are identified as the most conflictive elements. The concentration of As present was at hazardous levels in 43.3% of the 30 samples, and at non-hazardous and inert levels in the other 56.7%. Hg was detected at hazardous concentration levels in 20% of the samples and at non-hazardous and inert levels in the other 80%. Other relevant elements are Cr, Ni, Cu, Se and Mo (exceeding the inert limit values in most cases). Based on the results of the compliance test, due to the high contaminant potential of BBAs, they cannot be applied in civil engineering as isolated materials. For that reason, the motivation of the present study is to analyze how to reduce the pollutant potential of those products to apply them as secondary materials in construction works. That is why this study proceeded to analyze mixtures of BBA with other materials to evaluate if reducing the volume of BBA in the sample reduced the contaminant load. Regarding the dosage of the mixtures, prior studies were reviewed. Thus, previous works [35] have proven that mixtures composed of 10%-15% BBA with other aggregates present an appropriate physical and mechanical behavior for use as materials in embankments or road layers [13].
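A minimal sketch of the conversion in Equation (1) and of a classification against the Landfill Directive limits is shown below; the limit values used in the example are placeholders, not the Directive's actual figures.

```python
def accumulated_emission(conc_mg_per_L: float, ls_ratio_L_per_kg: float) -> float:
    """Equation (1): C_x [mg/kg aggregate] = (mg X / L eluate) * (L eluate / kg aggregate)."""
    return conc_mg_per_L * ls_ratio_L_per_kg

def classify(emission_mg_per_kg: float, inert_limit: float, non_hazardous_limit: float) -> str:
    """Classify a single element against placeholder Landfill Directive limit values."""
    if emission_mg_per_kg <= inert_limit:
        return "inert"
    if emission_mg_per_kg <= non_hazardous_limit:
        return "non-hazardous"
    return "hazardous"

# Example with made-up numbers: 0.08 mg/L of As measured in the L/S = 10 eluate
emission = accumulated_emission(0.08, 10.0)                              # 0.8 mg/kg
print(classify(emission, inert_limit=0.5, non_hazardous_limit=2.0))      # "non-hazardous"
```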
To establish the comparison of results between a recycled material and natural aggregate (which should provide better environmental conditions), two groups of samples were prepared in the laboratory: 85% NA with 15% BBA, and 85% RMA with 15% BBA. Due to the intended application of the materials in civil engineering works (located outdoors and subjected to environmental phenomena), it is necessary that the laboratory study for leaching characterization closely simulates the effect of rain episodes percolating through the granular material and corresponds to engineering applications in which this type of material has proven to be suitable and feasible [44][45][46]. Figure 2 shows the results of a statistical analysis performed on data from the UNE EN 12457-3 [19] procedure for all of the tested BBA. The analysis is focused on the heavy metals regulated by the EU Landfill Directive. However, Cd and Pb were not included in the analysis because their detected amounts were negligible. The results of the test are shown by means of whisker plots. The first quartile indicates the lowest 25% of the data set, the median separates the lower and upper 50% of the data set, and the third quartile bounds the lowest 75% of the data set. The data from the statistical analysis are summarized in Table 11. Table 11 shows standard deviations for most of the studied metals. Higher standard deviations were observed only in the Cu values for L/S = 2 L/kg and L/S = 10 L/kg.
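For reference, the whisker-plot quartiles and the standard deviations summarized in Table 11 correspond to the following simple computation (the array contents are illustrative, not measured values):

```python
import numpy as np

# Illustrative leachate emissions (mg/kg) of one element across the 30 BBAs
values = np.array([0.12, 0.30, 0.45, 0.51, 0.73, 0.90, 1.10, 1.42, 2.05, 3.10])

q1, median, q3 = np.percentile(values, [25, 50, 75])    # whisker-plot quartiles
print(q1, median, q3, values.std(ddof=1))                # plus sample standard deviation
```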
Data from the Percolation Test The analysis performed according to the Dutch percolation test, which is itself based on the standard NEN 7343:1994 [36], was focused on the heavy metals that have been identified as more conflictive according to the results of the prior section (compliance test data): Cr, Ni, Cu, Se, Mo, As and Hg. In addition, one representative sample was taken from each of the three combustion power plants to perform a representative study of the different BBAs produced in the region of Andalusia. Therefore, from each plant, the BBA with the highest contaminant load according to the data provided in Tables 7-9 was selected. All graphics include the inert legal limit value for the percolation leaching test data imposed by the Landfill Directive at the L/S of 0.1 L/kg (IL-LS0.1, grey rhombus) and the non-hazardous limit (NHL-LS0.1, grey short line). Thus, the samples that have exceeded this limit are marked, and the material (BBA or mixtures) is classified as non-hazardous. According to the results, the release of As exceeded the inert limit in all samples, with the BBA and mixtures classified as non-hazardous materials according to the percolation data. Regarding the elements Cr, Ni, Cu, Se and Mo, which were detected as the most contaminant-laden in the BBA samples by the compliance test, the column tests performed for the mixtures of BBA with aggregates show that their pollutant potential is even higher and, in most cases, the inert limit is clearly exceeded; the highest release was obtained for the three BBAs, BA-10, BB-3 and BC-7. The patterns described during the percolation test and, as a consequence, the cumulative percolation curves, were quite similar in both the BBA and the mixtures. However, as expected, the cumulative releases of BA-10, BB-3 and BC-7 were the highest of the total data represented. Obviously, and according to the results illustrated in Table 10, the mixtures of BBA with NA presented the lowest release levels, while the RMA mixtures showed higher pollutant levels. The most noteworthy difference between both types of mixtures is observed in the release pattern of Cr: in all cases, the percolation curves of Cr in RMA mixtures were markedly higher than the NA mixture and BBA curves.
That difference is caused by the high content of Cr in the ceramic particles present in recycled aggregates from construction and demolition waste (CDW) [47]. After the discussion of the percolation data, the reduction of the pollution load of hazardous BBAs (BA-10, BB-3 and BC-7) following their combination with aggregates can be confirmed. This reduction implies the possibility of reuse as secondary materials in construction works rather than discarding them and depositing them in landfills (which favors a negative environmental impact and does not contribute to increasing the value of industrial by-products). Accordingly, the environmental assessment has confirmed the feasibility of the combination of both materials for use in civil works. To expand the scope of the study within the framework of civil engineering, the following section analyses the pollution potential of BBA mixtures treated with cement. Previous studies have proven their viability [44]. For that, the Dutch diffusion test was performed according to the standard EA NEN-7375:2004 [37] for the monolithic samples prepared in the laboratory (six BBA mixtures of RMA and NA with 5% cement). In order to evaluate the immobilization degree reached with the experimental procedure developed, Table 12 shows the percentage of reduction of the release levels measured in the BBA samples (BA10, BB3 and BC7, representative of all the studied combustion plants) compared with the release levels obtained for the mixtures with the natural material NA and the recycled aggregate RMA. For establishing the comparison and calculation of the ratio, the last data point (cumulative release at L/S of 10 L/kg) of the column test was considered.
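A brief sketch of the cumulative percolation curves and of the reduction percentage evaluated at L/S = 10 L/kg is given below; since Equation (2) is not reproduced in the text, the reduction formula used here is an assumed (conventional) definition, not necessarily the one applied in Table 12.

```python
import numpy as np

LS_STEPS = np.array([0.1, 0.2, 0.5, 1, 2, 5, 10])   # L/S ratios of the collected fractions (L/kg)

def cumulative_release(fraction_conc_mg_per_L: np.ndarray) -> np.ndarray:
    """Cumulative emission (mg/kg) along the column test: each fraction contributes
    its concentration times the L/S increment it covers (simplified bookkeeping)."""
    ls_increments = np.diff(LS_STEPS, prepend=0.0)
    return np.cumsum(fraction_conc_mg_per_L * ls_increments)

def reduction_percent(release_bba: float, release_mixture: float) -> float:
    """Assumed immobilization ratio: percentage reduction of the cumulative release
    at L/S = 10 L/kg of the mixture relative to the pure BBA."""
    return 100.0 * (release_bba - release_mixture) / release_bba

# Example with made-up fraction concentrations for a single element
bba_curve = cumulative_release(np.array([1.2, 1.0, 0.8, 0.6, 0.4, 0.2, 0.1]))
mix_curve = cumulative_release(np.array([0.4, 0.3, 0.25, 0.2, 0.15, 0.08, 0.04]))
print(reduction_percent(bba_curve[-1], mix_curve[-1]))
```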
The immobilization ratio is shown in Table 12, calculated according to Equation (2). Data from the Diffusion Test After verifying the reduction of the pollution load of the BBA after being mixed with RMA and NA, this investigation advanced a step further by analyzing the behavior of the stabilized/solidified mixtures. The tank test or diffusion process through the Dutch EA NEN 7375:2004 [37] was performed for the monolithic cylindrical samples (Ø 10 cm × 12.5 cm) prepared in the laboratory with 5% cement. This percentage was chosen because it is the most common dosage for civil engineering materials used as road base and sub-base [48,49]. The objective is to prove the reduction of the pollution load of granular materials after their treatment with cement. Thus, the environmental benefit will be shown in addition to checking that these materials can be applied as construction materials (which was previously demonstrated by other researchers). Figure 6 shows the diffusion curves of the most conflictive elements detected in 6 monolithic samples cemented with CEM II/BL 32.5. Figure 6 illustrates the diffusion release curves of the elements As, Cr, Se and Ni. Additionally, a slope of 1:0.5 (the grey line) is represented graphically to facilitate the identification of the mechanisms that govern the release. Previous researchers have proven that pure diffusion-controlled release implies a 1:0.5 slope [50]. Additionally, other patterns can be observed in the graphic, such as the depletion of elements, which describes a flat line during different periods of time. The patterns of diffusion have been described according to Figure 6. The element present at the highest levels in the tested BBA was As. It presented mixed behavior: in the first stages, until the fourth day, the release was very stable and there were hardly any significant differences with time. However, from the fourth day on, the diffusion curve of As was parallel to the slope 1:0.5 and, therefore, the governing mechanism was the diffusion of the element. Cr presents a similar pattern to that of As, but Cr showed depletion in the last days of the test. It must be noted that for the monolithic samples (after the treatment with cement of the RMA and NA mixtures), no differences were observed between the release diffusion curves of both mixtures. This implies that the treatment with cement is isolating the material matrix, and that the expected higher Cr release in the mixture due to the presence of ceramic particles in RMA is not produced in the monolithic samples. This phenomenon occurred in the granular samples RMA-BA, RMA-BB, RMA-BC due to the absence of isolation of their internal matrix. From the fourth day on, Se exhibited a curve parallel to the 1:0.5 slope; this behavior was more evident in the RMA mixtures.
Finally, Ni presented a flat phase for most of the test, which demonstrated its low solubility. Only from day 19 was its release curve parallel to the diffusion slope. Comparison between the Percolation and Diffusion Tests The present section is focused on evaluating the reduction of the contaminant load of the more contaminated BBAs: BA-10, BB-3 and BC-7. The data treatment was performed for Sections 4.3 and 4.4 as follows. The standard EA NEN 7375:2004 [37] contains, in Section "8. Calculation", the formula for the calculation of the measured leaching per fraction: Ei* = (Ci × V)/(f × A), where Ei* is the measured leaching of a component in fraction i in mg/m², Ci is the concentration of the component in fraction i in µg/L, V is the volume of the eluate in L, A is the surface area of the test piece in m², and f is a conversion factor: 1000 µg/mg. The measured cumulative leaching εn in each period (n = 1 to N) was calculated by: εn = Σ Ei* (summed over fractions i = 1 to n), for n = 1 to N, where εn is the measured cumulative leaching of a component for period n consisting of fractions i = 1 to n in mg/m², Ei* is the measured leaching of a component in fraction i in mg/m², and N is the number of periods, equal to the number of specified replenishment times (N = 8). According to Equation (3), the cumulative curves were obtained and represented in Figure 5. To compare the data of the percolation test with the diffusion curves, Equation (4) was applied. Thus, concentrations expressed in µg/L were transformed into mg/m², and the data are presented in Figure 6. To compare the results of both tests, the leachate value of the percolation test at an L/S of 2 L/kg is adopted. This value was transformed into the unit of time to superimpose the data on the diffusion curves (Figure 7). The L/S ratio depends directly on the volume (expressed in cm³) and the dry mass (d.m.) of the sample (kg d.m.) because the process described by the standard EA NEN 7375:2004 [37] requires a constant water flow of 0.3 mL/min. Starting from these variables, it was decided to adopt the data at L/S = 2 L/kg (this being the L/S commonly used in compliance leaching tests) to superimpose the diffusion and percolation data. According to the percolation test procedure, this L/S ratio is reached after 77.78 h, which is 3.25 days, as observed in the x-axis of the graphs of Figure 7. As a function of the results obtained, only the two more conflicting elements, from the environmental point of view, As and Cr, have been chosen to be studied in depth. Regarding the release of As in leachates, identical diffusion curves are obtained in both mixtures. Thus, no differences are observed between the mixtures with the natural aggregate, NA, and the recycled one, RMA. Comparison highlights the significant decrease in As released by the monoliths (NA-BA-C, NA-BB-C and NA-BC-C) compared to the high value of metal released from the samples in their granular form (NA-BA, NA-BB and NA-BC). The data are consistent with previous research studies [51]. Regarding the leaching behavior of Cr, a different response was observed depending on the type of aggregate used for the mixture elaboration. The fundamental difference lies in the composition of the leachate of NA and RMA (see Table 10). While the natural aggregate released only 0.001 mg/kg and 0.012 mg/kg (at an L/S of 2 and 10, respectively), the recycled construction aggregate released 0.265 mg/kg and 0.374 mg/kg, respectively. Therefore, when preparing mixtures to be tested by the percolation leaching method, the contaminant load is conditioned by the aggregate used.
Thus, when the data of the tank and column are overlapped in Figure 7, it is observed that, in mixtures of BBA and RMA, the reduction of pollution potential after the S/S treatment with cement was remarkable compared to the poor reduction observed in the mixture of BBA and NA after the S/S treatment with cement. The results are logical and coherent with expectations, as a comparison confirms that no contamination reduction occurs in materials that presented low Cr levels in the source material (NA, see Table 10). The effectiveness of the S/S treatment with cement for materials with high pollutant loads (BBA samples BA-10, BB-3 and BC-7, classified as hazardous materials) can be based on two observations: (1) The releases of the granular mixtures RMA-BA, RMA-BB and RMA-BC were much higher than those obtained for the monolithic RMA-BA-C, RMA-BB-C and RMA-BC-C; (2) The diffusion curves of the mixtures of BBA manufactured with aggregates with different pollutant loads presented similar diffusion curves and similar release levels after the S/S treatment.
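The tank-test bookkeeping and the time equivalence used to superimpose the percolation data can be sketched as follows; the dry mass in the example is an assumed value, chosen so that L/S = 2 L/kg is reached after roughly 78 h as quoted above.

```python
import numpy as np

def leaching_per_fraction(conc_ug_per_L: np.ndarray, eluate_volume_L: float,
                          area_m2: float) -> np.ndarray:
    """E_i* = C_i * V / (f * A), with f = 1000 ug/mg, giving mg/m2 per fraction."""
    return conc_ug_per_L * eluate_volume_L / (1000.0 * area_m2)

def cumulative_leaching(e_star: np.ndarray) -> np.ndarray:
    """epsilon_n = sum of E_i* over fractions i = 1..n (N = 8 replenishments)."""
    return np.cumsum(e_star)

def hours_to_reach_ls(ls_target_L_per_kg: float, dry_mass_kg: float,
                      flow_mL_per_min: float = 0.3) -> float:
    """Time needed in the column test to percolate L/S = target at a constant flow."""
    volume_mL = ls_target_L_per_kg * dry_mass_kg * 1000.0
    return volume_mL / flow_mL_per_min / 60.0

# With an assumed dry mass of 0.7 kg, L/S = 2 L/kg is reached after about 77.8 h (3.25 days)
print(hours_to_reach_ls(2.0, 0.7))
```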
Conclusions The conclusions of the present work can be summarized in three bullet points, corresponding to the three experimental stages developed: • The compliance test data revealed a contaminant potential in some of the samples analyzed. Thus, 37% of tested BBAs were classified as inert, 13% as non-hazardous, and 50% as hazardous, which confirms that they are a material unsuitable for application as an isolated aggregate in any application of civil engineering. The heavy metals released at higher levels were, in order of relevance, As, Hg, Cr, Ni, Cu, Se and Mo. Accordingly, the second stage consisted of analyzing the mixtures of the most contaminated BBAs (BA, BB and BC) with other construction aggregates, to evaluate whether the volume reduction of BBA in samples implies a reduction of the contaminant load. • The percolation test provided the cumulative curves of all elements for all mixtures. According to the registered data and again applying the legal limit values imposed by the Landfill Directive, it was observed that after mixture, even for the most hazardous BBA with the NA and the RMA, the contaminant loads of the aggregates were reduced. The third stage of the investigation was the evaluation of the leaching behavior of the S/S mixtures with the objective of proving the pollution load reduction of aggregates and BBA after their treatment with cement. • Data from the tank test performed on monolithic samples indicated that no differences were observed between the release diffusion curves of both mixture types, BBA/NA and BBA/RMA, despite their different chemical compositions. This demonstrates the effectiveness of the S/S treatment for materials that present a high pollution potential (as occurred with the BBAs in the study, as they were classified as hazardous wastes). According to the findings, secondary materials such as biomass bottom ash can be reused from an environmental point of view, as long as adequate management of these materials is performed by engineers, constructors or plant managers. Thus, the present work proposes a solution that implies an environmental benefit to those agents, in addition to checking that the samples (as unbound aggregates and as S/S mixtures) have the potential to be applied as construction materials during their second life cycle.
2017-05-27T13:07:58.227Z
2016-03-24T00:00:00.000
{ "year": 2016, "sha1": "ab33b91b878626ae0a22886d3799d18b0a0df3cc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/9/4/228/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f27ed327099eafc83276328c36de0be6360aa76d", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
736043
pes2o/s2orc
v3-fos-license
Effective Potential and Interdiffusion in Binary Ionic Mixtures We calculate interdiffusion coefficients in a two-component, weakly or strongly coupled ion plasma (gas or liquid, composed of two ion species immersed into a neutralizing electron background). We use an effective potential method proposed recently by Baalrud and Daligaut [PRL, 110, 235001, (2013)]. It allows us to extend the standard Chapman-Enskog procedure of calculating the interdiffusion coefficients to the case of strong Coulomb coupling. We compute binary diffusion coefficients for several ionic mixtures and fit them by convenient expressions in terms of the generalized Coulomb logarithm. These fits cover a wide range of plasma parameters spanning from weak to strong Coulomb couplings. They can be used to simulate diffusion of ions in ordinary stars as well as in white dwarfs and neutron stars. To describe ion diffusion in Coulomb plasmas one needs the expressions for the diffusion currents and diffusion coefficients. The first problem was addressed in our previous work [16]. The second problem is discussed here. There is comprehensive astrophysical literature devoted to diffusion of ions in dense stellar matter. The specific feature of this diffusion is the long-ranged Coulomb interaction between ions. In this respect the diffusion of ions has much in common with the diffusion of particles interacting via a Yukawa potential with sufficiently large screening length. The physics of diffusion has many aspects. One can study different types of diffusion coefficients. Most often considered are self-diffusion coefficients D ii and, somewhat less often, but more important, interdiffusion coefficients D ij , which enter the expressions for the diffusion currents. Here, i, j = 1, 2, . . . enumerate ion species in a multicomponent plasma (MCP). In a one-component plasma (OCP) of ions there is only one self-diffusion coefficient D 1 . Note that a self-diffusion coefficient D ii in MCP should not be confused with a selfdiffusion coefficient D 1 in OCP. One can further consider weak or strong Coulomb coupling, classical or quantum ion motion, the presence of a magnetic field, degenerate or non-degenerate electrons, etc. Diffusion is studied with different techniques such as the Chapman-Enskog approach, Green-Kubo relations, molecular dynamics (MD) simulations, and effective potential method, as well as other methods and their combinations. Some of these cases and methods are discussed below in more detail. We mainly focus on the inter-diffusion of ions in binary ionic mixtures (BIMs) which form either Boltzmann gas or a strongly coupled Coulomb liquid. The ions are assumed to be fully ionized and the electrons strongly degenerate (although these restrictions are not very important). The diffusion in a gas is a classical issue, well studied and described in well-known monographs [17,18]; the diffusion in liquid is less elaborated. Our aim will be to provide a unified treatment of the diffusion coefficients in ion gas and liquid and to present the results in a form convenient for using in numerical simulations of ion diffusion and related phenomena. In a BIM, there is one independent interdiffusion coefficient D 12 = D 21 and two self-diffusion coefficients D 11 and D 22 . Weak Coulomb coupling means that the ions constitute almost ideal gas. They are moving more or less freely and diffuse due to relatively weak Coulomb collisions with neighboring ions. 
The diffusion coefficients in this limit are usually expressed through a Coulomb logarithm Λ, which can be estimated as the logarithm of the large ratio of the maximum to minimum impact parameters of colliding ions. Calculations are done using the classical theory of diffusion in rarefied gases (see Refs. [17,18]). In astrophysical literature, this theory is often called the Chapman-Salpeter theory (meaning the application of the general theory to diffusion due to Coulomb interaction). Early astrophysical publications based on this theory are cited, for instance, in Ref. [19]. One can further consider the classical and quantum limits in ionion scattering (note that the motion of ions is always classical at weak coupling, quantum effects can emerge only in scattering events). In the classical limit, the minimum impact parameter in the expression for Λ is determined by the classical distance of the closest approach of colliding ions. In the quantum limit the minimum impact parameter is determined by the de Broglie wavelengths of ions. One can also consider the cases of non-degenerate and degenerate electrons. In the latter case the electrons produce much weaker screening of the Coulomb interaction (i.e., contribute much less to the maximum impact parameter) than in the former case. We will focus on the classical scattering limit in the presence of strongly degenerate electrons. When the coupling becomes stronger, the ratio of the maximum to minimum impact parameters decreases reducing the Coulomb logarithm. At intermediate couplings the Coulomb logarithm becomes Λ ∼ 1, and the diffusion coefficients D ∼ a 2 ω p , where a is a typical interion distance and ω p is the ion plasma frequency (see Sec. III). Characteristic ion-ion collision frequencies become comparable to ∼ ω p , and typical ion mean free paths are ∼ a. At strong coupling the ions are mostly confined (caged) in their local potential wells (within respective Wigner-Seitz cells) and constitute either a Coulomb liquid or Coulomb crystal. Thus, the ions mainly oscillate around (quasi-) equilibrium positions and diffuse through thermally activated jumps from one position to another (neighboring) one. The first experimental observations of the caging effect in relaxation of strongly coupled plasmas were made by Bannasch et al. [20]. Here one can distinguish the cases of classical (the temperature T > ∼ T p ) and quantum (T < ∼ T p ) ion motion (where T p =hω p /k B is the ion plasma temperature that is close to the Debye temperature, with k B being the Boltzmann constant). In the quantum case collective oscillations (plasmons) play an important role. As for electrons, one can study the cases of a rigid (incompressible) electron background or weakly polarizable background. The latter case is similar to the case of ions interacting via Yukawa potentials (with sufficiently large screening length). We consider the diffusion in Coulomb liquid neglecting quantum effects but taking into account both cases of rigid and slightly polarizable electron background. These cases give essentially the same results. A semianalytic consideration of weak coupling was developed by Fontaine and Michaud [21] who provided the expressions for D ij through a Coulomb logarithm and developed a computer code for calculating D ij . The authors considered the cases of quantum and classical minimum impact parameters in the Coulomb logarithm and introduced the resistance coefficients K ij (that determine the "friction forces" inversely proportional to D ij ). 
Their results were extended and used by Iben and MacDonald [5] (in the case of weak coupling) who simulated the evolution of 12 C -16 O white dwarfs. Paquette et al. [19] calculated binary diffusion coefficients at weak and moderate couplings using the Chapman-Enskog (Chapman-Spitzer) formalism with a statically screened Coulomb potential. The authors presented accurate analytic fits of collision integrals (tabulated spline coefficients). Their results are applicable as long as Coulomb coupling is not very strong. They discussed also earlier MD calculations of the self-diffusion coefficient at strong coupling. Pioneering MD calculations of the self-diffusion coefficient D 1 in OCP were performed in 1975 by Hansen et al. [22]. For the ion coupling parameter Γ > 1 (defined in Sec. II) they proposed the fit Hansen et al. [23] carried out MD calculations of D 12 , D 11 , and D 22 in BIMs in the regime of intermediate and strong couplings. They presented the approximate relation [their Eq. (23)] between the inter-and self-diffusion coefficients, They tabulated D 12 , D 11 , and D 22 for some coupling strengths and relative fractions of ions (x 1 and x 2 = 1 − x 1 ) in the 1 H -4 He mixture. Boercker and Pollock [24] performed MD and advanced kinetic theory calculations of the interdiffusion coefficients in BIMs for strong and weak couplings. The results were in good agreement with previous studies. Robbins et al. [25] considered self-diffusion in OCP using MD of Yukawa systems. Rosenfeld et al. [26] performed MD calculations of BIMs for wide ranges of m 2 /m 1 and Z 2 /Z 1 (ion mass and charge ratios) at strong, moderate and weak coupling in Coulomb plasmas and in Yukawa systems; they studied self-diffusion and inter-diffusion, and emphasized close relation between these systems and the systems of hard spheres. Ohta and Hamaguchi [27] did extensive MD calculations of the self-diffusion coefficient in OCP Yukawa systems. They used the Green-Kubo relation and the ordinary space diffusion formula to determine D 1 (and the results converge). They tabulated the computed values of D * 1 = D 1 /(ω p a 2 ) and approximated D * 1 by the expression where T * = T /T m , and T m is the melting temperature. They presented the fit parameters α, β, and γ as functions of the screening parameter in the Yukawa potential and obtained good agreement with the results for Coulomb systems in the cases of large screening lengths in the Yukawa potentials. Daligault and Murillo [28] performed MD calculations of the self-diffusion coefficient in OCP using a semiempirical potential and fitted the results by Eq. (??) with γ = 0.028, α = 0.00525, and β = 1.154. As the next step Daligault [29] analyzed liquid dynamics in a strongly coupled OCP and concluded that although dynamical behavior of ions (with long-range Coulomb interaction) at strong coupling changed from almost free particle motion to the caging regime, the universal laws or ordinary liquids with short-range interaction remained valid there. Hughto et al. [30] performed MD calculations of the self-diffusion coefficient of 22 Ne in a mixture of many ion species at strong coupling. They presented an original fit [their Eq. (8)] for D ii /D 1 (a combination of exponents and powers of Γ). In his next paper Daligault [31] performed MD simulations of self-diffusion in OCP and BIMs at strong coupling and fitted the results by [his Eq. (4)] As the next step Khrapak [33] considered the selfdiffusion coefficient in OCP. 
He used the standard Chapman-Spitzer theory at weak coupling and results of MD calculations by different authors at strong coupling. Based on those results he suggested a simple and convenient analytic approximation, which reproduced the cases of weak and strong couplings, by introducing a generalized Coulomb logarithm Λ eff . Finally, quite recently Baalrud and Daligault [34] put forward the idea that the cases of weak and strong coupling can be described within the same formalism of the effective binary interaction potential and traditional Chapman-Enskog theory (even at strong Coulomb coupling!). They constructed some examples of the effective potential inferred from radial distribution functions of ions g(r); these functions were computed via the hypernetted chain (HNC) approach. The effective potential allows one not only to account for the screening effects (this can be done by employing the screened Coulomb potential), but also take into account even strong correlations between the ions. This method treats the screening and correlation effects self-consistently; no "external" screening lengths are involved. The authors compared the self-diffusion coefficients in OCP calculated by different methods (their Fig. 2) and emphasized the importance of expressing the diffusion coefficients through generalized Coulomb logarithms. We will follow this approach extending it to BIMs. For the completeness of our consideration let us mention some others methods which have also been used to calculate diffusion coefficients in simulations of some phenomena in dense stars. Bildsten and Hall [7] proposed to employ the selfdiffusion coefficient D 1 to study 22 Ne settling in white dwarfs (at strong coupling). They tried two forms of D 1 . First, they took D 1 using the Stokes-Einstein relation for a particle of radius a p (taken to be the radius of the ion sphere for 22 Ne) moving in a fluid with viscosity η obtained from fits to the results of MD simulations. This method is suitable for inter-diffusion of trace ions of one species in BIMs. Second, the authors used D 1 obtained in Ref. [22]. They found that the values of D 1 estimated in these two ways were close and led to the same results. Deloye and Bildsten [8] compared the same two different forms of self-diffusion coefficients at strong coupling to study the 22 Ne settling in white dwarfs. In addition, they took into account computational uncertainties of η and obtained that these uncertainties did not affect noticeably D 1 . They suggested using D 1 taken from Ref. [22] in modeling diffusion processes. Peng et al. [35] simulated sedimentation and x-ray bursts in neutron stars. In their Appendix they described the resistance coefficients and associated diffusion coefficients. They proposed piece-like interpolation of weak coupling and strong coupling cases. They considered weak coupling following Fontaine and Michaud [36] and strong coupling following Ref. [22]. Although we do not study diffusion in Coulomb crystals let us mention that the problem was investigated by Hughto et al. [37] using MD with the natural result that this diffusion is strongly suppressed in comparison with that in Coulomb liquid. It is also worth to mention some papers devoted to diffusion in magnetized Coulomb plasmas. For instance, Bernu [38] calculated the self-diffusion coefficient in OCP with a constant uniform magnetic field B. Much later Ranganathan et al. [39] repeated MD calculations of selfdiffusion in OCP in a magnetic field. 
They obtained two self-diffusion coefficients, D and D ⊥ , along and across B. Both coefficients decrease with increasing B, and D ⊥ < D . II. HNC CALCULATION OF EFFECTIVE POTENTIAL Consider a classical (quantum effects neglected) nonmagnetized binary ionic mixture (BIM), which consists of two ion species and neutralizing rigid electron background [40]. An assumption of the rigid electron background allows us to factorize out the electrons while calculating inter-ionic diffusion [16]. Let n j , A j , and Z j be, respectively, the number density, mass, and charge numbers of ion species j = 1 and 2. For certainty, we set Z 1 < Z 2 . Let n = n 1 + n 2 denote the overall ion number density and x j = n j /n the fractional number of ions j (with x 1 + x 2 = 1). Then we can define the mean value f of any quantity f j in a BIM as f = x 1 f 1 + x 2 f 2 . In the following (unless the contrary is indicated) lengths are measured in the units of the ion-sphere radius, and all potentials in units of k B T /e (e being the elementary charge). A state of the BIM is defined by ion charge and mass numbers and by two dimensionless parameters, the fractional number x ≡ x 1 of ions 1, and the Coulomb cou-pling parameter Γ 0 (see Refs. [23,40]), We can also introduce the Coulomb coupling parameter for each ion species (see, e.g., Ref. [41]), where a e = ( 3/4πn e ) 1/3 is the electron-sphere radius, a j = a e Z 1/3 j is the ion sphere radius of species j, and n e = Z 1 n 1 + Z 2 n 2 = Zn is the electron number density. Furthermore, it is convenient to introduce the mean ion coupling parameter Γ = x 1 Γ 1 + x 2 Γ 2 , which can be expressed as and which reduces to Γ = Γ 0 Z 2 in the case of OCP. Let g ij (r), h ij (r), and c ij (r) (i, j = 1, 2) be the radial distribution functions (RDFs), the total and direct correlation functions, respectively (as detailed, e.g., in Ref. [42]). All these functions are symmetric [i.e. g ij (r) = g ji (r)], and h ij (r) = g ij (r) − 1. The effective potential Φ(r) in OCP is introduced by the relation g(r) = exp [−Φ(r)] [34,42]. The extension of this relation to the BIM case is straightforward, One primarily needs Φ 12 (r) for calculating the interdiffusion coefficient. Generally, all these functions cannot be calculated analytically. We calculate them by the HNC method, which is known to be sufficiently accurate (as detailed in Sec. IV) and relatively simple (e.g., Refs. [40,43,44]). Let us outline this method to simplify the reading of this paper. It consists in solving together the equations of two types, the Ornstein-Zernike equations relating direct and total correlation functions and the HNC closure relations. Since the equations are used in Fourier space, we define the dimensionless Fourier transform aŝ (wave number k is measured in units of 1/a ), and its inverse as Then the Ornstein-Zernike relations are readily written as [40],ĥ and the HNC closure is (13) φ ij (r) being the bare Coulomb interaction, Equations (??) and (??) form a closed set of six equations for h ij and c ij , but they cannot be solved directly due to the long-range nature of the Coulomb potential. For OCP this problem was circumvented by Springer et al. [43] and Ng [44] by introducing short-ranged potentials and correlation functions. A similar method was used by Hansen et al. [40] for BIMs. Let us outline this method here for the sake of completeness. 
In our case the total correlation functions h_ij(r) are short-ranged, whereas the direct correlation functions have the long-range asymptote c_ij(r) → −φ_ij(r) as r → ∞ [40,43,44]. One therefore introduces long-range (l) functions φ^(l)_ij(r) with the same asymptotic behavior and defines the short-range (s) correlation functions and potentials by subtracting them. The long-range functions φ^(l)_ij(r) have to satisfy two conditions: (1) possess the same asymptotes as φ_ij(r) at r → ∞, and (2) be regular at r = 0. Otherwise they are arbitrary. Following Ng [44], we choose such a form with α = 1.1; its Fourier transform is known analytically. We then rewrite the Ornstein-Zernike and closure equations in terms of the short-ranged correlation functions and potentials; since the long-range parts φ̂^(l)_ij(k) enter explicitly, they can be handled analytically once and for all. We will not write the resulting formulas here because they are inconveniently large and their derivation is straightforward. The points k = 0 and r = 0 require special consideration because these values cannot be substituted directly into the equations owing to singularities in the potentials. Numerical calculations were performed on a mesh of N_p = 2049 points running from 0 to r_max and from 0 to k_max; r_max was taken to be 80 (other values of N_p and r_max were also tried to check the stability of the numerical procedure); k_max was computed from the standard relation between the conjugate meshes. Fourier integrals were discretized on the mesh using Simpson's rule (intermediate points were calculated via cubic spline interpolation) and processed by means of an appropriate fast Fourier transform. The convergence criterion for the iterative process was imposed on successive iterates of g_22 (q being the iteration number), because g_22 converges more slowly than g_12 or g_11. After the computations had been completed, the accuracy of our results was checked by comparing the excess (Coulomb) potential energy with the results of Ref. [40]. The agreement was found to be quite satisfactory (energies were reproduced to five or six significant digits). Examples of the HNC results for a mixture of ¹H and ¹²C are shown in the accompanying figure.

III. DIFFUSION COEFFICIENTS

The standard Chapman-Enskog procedure gives the leading-order approximation to the interdiffusion coefficient D_12 in a binary mixture [17,18] (here in ordinary CGS units), expressed through the reduced mass μ = m_1 m_2/(m_1 + m_2) of the colliding ions and a collision integral Ω defined below. The second-order approximation to D_12 will be outlined in the next section. Let us introduce the "hydrodynamic" plasma frequency ω_p for a mixture (e.g., Ref. [23]), m_0 being the atomic mass unit, and express the interdiffusion coefficient in units of ω_p a² through a dimensionless collision integral, D*_12. Dimensionless collision integrals are defined in the standard way (see, e.g., Refs. [19] and [45]) through the classical scattering angle χ_12, the impact parameter b, the interaction potential φ_12 between particles 1 and 2, and the dimensionless relative velocity u (at infinity; in units of √(2k_B T/μ)); r^min_12 is the distance of closest approach [i.e., the maximum root of the denominator in the scattering-angle integrand]. We have performed such calculations of the interdiffusion coefficients for ¹H–⁴He, ¹H–¹²C, ⁴He–¹²C, ¹²C–¹⁶O, and ¹⁶O–⁷⁹Se mixtures for a variety of values of Γ_0 and x_1. We could easily have considered other BIMs if necessary. The easiest way to present these data is to fit the effective Coulomb logarithm by an analytic expression. We have calculated D*_12 and then extracted Λ_eff from it; by construction, Λ_eff coincides with the weak-coupling Coulomb logarithm Λ^(WC) in the weak-coupling limit. Examples of Λ_eff for the ¹H–¹²C mixture are presented in Fig. 3.
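To make the collision-integral step concrete, the sketch below evaluates the classical scattering angle χ_12(b, u) for a tabulated effective potential, in the reduced units used here (lengths in a, potentials in k_B T, relative speed u in units of √(2k_B T/μ)). The momentum-transfer weighting (1 − cos χ) and the Maxwellian average over u would then be integrated numerically over b and u to obtain Ω and hence Λ_eff. The grid sizes and tolerances are illustrative, and the handling of the turning-point singularity is deliberately crude.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def deflection_angle(b, u, Phi, r_max=80.0):
    """Classical scattering angle chi(b, u) for an effective pair potential.

    Phi : callable returning the dimensionless potential, accepting arrays
          (e.g. a spline of -ln g_12(r) from the HNC step); it must be
          defined (and ~0) for arbitrarily large r.
    b   : impact parameter (units of a); u : relative speed (units of sqrt(2kT/mu)).
    """
    def f(r):
        # radial factor 1 - b^2/r^2 - Phi(r)/u^2; its largest zero is r_min
        return 1.0 - (b / r) ** 2 - Phi(r) / u ** 2

    # bracket the distance of closest approach (largest sign change of f)
    grid = np.linspace(1e-3, r_max, 4000)
    vals = f(grid)
    sign_change = np.where((vals[:-1] < 0.0) & (vals[1:] >= 0.0))[0]
    if len(sign_change) == 0:
        return 0.0                      # fallback for the sketch: no turning point resolved
    i = sign_change[-1]
    r_min = brentq(f, grid[i], grid[i + 1])

    # chi = pi - 2b * int_{r_min}^inf dr / (r^2 sqrt(f(r)));  substitute x = r_min / r
    def integrand(x):
        return 1.0 / np.sqrt(max(f(r_min / x), 1e-14))
    integral, _ = quad(integrand, 0.0, 1.0, limit=200)
    return np.pi - 2.0 * b * integral / r_min
```

A quick sanity check is to pass a pure Coulomb potential Φ(r) = Γ_0 Z_1 Z_2 / r, for which χ reduces to the Rutherford deflection angle; with χ_12 in hand, the Ω integrals and hence Λ_eff follow by numerical quadrature over b and u.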
Fitting Λ_eff instead of D*_12 is more convenient because Λ_eff is expected to depend relatively weakly on the plasma parameters (particularly on the relative number density x_1). We propose a fit of the form Λ_eff(Γ_0, x_1) = ln(1 + …), which contains five parameters p_1, …, p_5. These parameters are presented in Table I along with the root-mean-square (rms) relative deviation, δ_rms, and the maximum relative fit error, δ_max. The mesh points have been selected differently for each BIM (Table II). For each BIM, the mesh points have been distributed over three ranges of Γ_0, labeled I, II, and III in Table II. These ranges refer to weak, intermediate, and strong Coulomb coupling, respectively (note that the actual strength of Coulomb coupling is determined by Γ, not by Γ_0). In range I the points have been taken equidistant (each point being larger than the previous one by Δ_+), whereas in ranges II and III they are logarithmically equidistant (each point is Δ_× times larger than the previous one).

IV. DISCUSSION

Before discussing the results let us make a few remarks. (1) There is no strict proof of the existence of an effective pair interaction potential which would entirely incorporate all many-body effects (correlations) between particles in a strongly coupled Coulomb plasma. Moreover, it seems highly unlikely that such a potential could exist in principle. Nevertheless, the effective potential method seems to be a promising tool for obtaining reasonably accurate solutions of some problems of strongly coupled dense plasmas (see the original work by Baalrud and Daligault [34]). (2) We use the standard HNC procedure to calculate the RDFs. Although some improved HNC techniques have been developed (e.g., Ref. [46]), we consider the accuracy of the standard HNC method sufficient for our purpose. As seen from Fig. 2 of Ref. [34], even using the "exact" RDFs computed via MD simulations changes the resulting Chapman-Enskog diffusion coefficient almost negligibly compared to using RDFs obtained via the standard HNC method. A further remark concerns the second-order corrections: they involve collision integrals of the same type as Ω, but with Φ_jj instead of Φ_12. We have performed calculations of the second-order corrections and found that they do not exceed 5% for the ¹H–¹²C mixture. For mixtures of more similar ions these corrections are even smaller. Consequently, we have neglected them, since the accuracy of the results is limited by the fit errors and by the effective potential method itself. Unfortunately, there are not many data available to compare our interdiffusion coefficients with. As seen from Fig. 4 and Table III, the diffusion coefficients D*_12 obtained via the effective potential are systematically larger than the MD results D*^MD_12 of Hansen et al. [23], and the difference increases with increasing Γ_0. This is exactly the same behavior as in the original work of Baalrud and Daligault [34] (their Fig. 2), who proposed the effective potential method. We have also compared our data to the MD data of Refs. [24,26] and obtained similar results. This seems to be a consequence of the approximate nature of the effective potential method itself. Since MD data are obtained from first principles, they should be considered superior to the HNC ones. Nevertheless, the disagreement between the MD and HNC results appears at strong Coulomb coupling, where quantum effects in ion motion become important. Unfortunately, the quantum effects are included neither in the MD nor in the HNC schemes we refer to.
In this situation, we see no way to check our results against truly exact solutions. Therefore, we propose to use the HNC results, which can be obtained quickly. We do not expect that the exact solution, if available, would lead to a very different diffusion of ions in liquid BIMs. Using our (effective potential) D_12, we have also tried to derive an approximate relation similar to the one discussed earlier. Our best attempt gives D_12 ≈ D^S_12, where D^S_12 is constructed from D_1 and D_2, the self-diffusion coefficients in "equivalent" OCPs taken at effective densities n_j chosen so that the Debye screening length in the "equivalent" OCPs is the same as in the BIM. This resembles the linear mixing rule (see, e.g., Ref. [40]), where the "equivalent" OCPs are taken in such a way that they retain the same electron number density as in the BIM [i.e., n_j = (⟨Z⟩/Z_j)n]. The underlying relation was initially obtained semiempirically for weakly coupled plasmas, but it is not greatly violated in the strong-coupling regime, despite the fact that the concept of the Debye ion screening length does not apply to strongly coupled plasmas. Examples of D^S_12 are presented in Table III. [Figure: Interdiffusion coefficient D*_12 for the ¹H–⁴He mixture (x_1 = 0.5) compared with the MD data of Ref. [23]; the weak, intermediate, and strong coupling regions are distinguishable (cf. Fig. 3); exact values of D*_12 are given in Table III.]

V. CONCLUSIONS

We have considered interdiffusion coefficients D_12 of ions (of two species, 1 and 2) in BIMs under the assumption that the ions constitute either a Boltzmann gas or a Coulomb liquid, and that the electrons form a nearly uniform background. The problem has been studied for a long time in a number of publications (Sec. I), but a unified practical procedure for calculating the many diffusion coefficients important for applications has been absent. The main obstacle has been the substantial computational difficulty of calculating D_12 by rigorous methods like MD in the regime of strong Coulomb coupling. We have used the method of an effective inter-ion potential suggested recently by Baalrud and Daligault [34]. They proposed to determine the effective potential by a reasonably simple HNC scheme and to use this potential to evaluate the diffusion coefficient by the standard Chapman-Enskog method. The latter method is known to be strictly valid for rarefied, weakly coupled plasmas, whereas Baalrud and Daligault suggested applying it in both regimes (gas and liquid). They demonstrated that the method is reasonably accurate for calculating the self-diffusion coefficient of ions in the OCP. We have extended their consideration to BIMs and shown that the method remains sufficiently accurate for calculating interdiffusion coefficients in BIMs. The combination of two well-elaborated schemes (the HNC scheme for finding the effective potential and the Chapman-Enskog scheme for evaluating kinetic coefficients) makes this method feasible for determining many interdiffusion coefficients of practical importance in BIMs over wide ranges of temperatures and densities. To demonstrate the efficiency of this method we have calculated D_12 for five BIMs (¹H–⁴He, ¹H–¹²C, ⁴He–¹²C, ¹²C–¹⁶O, ¹⁶O–⁷⁹Se). In analogy with the results of Ref. [33], the diffusion coefficients D_12 have been expressed through a generalized Coulomb logarithm Λ_eff. We have approximated all calculated values of Λ_eff by a unified fit formula which contains five fit parameters for each BIM (listed in Table I). In this way we have obtained a unified description of the interdiffusion coefficients for these BIMs.
We may easily consider other BIMs if necessary. Let us stress once more that in strongly coupled plasma the employed effective potential approach [34] is phenomenological. We expect that our results may be less accurate in this limit than in the limits of weak and intermediate Coulomb coupling. However, when the temperature decreases to the melting temperature T_m, quantum effects in ion motion can become important for various properties of the matter (e.g., Ref. [41]). In particular, they can affect diffusion, and this effect has not been studied at all, to the best of our knowledge. In this situation (the quantum effects being neglected anyway) our approach seems reasonable (although the incorporation of quantum effects would be desirable). Although we have not focused on self-diffusion coefficients in BIMs, we remark that they are most probably calculated by the effective potential method less accurately than the self-diffusion coefficients in the OCP [34]. The nature of this phenomenon is not entirely clear. It may be because the calculation of the self-diffusion coefficient D_ii for one component in a BIM requires not only Φ_ii but also Φ_ij, whereas, according to Sec. IV, the computation of the interdiffusion coefficient D_ij primarily requires only Φ_ij. This problem remains to be solved, along with the basic problem of why the effective potential is reasonably successful in the regime of strong coupling. Our results (combined with those of Ref. [16]) can be used to study various diffusion processes of ions in the crust of neutron stars and in the cores of white dwarfs (e.g., Refs. [1–10]) as well as in dense Coulomb plasmas of giant and supergiant stars and giant planets. Such diffusion processes can affect the thermodynamics and kinetics of dense matter, the thermal and chemical evolution of these stars, and their vibrational properties (seismology). The diffusion properties of Coulomb plasmas are also important for dusty plasmas, inertial confinement fusion, etc. (Sec. I). Numerically, our diffusion coefficients are in reasonable agreement with those obtained by other authors using different techniques (Sec. I). The main advantage of our results is their simplicity, uniformity, and convenient approximate expressions. Another important advantage is that the effective potential method can be easily generalized to calculate other kinetic properties of strongly coupled Coulomb plasmas, for instance, the diffusion and thermal diffusion coefficients in multicomponent ion mixtures, which are needed for applications but are hardly considered in the literature. However, we should warn the reader once more that the method of an effective potential at strong Coulomb coupling is phenomenological in its essence. It would be important to justify this method and understand the conditions under which it is most accurate. It would be even more important to study diffusion in strongly coupled Coulomb plasmas taking into account quantum effects in ion motion. However, all these difficult issues seem to be beyond the scope of the present investigation. Although we have considered a rigid (almost incompressible) electron background, the results can be easily generalized to the case of a compressible background produced by electrons of any degeneracy and relativity.
Unsupervised Deep Haar Scattering on Graphs The classification of high-dimensional data defined on graphs is particularly difficult when the graph geometry is unknown. We introduce a Haar scattering transform on graphs, which computes invariant signal descriptors. It is implemented with a deep cascade of additions, subtractions and absolute values, which iteratively compute orthogonal Haar wavelet transforms. Multiscale neighborhoods of unknown graphs are estimated by minimizing an average total variation, with a pair matching algorithm of polynomial complexity. Supervised classification with dimension reduction is tested on data bases of scrambled images, and for signals sampled on unknown irregular grids on a sphere. Introduction The curse of high-dimensional learning results from the huge space volume. A uniform sampling requires a number of examples which increases exponentially with the dimension. Scattering transforms reduce the space volume by iteratively applying contractive operators, with a deep convolution network architecture [2,5]. These contractions compute invariants and progressively decrease the space dimension to circumvent the curse of dimensionality. We introduce a deep scattering architecture implemented with a standard multirate filter bank, where learning is introduced with permutation operators. It rotates the space with orthogonal wavelet transforms and implements directional contractions with modulus operators. It provides a simple deep convolution network model, whose properties can be analyzed mathematically, because filter coefficients do not need to be modified. Section 2 begins by introducing scattering transforms with a non-linear multirate filter bank. It computes wavelet transform convolutions and modulus non-linearities. Section 3 describes adapted scattering transforms, which optimize contraction directions with permutations. For Haar wavelets, it amounts to a pair matching problem, as explained in Section 4, which yields hierachical invariants over multiscale groups. The scattering transform is then calculated with a cascade of additions and subtractions over pairs of points. By adapting contractions with simple operators, a goal of this paper is to highlight important principles which govern the structure of deep convolution networks, and relate them to standard signal processing tools. Contractions are potentially dangerous because they can strongly reduce distance along certain directions and hence the discriminability of elements in different classes. Supervised data provides information to evaluate distances across classes, and thus adapt contractions to avoid reducing them too much. The authors' names are in alphabetical order. This work was supported by the ERC grant InvariantClass 320959. Unsupervised learning can only implement weaker contraction criteria on training samples. We show that such optimizations amounts to computing sparse signal representations, similarly to sparse auto-encoders [4]. These results are illustrated by numerical experiments on modified MNIST digit images in Section 5. It shows that unsupervised scattering networks can learn translation and rotation invariant descriptors, with no prior information. Scattering Transforms A scattering transform is computed with a cascade of wavelet transforms and modulus non-linearities [2]. We reintroduce this transformation as a product of multirate filter bank operators, which is closer to deep convolution networks [5], and allows to generalize it for learning. 
Modulus Filter-Bank A multirate filter bank computes convolutions of x ∈ R d with a low-pass filter h and a high-pass filter g, and subsamples the output by a factor 2: To avoid boundary issues, we consider x as a d-periodic signal, where d is a power of 2. The low-pass filter h is real but g may be complex valued. Both filters have a finite impulse response of size K. We write W x = (Hx, Gx) the resulting multirate filtering. In the following, we shall impose that A fast wavelet transform iteratively applies W to the low-pass output Hx, as illustrated in Figure 1(a). The resulting orthogonal wavelet transform at the scale 2 J is The first component is the averaged signal where φ J is a low-pass filter of support K2 J . The wavelet coefficients are where ψ j is a dilated band-pass filter whose support is of size K2 j . Complex nearly analytic wavelet transforms are implemented with a nearly analytic complex filters g. It however requires to slightly modify the filter bank by first extending the dimension of x from d to 2d with a linear interpolation [6]. Real orthogonal wavelet transforms are implemented with real conjugate mirror filters (h, g) which satisfy The high pass filter is g(n) = (−1) n h(1−n), and n g(n) = 0. One can prove that W = (H, G) is then an orthogonal operator in R d and that Haar filters are simple examples defined by h(n) = 2 −1/2 (δ(n) + δ(n − 1)) and g(n) = 2 −1/2 (δ(n) − δ(n − 1)). The resulting Haar scaling filter is φ J = 2 −J/2 1 [0,2 J −1] and the Haar wavelet is A modulus filter bank inserts a contraction with a modulus operator applied to the high frequency component Gx of x: |G|x(n) = |x g(2n)| . Suppressing the phase or the sign performs a non-linear demodulation which shifts part of the signal energy towards the lower frequencies. This modulus is not applied to Hx which is already a low frequency signal. With an abuse of notation, we denote by |x| the vector obtained by applying the entry-wise modulus operator to a vector x ∈ R d , that is, |x| = {|x(n)|} 1≤n≤d , and we write |W |x = (Hx, |G|x). If x is real positive, which is the case in a scattering cascade, then Hx is positive real so Hx = |Hx|. It is thus as if the modulus was also applied to Hx. A modulus is contractive in the sense that ||a| − |b|| ≤ |a − b|, for any (a, b) and it preserves the norm |W |x = x . It is important to realize that a modulus strongly contracts the signal space. Indeed, Gx is an oscillatory signal whose sign or phase changes rapidly. For real signals, a modulus reduces by 2 the range of variation of its coefficients. Scattering as Iterated Wavelet Transforms The scattering transform of x ∈ R d is defined by a product of modulus filtering operators: Since |W | is contractive and preserves the norm, it results that S J = |W | J is also contractive and preserves the norm: The operator product (2) iteratively computes This tree subdecomposition is illustrated in Figure 1(b). Each S J,k can thus be written a product of 2 J operators equal to H or |G|. Since h and g have a size K, the scattering operator S J has a support of If we supress the modulus nonlineary and replace |G| by G then S J has totally different properties. It is then a linear wavelet packet orthogonal transform introduced in [7]. A scattering operator is implemented as a product of multirate filtering modulus, but its properties depend upon the underlying wavelet transform. 
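For the Haar filters defined above, one layer of the cascade reduces to pairwise sums and absolute pairwise differences, so the full transform S_J can be sketched in a few lines. The sketch below assumes the natural pairing of consecutive coefficients and a signal length that is a power of two; it illustrates the cascade itself rather than any particular implementation.

```python
import numpy as np

def haar_scattering(x, J):
    """Haar scattering of a 1-D signal x (length d = 2**p, p >= J).

    Each layer maps every vector of coefficients to the pairwise sums
    (low-pass H) and absolute pairwise differences (|G|), both scaled by
    2**-0.5, following the Haar filters h and g defined above.
    Returns the list of 2**J coefficient vectors making up S_J x.
    """
    layers = [np.asarray(x, dtype=float)]
    for _ in range(J):
        new_layers = []
        for v in layers:
            even, odd = v[0::2], v[1::2]
            new_layers.append((even + odd) / np.sqrt(2.0))        # H v
            new_layers.append(np.abs(even - odd) / np.sqrt(2.0))  # |G| v
        layers = new_layers
    return layers

# Example: a signal of length 8 scattered to scale 2**3 yields 8 scalars,
# and their squared norm equals ||x||^2 (the transform preserves the norm).
x = np.array([1.0, 3.0, 2.0, 2.0, 0.0, 1.0, 5.0, 4.0])
S = haar_scattering(x, J=3)
assert np.isclose(sum(float(v @ v) for v in S), float(x @ x))
```

Because each pair (a, b) is mapped to ((a + b)/√2, |a − b|/√2), the squared norm is preserved at every layer, which the final assertion checks.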
A wavelet transform modulus at the scale 2 l is defined by The modulus is applied only to the output of G: The modulus suppresses the wavelet tranform phase and hence computes an envelop which is approximatively invariant to translations smaller than 2 j . A scattering transform S J can be factorized as a product of wavelet transform modulus: Each |G|H j l is calculated by the wavelet transform modulus |W j l |. Figure 1(b) illustrates the factorization of the scattering operator S J into the modulus of wavelet transforms |W j |, for J = 3. The index m(k) is the number of times that |G| appears in the calculation of S J,k . There are J m vectors S J,k of order m. Since the contraction is produced by |G|, the index m can be interpreted as a contraction factor. It plays an essential role to analyze the properties of these deep network coefficients. For a random vector X in R d , we define σ 2 m = k,m(k)=m Var(S J,k X) as the total variance of all coefficients of order m. Each modulus is applied to the output of a filter G which produces a zero-mean random vector. It thus strongly reduces the variance, and σ 2 m typically decreases exponentially with m. Table 1 gives its value for a Gaussian white random vector, which is an average measure of volume reduction in the signal space. One can show that σ 2 m ≈ (1 − 2 π ) m · J m , which is verified numericaly in Table 1. Beyond m = 4 the variance of coefficients becomes negligible and the corresponding scattering coefficients thus carry little information. Eliminating coefficients of order m > 4 reduces the number of scattering coefficients from d to about (log 4 2 d)/24. Adapted Scattering Scattering transforms are computed by deep convolution network, where the network weights are specified by the filters h and g. Learning network weights in convolution networks can thus be interpreted as a filter adaptation. As long as h remains an averaging filter and g is a high pass filter, modifying the filter values has marginal effects. Cascading such subsampled filtering still defines a wavelet transform. The number of vanishing moments or the wavelet regularity may be modified [1], but it does not modify much the wavelet coefficient properties. Major modifications of wavelet coefficients are however obtained by modifying the orbit (ordering of coefficients) along which convolutions are performed. For example, rotation invariant scattering transforms are calculated by also computing convolutions along rotation parameters in a deep scattering network [8]. This suggests to adapt wavelet transforms with permutations of the network variables before applying convolution operators. Let π be a permutation of {1, ..., d}. It acts on x ∈ R d as a linear orthogonal operator, written z = πx ∈ R d , with z(n) = x(π(n)). An adapted scattering inserts a permutation π j of {1, ..., d}, before each multirate modulus convolution: where π j makes a permutation of the d coordinates of S j−1 x before applying |W |. It results that Since each π j is a linear orthogonal operator, S J remains contractive and preserves the signal norm: Geometrically, W π j is a rotation of the signal space. It depends on π j , which thus modifies the "directions" in which the modulus contractions are acting. The group of permutations is a group of rotations among which we shall search for particular rotations which optimize the contraction directions for learning. Optimizing permutations are typically N P -hard problems. 
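In the Haar case, the adapted transform can be sketched by letting each layer act on the flat vector of d current coefficients with an arbitrary pairing (equivalently, a permutation π_j followed by pairing of consecutive entries). The pairings below are supplied by the caller; how to choose them is the subject of the next sections. This flat-vector formulation is a simplification for illustration and does not track the sub-band ordering of the filter-bank picture.

```python
import numpy as np

def adapted_haar_scattering(x, pairings):
    """Adapted Haar scattering S_J x.

    x        : 1-D signal of length d (a power of two).
    pairings : list of J pairings; pairings[j] is a list of d/2 index pairs
               covering {0, ..., d-1}, i.e. it encodes the permutation pi_{j+1}.
    Each layer maps the d current coefficients to d/2 pairwise sums and
    d/2 absolute pairwise differences (both divided by sqrt(2)), so the
    Euclidean norm is preserved at every layer.
    """
    s = np.asarray(x, dtype=float)
    for pairs in pairings:
        nxt = np.empty_like(s)
        for m, (i, j) in enumerate(pairs):
            nxt[2 * m] = (s[i] + s[j]) / np.sqrt(2.0)             # H-type output
            nxt[2 * m + 1] = np.abs(s[i] - s[j]) / np.sqrt(2.0)   # |G|-type output
        s = nxt
    return s

# The canonical (unadapted) transform corresponds to pairing neighbours:
d = 8
canonical = [[(2 * m, 2 * m + 1) for m in range(d // 2)] for _ in range(3)]
```

Choosing good pairings is the hard part, as discussed next.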
However, the problem is not as bad as it seems because |W | computes convolutions with filters h and g of small support K. The output values thus depend on local ordering properties. For Haar filters, Section 4.1 shows that the ordering reduces to a pairing problem. A scattering S J operators is not invertible because the modulus looses a complex phase or a sign. In the real case, for almost all pairs of unitary operators (A, B) it has been proved [11] that the operator (|A|, |B|) is invertible on R d . If W is real, one can thus expect that (|W π 0 |, |W π 1 |) are invertible for "sufficiently different" permutations π 0 and π 0 . A necessary and sufficient condition is given on π 0 and π 1 for this result to hold for a Haar filtering. An adapted Haar scattering is thus a cascade of permutation invariant operators over matched pairs. We say that two permutations π 0 and π 1 are "interlacing" if there exists no strict subset Ω of {1, . . . , d} such that π 0 and π 1 are pairing elements within Ω. The following theorem derives a condition to recover a signal from 2 J vectors of invariant Haar scattering coefficients. Theorem 3.1. Suppose that x ∈ R d takes more than 2 different values. Proof. Property (2) is proved by applying (1) recursively to To prove (1), notice that if n 1 , n 2 , n 3 is a triplet where (n 1 , n 2 ) is a pair in π 0 and (n 1 , n 3 ) a pair in π 1 then the values x(n 1 ), x(n 2 ), x(n 3 ) are uniquely determined from (|W |π 0 x, |W |π 1 x), unless x(n 1 ) = x(n 2 ) and x(n 2 ) = x(n 3 ). The interlacing condition implies that π 1 pairs n 2 to an index n 4 which can not be n 3 or n 1 . Moreover, the four values of x(n 1 ), x(n 2 ), x(n 3 ), x(x 4 ) are specified unless x(n 4 ) = x(n 1 ) = x(n 2 ) = x(n 3 ). This interlacing argument can be used to extend to {1, . . . , d} the set of all indices n i for which x(n i ) is specified, unless x takes only two values. If that is not the case then x can be recovered, which proves the theorem. Unsupervised Learning The optimization of a Haar scattering metric is reduced to an optimization of pairing operators. Ordering problems are typically N P -hard, but not pairing problems. If C(p π ) is an additive cost of each pair C(p π ) = d/2 n=1 c(π(2n), π(2n + 1)) (6) then p π * = arg min pπ C(p π ) is computed with O(d 3 ) operations with the Blossom Algorithm of Edmonds [9]. Computations in this paper use the implementation in [10]. The next section explains how to construct such additive costs for unsupervised learning. Unsupervised Contraction Learning A contraction can reduce too much the distance between two points which do not belong to the same class, and thus strongly reduce their discriminability. In general, this can not be fully avoided with unsupervised data. However, one can optimize contractions by maximizing the average distance between training samples, so that this loss of discriminability becomes less likely. We show that it implies computing sparse signal representations. A scattering progressively contracts the scattering metric S j−1 x − S j−1 x , by applying |W |π j for 1 ≤ j ≤ J. We consider unlabeled examples as realizations of a random vector X ∈ R d , which is an unknown mixture of different classes. The operator |W |π j strongly contracts the signal space but it should reduce as little as possible the average Euclidean distance between realizations of S j−1 X, to avoid confusing elements of different unknown classes. We thus want to maximize the variance σ 2 (S j X) of S j X. 
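The pair-matching step itself can be prototyped with an off-the-shelf implementation of Edmonds' blossom algorithm, for instance the one in networkx; the sketch below is illustrative and makes no claim about the specific implementation [10] used for the experiments. It takes an arbitrary symmetric cost matrix c(i, j) and returns a minimum-cost perfect pairing; an l¹ sparsity-style cost estimated from training samples, in the spirit of the criterion derived next, is included as one possible choice.

```python
import numpy as np
import networkx as nx

def optimal_pairing(cost):
    """Minimum-cost perfect pairing of {0, ..., d-1} (d even).

    cost : (d, d) symmetric array, cost[i, j] = c(i, j) for pairing i with j.
    Uses Edmonds' blossom algorithm via networkx max_weight_matching on
    negated costs, which runs in polynomial time as discussed above.
    Returns a list of d/2 index pairs.
    """
    d = cost.shape[0]
    graph = nx.Graph()
    for i in range(d):
        for j in range(i + 1, d):
            graph.add_edge(i, j, weight=-float(cost[i, j]))   # negate: minimize cost
    matching = nx.max_weight_matching(graph, maxcardinality=True)
    return [tuple(sorted(edge)) for edge in matching]

def l1_pair_cost(S):
    """Example additive cost built from training samples (rows of S):
    c(i, j) is the summed absolute difference of coordinates i and j across
    samples, so it is small when the Haar difference of the pair is sparse."""
    d = S.shape[1]
    cost = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            cost[i, j] = np.sum(np.abs(S[:, i] - S[:, j]))
    return cost
```

The cost actually used for unsupervised contraction learning follows from the variance argument continued below.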
Since σ²(S_jX) = E(‖S_jX‖²) − ‖E(S_jX)‖² and E(‖S_jX‖²) = E(‖S_{j−1}X‖²), this is equivalent to finding a permutation π_j which minimizes ‖E(S_jX)‖², estimated from the training samples. The resulting cost is a mixed l¹ norm along realizations x_i and an l² norm across the scattering coordinates. This is a sparsity norm which is optimized so that Wπ_j S_{j−1}X has a sparse representation. Sparsity appears in this optimization because the distance between two vectors having many zero coordinates is not much reduced by applying a modulus to the coordinates of these vectors. Indeed, ||a| − |b|| = |a − b| if a or b is zero. The mixed l² and l¹ norm can also be replaced by a simpler l¹ sparsity norm. For a Haar scattering transform, the sparsity norm is an additive cost on the pairing p_{π_j}, which is minimized with the Blossom algorithm in O(d³) operations. The unsupervised learning algorithm iteratively computes an optimized pairing p_{π_j} for j going from 1 to J. One can also impose constraints on the pairing to reduce computations and incorporate some prior information. An inner-node pairing in the filter bank tree imposes that π_j pairs coefficients within each vector S_{j,k}x, and performs the same pairing for all 0 ≤ k < 2^j. There is typically not a unique pairing which minimizes the unsupervised contraction cost. Consider for example a random vector X which is stationary with period d. Any translation of the pairing yields the same cost because of the stationarity. These pairings being equally valid, one can reduce the classification variance by using them all. We thus use a bagging algorithm which estimates several adapted scatterings from several training sets, and aggregates the resulting scattering vectors to compute a linear SVM classification.

Numerical Experiments

Unsupervised contraction learning does not differentiate subclasses and can thus mostly learn sources of variability which are common to most of the classes. Geometric variability such as translations, rotations or deformations are such examples. The MNIST digit recognition databases provide a simple framework to study the learning of these sources of variability. Learning of translations and deformations is evaluated in Section 5.1 on the original MNIST database, which has 60,000 training samples and 10,000 testing samples. A modified MNIST database with 3D digit rotations is studied in Section 5.2.

MNIST Digit Recognition

We consider a random permutation of the MNIST image pixels, illustrated in Figure 2a. Each image is thus considered as a non-ordered bag of pixels. These experiments test several aspects of the algorithm: the ability to recover spatial neighborhood information and the classification accuracy without location information. The permutation learning is first performed with inner-node pairing optimizations, which means that for each level j, the same pairing function is used to associate the coefficients of each S_{j,k}x for 0 ≤ k < 2^j. These pairings perform a multiscale estimation of relative spatial locations. We say that two coefficients S_{j,k}(n) and S_{j,k}(n′) are spatially connected if they are computed with operators whose supports of size 2^j are spatially connected in the original image domain. We only consider coefficients whose amplitudes are non-negligible and thus play a role in the classification. Over the first three levels 1 ≤ j ≤ 3, 100% of the pairs are connected, which shows that all pairings are spatially connected. For j = 4 and j = 5 the proportions of connected pairs are respectively 85% and 67%.
The connectivity ratio decreases with the scale because long-range correlations are weaker and spatially non-connected pairs may become more similar than spatially connected ones. MNIST images have d ≤ 2^10 pixels, so scattering operators can be computed at scales 2^J ≤ 2^10. The classification error decreases when J increases, and the best accuracy is obtained for J = 10, which corresponds to a maximally invariant representation. We compute N adapted scattering operators with unsupervised pairing over N different subsets of training samples. The N different scattering vectors S_Jx are aggregated and fed into a linear SVM. The energy of scattering coefficients decreases exponentially with m, as shown by Table 1 for a Gaussian process. Table 2b shows that the best classification results are obtained by keeping only coefficients of order m ≤ 4. Table 2c gives the classification error rate as a function of N. The performance increases slowly for N ≥ 10 and does not improve beyond N = 50, which is much smaller than 2^J. Almost all MNIST classification algorithms use prior information on the spatial location of pixels to build spatially localized descriptors, and deep convolution networks as well as translation-invariant scattering transforms further use prior information on translation invariance. State-of-the-art results, without making any modification of training vectors, are achieved by deep convolutional neural networks (0.53%, [5]) and by scattering networks with Gabor wavelets (0.43% [2]). This unsupervised learning uses no prior information on pixel location or on translation invariance. It reaches an error of 0.9% by optimizing inner-node pairings. If no constraint is imposed on the pairing algorithm, which may thus associate coefficients computed across different nodes in the filter bank tree, then the error increases to 1%. Indeed, for MNIST, intra-class variabilities are mostly due to translations and deformations. Appropriate invariants can thus be computed with inner-node pairings. Providing more flexibility increases the pairing variance, which explains the slight error increase. This flexibility can, however, produce useful invariants for sources of variability other than translations.

MNIST with 3D Rotations

To test the ability of the algorithm to build invariants to a different source of geometric variability, we use the 3D rotated MNIST database constructed in [3]. Digit '9' is removed from the data set as it is equivalent to the digit '6' after rotation. Each digit is projected on a 3D sphere sampled over d = 4096 points and randomly rotated on the sphere, with a rotation variance σ² = 0.2 [3]. Translations in the plane are now replaced by rotations over the sphere, on which one cannot define convolution operators after sampling. The classification algorithms in [3] introduce an elegant solution which replaces convolution operators (diagonal in Fourier) by operators which are diagonal over the Laplacian eigenvectors of a graph. These algorithms use the 3D neighborhood of points on the sphere to define the graph connectivity. Table 2 gives the results reported in [3], with 19% error for a nearest neighbor algorithm, 5.6% for a two-layer fully connected neural network, and 6% for the two best locally connected network algorithms. The adapted Haar scattering algorithm yields a smaller error of 2.2% when computed at the maximum scale 2^J = d, by aggregating N = 10 scattering transforms, up to the order m = 4.
The Haar scattering thus reduces the error by an important factor although it uses no prior information on the 3D connectivity of points, which illustrates the learning abilities of this deep network structure. Table 2: Percentage of errors on MNIST 3D rotation data set [3], with a nearest neighbor classifier, a fully connected two layer neural network, a locally connected network, a spectral network [3], and an adapted Haar scattering.
The assessment of the spondyloarthritis international society concept and criteria for the classification of axial spondyloarthritis and peripheral spondyloarthritis: A critical appraisal for the pediatric rheumatologist This review refers to the origin and current state of the assessment of the SpondyloArthritis International Society (ASAS) criteria for the classification of axial and peripheral spondyloarthritis (SpA) and the possible implications in the pediatric population. The ASAS criteria evolved from the idea that the earlier the recognition of patients with ankylosing spondylitis, the better the efficacy of tumor necrosis factor blockers. Strategies included the development of new concepts, definitions, and techniques for the study of clinical signs and symptoms. Of relevance, the new definition of inflammatory back pain (IBP) and the introduction of sacroiliitis by magnetic resonance imaging represented the most important advance in the early identification of AS in the “pre-radiographic stage” of the disease. AS is considered in this paper as a disease continuum with symptoms depending on age at onset. The application of those specific strategies in children and adolescents with SpA seems limited because the most important manifestation in the early stage of disease is not IBP, but peripheral arthritis and enthesitis. In this instance, the logical approach to juvenile onset SpA according to ASAS criteria should not be through the axial criteria but rather the peripheral set of criteria. radiographic changes of the sacroiliac joints (Table 1) [3]. The course of the disease varies from one individual to another. Disease activity may show a fluctuating pattern and the structural damage, particularly late spinal changes such as syndesmophyte formation and the notorious "bamboo spine", that illustrates the relatively slow progression. With such variation in disease course, the long-term consequences of AS, particularly health related quality of life (HRQoL) and functioning, can differ among the individuals that suffer from the disease [4][5][6]. Finally, some data suggest that the mortality of AS is increased when compared to that of the general population [7,8]. The term "ankylosing spondylitis" means "stiff vertebrae" (from the Greek ankylos and spondylos). Alternative names, most in disuse, include seronegative polyarthritis, seronegative spondarthritis, seronegative spondyloarthritis, seronegative spondylarthropathies, spondyloarthritides. The stereotypic AS patient has a long-standing and severe disease characterized by spinal deformity and vertebral ankylosis for which no effective therapy has been available. Yet, it is clear that not all patients with AS fit into that stereotype. Moreover, the introduction of tumor necrosis factor (TNF) blockers has resulted in the control of signs and symptoms related to inflammation and the improvement of most outcome measures, including HRQoL, particularly in patients with a short disease duration. Within the last ten years, clinicians have tried to recognize and diagnose AS in the early inflammatory stage of the process that ultimately leads to bone proliferation [9,10]. The purpose of this effort was to treat AS in the early pre-radiographic stage of the disease with TNF-blockers and prevent its long-term consequences. 
These strategies were utilized to identify the early inflammatory stage of the disease in patients by focusing on the definition, classification, and diagnostic criteria of IBP and SpA, including the use of magnetic resonance imaging (MRI) for the detection of sacroiliac and vertebral inflammation, as well as the use of HLA-B27 testing. Ultimately, the Assessment of SpondyloArthritis International Society (ASAS) developed concept and classification criteria for axial SpA [11] and then for peripheral SpA [12]. At the same time, we learned about the efficacy of TNF blockers in controlling inflammation, but probably not in suppressing bone proliferation. In this paper, the characteristics and development of the ASAS criteria for the classification of axial and peripheral SpA are reviewed. These classification criteria and their possible role in the classification of children and adolescents with SpA are examined by defining and comparing the juvenile and adult criteria as they apply to pre-radiographic AS, AS, u-SpA, and ASAS axial and peripheral SpA. The relationship between adult and juvenile-onset AS and SpA Although the most common age at onset of AS is around 25 years, a variable percentage of patients start Table 1 Modified New York criteria for ankylosing spondylitis ref. [3] A. Diagnosis* 1. Clinical criteria a) Low back pain and stiffness for more tan three months, which improves by exercise, but is not relieved by rest b) Limitation of motion of the lumbar spine in both the sagittal and frontal planes c) Limitation of chest expansion relative to normal values correlated for age and sex *The modified New York criteria for ankylosing spondylitis are mostly used for classification. All three clinical and the radiographic criteria refer exclusively to axial involvement, including the spinal, costovertebral, costosternal, and sacroiliac joints. The proportion of children and adolescents that fulfill those criteria before they reach the age 17 years is probably <15%. In such a case they usually have a combination of peripheral and axial symptoms. It is assumed that stated otherwise, publications on ankylosing spondylitis never refer to definite or probable disease, but to definite ankylosing spondylitis. complaining in childhood or adolescence. Despite the fact that juvenile and adult onset forms differ in their mode of presentation, both forms represent a disease continuum in which age-related factors might play a role in clinical expression. Compared with juvenile onset AS, patients with adult onset disease have a higher prevalence of axial symptoms and, in contrast, a much lower prevalence of peripheral enthesitis and arthritis in the initial years of the disease [13][14][15][16][17][18][19][20][21][22]. Inconsistent genetic and minor synovial histologic differences between adults and juvenile patients have been also described [23][24][25][26]. The consequences of AS seem more severe in juvenile patients [13,[15][16][17][19][20][21][22]. Despite the fact that there are no studies comparing other SpA in adult and childhood subjects, the information available up to now on SpArelated, u-SpA, and ReA does not reveal major differences between the two age-at-onset populations. No important differences in pathogenesis or therapeutics have been reported. Thus, the concept that juvenile and adult onset AS represent the same disease has important implications in the clinical and therapeutic field. 
Indeed, this concept does not consider the subgroup of PsA in younger children (mostly girls, with dactylitis, polyarthritis, and antinuclear antibodies). In keeping with the traditional idea of Moll and Wright about the SpA group [27], we conceptualize juvenile-onset SpA, including AS, as a group of HLA-B27-associated pediatric rheumatic diseases characterized by enthesitis and arthritis involving in most cases the lower extremities in the initial years and, in a variable proportion of cases, the sacroiliac and spinal joints some years later [28]. This concept differs from that of the enthesitis related arthritis (ERA) and PsA subgroups of juvenile idiopathic arthritis (JIA) classification of the International League for Associations of Rheumatology (ILAR) [29]. In that classification, the overlapping of diagnostic manifestations between two or more subgroups excludes definite diagnosis and puts any such case in the "undifferentiated arthritis" category. Figure 1 The modified schematic representation of Rudwaleit's scheme [49] on the transition of undifferentiated juvenile-onset spondyloarthritis (u-SpA) to ankylosing spondylitis (AS) in the context of axial and peripheral SpA. Note: In this model, we present the idea that axial SpA is mostly representative of adult-onset patients (upper panel) whereas the subset that could mostly be adapted to children and adolescents with juvenile-onset SpA is that of peripheral SpA (lower panel). Time to transition is an estimation based on scarce published reports. Note that during the interval between disease onset and the 5th year of symptoms, the most frequent and characteristic symptoms and signs in juvenile-onset SpA are peripheral arthritis and enthesitis and not inflammatory back pain (small fonts; IBP). In contrast, IBP, and not peripheral symptoms, (small fonts) is the most important characteristic in adult patients. Beyond the 5th year of disease, the proportion of patients with both juvenile and adult onset SpA fulfilling the modified New York criteria for AS [3] increases to reach a maximum around 10 years after onset. Imaging of the sacroiliac joints, particularly the early inflammatory stage by magnetic resonance imaging (MRI) is essential in the study of adult patients with IBP. This approach does not seem the logical in the study of juvenile-onset SpA since IBP occurs in less than 15% in the initial years of disease. While the demonstration of edema in the sacroiliac joints by MRI has therapeutic implications in adult-onset SpA -the earliest the treatment, the better the response, the indications for early treatment of children and adolescents with SpA with tumor necrosis factor (TNF) blockers could mostly rely on peripheral symptoms. Regarding the ostechondral proliferative stage of spinal involvement, it is only probable that in juvenile-onset patient this stage takes a long time to develop, perhaps similar to that seen in adults. The relationship between u-SpA and AS The rationale for the development of the ASAS axial and peripheral classification criteria was to facilitate the recognition and detection of patients with back pain at risk of AS. It is widely known that most patients with early AS present with undifferentiated manifestations, most characteristically with IBP, less frequently with peripheral arthritis and enthesitis, and rarely with extrarticular manifestations. 
Thus, the initial stage of AS corresponds to that of u-SpA, a group that accounts for an important proportion of SpA in the community [30,31], specialized clinics [32], and multiplex-case families [33,34]. The initial descriptions of u-SpA dated back to 1983 and 1984 [35,36] and consisted of patients fulfilling the Amor, et al. [37] as well as the European Spondylarthropathy Study Group (ESSG) [38] classification criteria for SpA, but not fulfilling criteria for AS or the specific diagnostic features of ReA, PsA, Crohn's disease, and ulcerative colitis. Besides IBP, peripheral arthritis and enthesitis have been prominent features at onset in patients with u-SpA [39,40]. In retrospect, most patients with u-SpA fulfill AS criteria within five to ten years after onset ( Figure 1) [41][42][43][44][45][46][47][48][49]. Peripheral u-SpA has a childhood counterpart, which is traceable to clinical descriptions embraced under terms such as "probable Still's disease" [50], type II (HLA-B27 + boys) oligoarticular juvenile rheumatoid arthritis (JRA) [51], atypical SpA [52], and most clearly to HLA-B27associated SpA and enthesopathy in children [53] as well as the seronegative enthesopathy and arthropathy (SEA) syndrome [54]. This progression of juvenile-onset u-SpA to AS starts in most cases with peripheral symptoms and follows a clinical course leading to AS within ten years of disease [55][56][57][58][59][60][61] (Figure 1). There are important variations in the proportion of adult-onset and juvenile-onset patients with u-SpA fulfilling AS criteria that occur throughout the disease courses in various studies ( Figure 2). The reasons for these variations are probably related to the characteristics of the population included in the study, patient entry and inclusion criteria, design of the study, as well as the type and periodicity of clinical assessments. Information on those who remain in the undifferentiated stage or on those evolving into other SpA (i.e.: PsA) as well as on those entering into sustained remission, is limited in these studies. Duration of disease (years) Adult patients Juvenile patients Figure 2 Percentage of adult and juvenile onset patients with undifferentiated SpA (u-SpA) progressing to ankylosing spondylitis (AS) throughout the course of the disease as reported in retrospective studies. Note: Variations are probably related to the characteristics of the population included in the study, entry criteria, and the type of assessments carried out. Interestingly, most references related to adult-onset patients include a moderately high proportion of patients with peripheral arthritis in combination or not with axial symptoms. Most studies consider AS the outcome measure, but some others referred to other parameters, for example radiographic sacroiliitis. Prospective data of patients with axial SpA according to ASAS criteria or IBP differ in some important aspects from retrospective studies. These differences are seen in the prospective German Spondyloarthritis Inception Cohort (GESPIC) [65], the Maastricht's early SpA clinic population (ESPAC) [66], as well as the Leeds [67], and French (Devenir des Spondylarthropathies Indifférenciées Récentes or DESIR) [68] IBP clinics. GESPIC [65] consisted of 462 patients with axial SpA, including 236 with AS of whom 50.4% had already developed AS within five years of symptoms. In these patients, male sex was predictive of radiographic sacroiliitis and >1 syndesmophyte and CRP ≤6 mg/L for >1 syndesmophyte and >1 bridging syndesmophyte. 
In ESPAC [66], 14 patients (21%) out of 68 with IBP had already developed AS within <2 years of symptoms, 24% had PsA and 15% each had IBD and uveitis. In the Leeds clinic data [67], 13 patients out of 40 (33%) with IBP had AS within two years and the others had PsA SpA (n = 3), ReA SpA (n = 6), IBD SpA (n = 1), and u-SpA (n = 17). The combination of severe sacroiliitis on MRI and HLA-B27 was highly predictive for AS. The DESIR [68] study included 708 adults with IBP of <3 years disease duration. Of this group, 184 (26%) had AS, 77% SpA, and 67% axial SpA. HLA-B27 positivity was associated with a younger age at the onset of IBP, less delay in diagnosis, lower frequency of psoriasis, and higher frequencies of sacroiliitis and spondylitis on imaging. The importance of such prospective studies is that up to one-third of patients with AS fulfill the modified New York criteria for AS within two years of symptoms and 50% by five years. While cohort studies include patients with IBP, those studies in retrospect include a significant number of patients with peripheral symptoms. Despite a trend for male sex, HLA-B27, and uveitis in patients with AS, most features of this peripheral AS disease do not differ from those of patients with axial SpA. Ultimately, these prospective risk factors include some factors already identified in retrospective studies and help define and illustrate the relationships between u-SpA, preradiographic AS, and axial and peripheral SpA. Although Zeidler, et al. [49] proposed four possible outcomes for u-SpA, only two of them are clearly found in the rheumatology clinic: 1) a subgroup of patients representing Table 2 Variables associated with the likelihood of developing ankylosing spondylitis after a mean of 12.2 years (10 to14) ref. [60]* and radiographic sacroiliitis after a mean of 14.9 years (11.7 to 25.1) ref. [ the early stage of a definite, well-categorized SpA (for example, AS) and 2) a subgroup consisted of patients with "definite" u-SpA. The stage between the onset of symptoms and the demonstration of radiographic sacroiliitis in patients with u-SpA is the pre-radiographic stage of AS. This recent ability to recognize u-SpA patients in this stage, thereby allowing the earlier use of TNF blockers such as etanercept or infliximab, has appeared to result in better clinical responses [69]; this improvement has had a major influence in the development of the concept of axial SpA [9]. The initial strategy was the identification of patients with axial symptoms at risk of AS [9][10][11]; lately, the a new strategy has evolved in focusing on identification of those with peripheral disease at risk of axial SpA [12]. Rudwaleit, et al. [9] first calculated the pre-test probability of axial SpA and AS among mostly adult patients with any kind of back pain according to the sensitivity, specificity, and positive likelihood ratio (LR) of SpA features as appeared in different publications. He built two algorithms to be used in clinical practice, one starting with the identification of IBP in patients with back pain and the other with by HLA-B27 testing alone. The former algorithm increases the probability of having axial SpA (including AS) up to around 90% and the latter up to 59%. Based on real-world clinical findings, Heuft-Dorenbosch, et al. [66] proposed changes on the level for placing MRI and HLA-B27 in the algorithm. 
In another study, IBP, HLA-B27, and sacroiliitis by MRI performed well in detecting axial SpA in patients referred by orthopedists and primary care physicians who had back pain >3 months, and age at onset of <45 years [70]. Ultimately, such criteria and algorithms provide the clinician, particularly the general practitioner and the orthopedic surgeon, with diagnostic strategies to differentiate IBP from mechanical back pain. There is no such analyses yet in juvenile-onset SpA, but the association of some variables with the development of AS and radiographic sacroiliitis suggest a role for genetic, demographic, and clinical features for the progression of u-SpA to AS (Table 2). Essential in interpreting such information is the recognition that while the development of axial SpA starts with back pain, data on juvenile-onset SpA is derived from JIA and JRA data, conditions whose principal characteristic is not axial disease, but peripheral arthritis. Axial and peripheral ASAS SpA criteria Today, the concept of axial SpA has moved from the nonradiographic sacroiliitis stage of AS to the wider spectrum of SpA, including the axial and peripheral categories [11,12] (Table 3). The ASAS criteria for SpA scope focus on the two most frequent groups of clinical features: the Table 3 Axial and peripheral spondyloarthritis Assessment of Spondyloarthritis International Society classification criteria Axial spondyloarthritis ref. [11] Peripheral spondyloarthritis ref. [12] Individuals <45 years with back pain >3 months* Individuals with arthritis or enthesitis or dactylitis Imaging HLA-B27 IBP group and the peripheral arthritis, enthesitis, and dactylitis group. In the initial studies of adult patients with IBP of less than 2 years, these diagnostic and classificatory properties of both the ASAS axial and peripheral SpA criteria appear to be better than those of reported by Amor, et al. [37] and ESSG [38] groups and implementation is under way. Drug efficacy studies are being carried out in patients with axial SpA to determine the role of TNF blockers in remission and prevention of structural damage [71][72][73]. 
Nevertheless, there are more issues in axial [11] and peripheral [12] SpA criteria that need to be considered: 1) The existence of two sets of criteria has academic and research implications, yet their validation in various populations and clinical scenarios are needed before they would be widely used in clinical practice; 2) The cost of MRI and HLA-B27 testing may limit the applicability of ASAS criteria in countries with budget restrictions or in segments of the population not covered by any health security system; 3) Despite the fact that the definition of some variables listed in both axial and peripheral SpA criteria differ from each other to avoid classification overlaps, the existence of different definitions may be confusing (Table 4); 4) Except for radiographic sacroiliitis, signs and symptoms in the ASAS criteria for SpA refer to active inflammatory and not structural damage; and 5) Regarding nomenclature, the term "peripheral SpA" may appear to be contradictory in itself and confusing The role of axial and peripheral SpA criteria in children and adolescents ASAS criteria for axial and particularly peripheral SpA may have important implications for the recognition of children and adolescents with SpA and the understanding Table 4 Definitions of parameters applied in the Assessment of Spondyloarthritis International Society classification criteria for axial and peripheral spondyloarthritis Axial SpA ref. [11] Peripheral SpA ref. [12] IBP According to experts (14): ≥4 out of 5 parameters present: of the relationship between juvenile and adult SpA. Children and adults would be classified under the same criteria, the long-term follow-ups of children with SpA would be more easily carried out, and the results of clinical trials and management would be interchangeable to some extent. Where do this leave us? Before quickly accepting these ASAS criteria for children and adolescents, however, it is important to answer two specific questions: 1) Is there a rationale for alternative criteria for children with ERA, PsA, and undifferentiated arthritis according to ILAR criteria? ERA and PsA (and perhaps some cases of undifferentiated arthritis) are the subgroups representing juvenile-onset SpA in the ILAR JIA classification criteria for JIA [29]. The concept of ERA and PsA in ILAR JIA classification does not correspond to the traditional concept of SpA because such ILAR criteria precludes the overlapping of inclusion criteria among different subgroups. Thus, the features that link ERA and PsA as SpA are the ones that may be thought to make them incompatible with each other. In contrast, the concept of SpA behind the ASAS axial and peripheral SpA criteria [11,12] reflects the Moll and Wright's [27] original idea of clinical overlaps between the different diseases. A B This author's opinion is that we need common concepts and criteria if we want to keep juvenile and adult onset types of SpA as a disease continuum. Amor, et al. [37] and ESSG [38] criteria have been validated in children and at least the latter has been used to some extent (Table 5). Today, ASAS axial and peripheral SpA criteria might be a good substitute for Amor, et al. [37] and ESSG [38]criteria and perhaps a good alternative to ILAR criteria [29]. Regarding nomenclature, it seems more appropriate for pediatric rheumatologists to use of the ILAR terms ERA and PsA than the contradictory term "peripheral SpA" since most children and adolescents have peripheral disease and rarely axial symptoms. 
2) If needed, which set of ASAS criteria is more appropriate for children, axial or peripheral? It seems clear that the axial and peripheral SpA classifications have different purposes. While the former [11] is intended to identify the spinal and sacroiliac involvement in the early inflammatory stage of AS, the latter [12] relies on peripheral arthritis, enthesitis, and dactylitis as entry criteria (Table 3). Regarding axial involvement, children and adolescents may have both active sacroiliitis on MRI (Figure 3) and radiographic sacroiliitis grade 2 bilateral or grades 3 or 4 unilateral (Figure 4), but in most cases these events occur in association with peripheral arthritis and enthesitis (Figure 5). Axial symptoms, as isolated features, are unusual in youngsters. The ASAS axial SpA criteria suggest the need for a history of back pain for at least three months as an entry criterion before performing MRI and/or radiographic studies of the sacroiliac joints. There seems to be no clear clinical rationale to perform MRI studies of the sacroiliac joints and the spine in children in the absence of back pain. Certainly, the logical criteria for children and adolescents are the ASAS peripheral SpA criteria since they include the most important signs and symptoms in patients with juvenile-onset SpA. Except for "good response to NSAIDs", on which no specific reports in children exist, children and adolescents with juvenile-onset SpA could well fulfill all axial and peripheral ASAS SpA criteria (Table 6). The diagnostic properties of some of these criteria were determined during the validation of the Amor et al. [37] and ESSG [38] classification criteria of SpA [74] and in a comparative study of juvenile-onset AS and u-SpA with JRA [60]. As expected, the sensitivity of back pain in the validation study of SpA according to ESSG [38] was very low, but its specificity very high (Table 5). In the latter study, the sensitivity, specificity, and +LR of tarsitis and enthesopathy were very high, suggesting that tarsitis should be considered an additional criterion in any classification criteria (Figure 6). The frequency of each criterion depends on the classification category. By definition, for example, IBP and radiographic sacroiliitis should be found in all patients with AS, whereas arthritis or enthesitis should be found in all patients with ERA. On the other hand, the definition of each criterion and its diagnostic value should be assessed in children. The question of whether the ASAS criteria for axial and peripheral SpA [11,12] have any role in the classification of children with SpA, ERA, PsA, and even undifferentiated arthritis remains to be determined. Ideally, all related clinical conditions in children and adults should be encompassed under the same criteria in order to facilitate scientific communication and patients' transition from childhood to adulthood medical care. From the therapeutic point of view, there should be some advantage if juvenile- and adult-onset forms had the same opportunity to be treated in the early inflammatory stage of the disease.

Conclusions

The dilemma of how to apply adult AS and related SpA criteria to children has challenged pediatric rheumatologists for decades. The ILAR JIA criteria for PsA and ERA do not correspond well with adult AS criteria. The new ASAS criteria offer rheumatologists a chance to reexamine how children with SpA, ERA, PsA, and undifferentiated arthritis can fit and not fit into these new criteria.
Adult and pediatric rheumatologists want to diagnose these patients described by the ASAS criteria early and be able to offer these patients aggressive TNF inhibitor therapy, when possible and affordable, to possibly prevent joint and bone damage. The application of those specific strategies in children and adolescents with SpA is challenging as the most important manifestation in the early stage of disease is not inflammatory back pain, as it is in adults, but peripheral arthritis and enthesitis. In this instance, the best approach to juvenile-onset SpA according to the ASAS criteria may be not to use the axial criteria but rather to use the peripheral set of criteria. The question of whether pediatric rheumatology needs new separate criteria for SpA, ERA, PsA, and undifferentiated arthritis remains controversial and goes against our need to encompass such similar adult and pediatric diseases under the umbrella of one set of criteria.

Competing interests

The author declares that he has no competing interests.

Author contribution

RB-V is the author and corresponding author and has designed and written the article. As the author, he read and approved the final manuscript.
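The diagnostic-property figures discussed above (sensitivity, specificity, and positive likelihood ratio of features such as back pain and tarsitis) all derive from a simple 2x2 contingency table. As a purely illustrative sketch, using hypothetical counts rather than data from the cited validation studies, the calculations can be written as follows:

```python
# Illustrative only: hypothetical 2x2 counts, not data from the cited studies.
def diagnostic_properties(tp, fp, fn, tn):
    """Sensitivity, specificity and positive likelihood ratio from a 2x2 table."""
    sensitivity = tp / (tp + fn)                    # patients correctly detected
    specificity = tn / (tn + fp)                    # controls correctly excluded
    positive_lr = sensitivity / (1 - specificity)   # +LR = sens / (1 - spec)
    return sensitivity, specificity, positive_lr

# Hypothetical feature present in 45 of 60 patients and 4 of 80 controls.
sens, spec, plr = diagnostic_properties(tp=45, fp=4, fn=15, tn=76)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, +LR={plr:.1f}")
```

A positive likelihood ratio of this magnitude is what makes a feature such as tarsitis attractive as an additional classification criterion, since it raises the post-test probability of juvenile-onset SpA well above the pre-test probability.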
2018-01-15T20:11:29.214Z
2012-05-31T00:00:00.000
{ "year": 2012, "sha1": "c1dd2892063cad30082f2d3e2bed99429ee0acc6", "oa_license": "CCBY", "oa_url": "https://ped-rheum.biomedcentral.com/track/pdf/10.1186/1546-0096-10-14", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4c29817df7f393f77669c70c58432e34f094bae3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
21501540
pes2o/s2orc
v3-fos-license
ISO-1 Binding to the Tautomerase Active Site of MIF Inhibits Its Pro-inflammatory Activity and Increases Survival in Severe Sepsis* MIF is a proinflammatory cytokine that has been implicated in the pathogenesis of sepsis, arthritis, and other inflammatory diseases. Antibodies against MIF are effective in experimental models of inflammation, and there is interest in strategies to inhibit its deleterious cytokine activities. Here we identify a mechanism of inhibiting MIF pro-inflammatory activities by targeting MIF tautomerase activity. We designed small molecules to inhibit this tautomerase activity; a lead molecule, "ISO-1 ((S,R)-3-(4-hydroxyphenyl)-4,5-dihydro-5-isoxazole acetic acid methyl ester)," significantly inhibits the cytokine activity in vitro. Moreover, ISO-1 inhibits tumor necrosis factor release from macrophages isolated from LPS-treated wild type mice but has no effect on cytokine release from MIF-deficient macrophages. The therapeutic importance of the MIF inhibition by ISO-1 is demonstrated by the significant protection from sepsis, induced by cecal ligation and puncture in a clinically relevant time frame. These results identify ISO-1 as the first small molecule inhibitor of MIF pro-inflammatory activities with therapeutic implications and indicate the potential of the MIF active site as a novel target for therapeutic interventions in human sepsis. MIF is an important pro-inflammatory cytokine that has been implicated in the pathogenesis of inflammatory disorders (1)(2)(3)(4)(5)(6). Administration of neutralizing anti-MIF antibodies has proven therapeutically effective in numerous animal models of systemic inflammation, including Gram-negative, Gram-positive, and polymicrobial sepsis, arthritis, and autoimmune diabetes (1-4, 7, 8). Circulating MIF levels are elevated in animals with sepsis and in patients with severe sepsis and septic shock (1). These and other results indicate that inhibiting MIF is a promising approach to develop new anti-inflammatory agents. Three-dimensional x-ray crystallography of MIF shows that the molecule exists as a homotrimer (9-11). This trimer possesses the ability to catalyze the tautomerization of the non-physiological substrates DL-dopachrome methyl esters (supplemental Fig. 1) into their corresponding indole derivatives (11,12). Crystallographic analysis of MIF complexed with p-hydroxyphenylpyruvic acid, a known MIF substrate (13), has revealed an active site which lies in a hydrophobic cavity formed between two adjacent subunits of the homotrimer (14). Tautomerase activity is an evolutionarily ancient phenomenon, which early life forms presumably utilized for synthesis, but there is no evidence that modern species use this in synthetic pathways. We reasoned that molecules that bind this site could be useful to target MIF function, because the tautomerase activity is expendable. We have designed a molecule to fit into the catalytic site and shown that (S,R)-3-(4-hydroxyphenyl)-4,5-dihydro-5-isoxazole acetic acid methyl ester (ISO-1) is a potent inhibitor of MIF tautomerase activity (15). The crystal structure of MIF complexed to ISO-1 reveals that ISO-1 binds to the enzymatic active site.

MATERIALS AND METHODS

All chemicals were obtained from commercial suppliers and used without further purification. Succinimidyl ester of Rhodamine Red-X was purchased from Molecular Probes. Methylene chloride (CH2Cl2) was distilled from phosphorus pentoxide.
Dimethyl formamide (DMF) was stored under argon in capped DriSolv TM bottles and used without further purification. Aluminum-backed Silica Gel 60 TLC plates with 254 nm fluorescent indicator were used. Developed TLC plates were visualized under a short wave UV lamp, stained with an I2-SiO2 mixture, and/or by heating plates that were dipped in ninhydrin. Flash column chromatography (FCC) was performed using flash silica gel (32-63 μm) and usually employed a stepwise solvent polarity gradient, correlated with TLC mobility. All 1H and 13C spectra were recorded on a JEOL spectrometer at 270 MHz for the 1H NMR spectra and at 67.5 MHz for the 13C NMR spectra. Chemical shifts are relative to the deuterated solvent peak and are in parts per million. The coupling constants (J) are measured in Hertz. The signals are described as s (singlet), d (doublet), t (triplet), m (multiplet), and br s (broad singlet). Low resolution mass spectra were acquired using a liquid chromatography mass selective detector. Synthesis of ISO-1-ISO-1 was synthesized in three steps as described previously (16) and is presented in supplemental Fig. 1. Synthesis of Fluorescent Derivatives of ISO-1 (FL-ISO-1)-Fluorescent derivatives of ISO-1 were synthesized as described in supplemental material. Intracellular Localization of Fluorescent ISO-1 Uptake by RAW 267 Macrophage-RAW 267.4 macrophages were plated on coverslips in a 24-well plate and treated with FL-ISO-1. After incubation with FL-ISO-1, the cells were washed five times with phosphate-buffered saline (PBS) (5 min each wash) and fixed using 4% formaldehyde (Ted Pella, Redding, CA) in PBS for 20 min at room temperature. Cells were washed and stained with DAPI (20 ng/ml) in PBS for 15 min at room temperature, mounted in 70% Vectashield mounting medium (Vector Laboratories, Burlingame, CA) in dH2O on a microscope slide and sealed using nail polish. Observation and imaging of the cells were carried out on an Olympus IX70 microscope using a 40× oil immersion objective and 568 nm (for FL-ISO-1) and 360 nm (for DAPI) excitation filters. Images were captured using a digital camera (Hamamatsu) and the ISee software program (Inovision). Measurement of NF-κB Activation by Electrophoretic Mobility Shift Assay-RAW 267.4 macrophages were treated with various concentrations of ISO-1 (1-100 μM) 30 min prior to LPS (endotoxin, Escherichia coli O111:B4, Sigma) addition. Macrophages were collected 2 h after activation with LPS and washed one time in PBS (pH 7.4). Nuclear extract was isolated using NE-PER Nuclear and Cytoplasmic Extraction Reagents according to the manufacturer's instructions (Pierce). For detection of NF-κB binding, nuclear extract from cells (~5 μg of protein) was incubated with 0.2 ng of 32P-labeled double-stranded oligonucleotide sequence in a 10-μl reaction volume containing 5× gel shift binding buffer (20% glycerol, 5 mM MgCl2, 2.5 mM EDTA, 2.5 mM dithiothreitol, 250 mM NaCl, 50 mM Tris-HCl (pH 7.5) and 0.25 mg/ml poly(dI-dC)-poly(dI-dC)) for 30 min at room temperature. The samples were resolved on a 4% polyacrylamide gel and visualized directly by autoradiography after drying the gel.
The NF-κB consensus sequence (Promega) was labeled with 10 units of T4 polynucleotide kinase (Promega) per 25 ng of oligonucleotide, 1× kinase buffer, and 5 μl of [γ-32P]ATP (Amersham Biosciences, catalog no. PB10168, 10 mCi/ml) for 30 min at 37°C. The bound NF-κB bands were quantified by scanning densitometry of a bio-image analysis system. The results for each treatment were expressed as relative intensity compared with the control (saline-treated) intensity. Animal Experiments-All animal experiments were approved by the Institutional Animal Care and Use Committee of the North Shore-Long Island Jewish Research Institute. Male Balb/C mice, ~8 weeks old, were subjected to endotoxemia or cecal ligation and puncture (CLP). Endotoxemia-Endotoxemia was induced by injection of a sublethal dose of LPS (5 mg/kg; intraperitoneally). Mice were treated with various concentrations of ISO-1 (3.5-35 mg/kg; intraperitoneally) or vehicle (aqueous 5% dimethyl sulfoxide) 30 min before and 6 h after LPS infusion and then twice daily for 3 days. Animals were monitored for survival for 2 weeks. Cecal Ligation and Puncture-induced Polymicrobial Sepsis-Mice were anesthetized (ketamine 100 mg/kg and xylazine 8 mg/kg administered intramuscularly), and abdominal access was gained via a midline incision. The cecum was isolated and ligated with a 6-0 silk ligature below the ileocecal valve and the cecum punctured once with a 22-gauge needle; stool (~1 mm) was extruded from the hole and the cecum placed back into the abdominal cavity. The abdomen was closed with two layers of 6-0 Ethilon sutures. Antibiotics were administered immediately after CLP (0.5 mg/kg Primaxin, subcutaneously, in a total volume of 0.5 ml/mouse), together with a single dose of resuscitative fluid (normal saline solution, 20 ml/kg body weight, administered subcutaneously) immediately after CLP surgery (18). Control (aqueous 5% dimethyl sulfoxide as a vehicle) or ISO-1 (35 mg/kg; intraperitoneally) treatment was started 24 h after the induction of sepsis and repeated twice daily for 3 days. In one series of experiments, anti-MIF antibody (2.8 mg/kg; intraperitoneally) treatments were started 24 h after CLP-induced sepsis and then given once daily for 3 days. Animal survival was monitored for 2 weeks. Statistical Analysis-For Student's t test, p < 0.05 was accepted as statistically significant, and graphs show mean ± S.D. Data were analyzed with Fisher's Exact Test using Prism software.

RESULTS AND DISCUSSION

Current data suggest that neutralization of the pro-inflammatory activity of MIF would be highly beneficial in the treatment of many diseases (5,6). This assertion is supported by the substantial therapeutic effects of MIF-specific antibodies in several models of inflammatory and autoimmune diseases. Although anti-MIF antibody attenuates the inflammatory cascade in sepsis, improving the survival rate, it is difficult to use anti-MIF antibodies as tools to delineate the mechanisms underlying MIF pro-inflammatory activity. Here we identify a method of inhibiting MIF pro-inflammatory activities by targeting MIF tautomerase activity, an evolutionarily ancient activity of MIF. We reasoned that a more desirable approach than antibodies would be to develop non-toxic molecules that can specifically block the pro-inflammatory activities of MIF (15). We have designed small molecule inhibitors of MIF by targeting a unique, catalytically active site within the cytokine molecule.
It is known that disruption of the active site by insertion of an alanine between Pro-1 and Met-2 abolishes the MIF tautomerase activity, and the resultant mutant is defective in the in vitro glucocorticoid counter-regulatory activity of MIF. Several other studies have supported the link between MIF bioactivity and this active site (19-21). Our first class of MIF inhibitors was a p-hydroxyphenylimine derivative of amino acids, rationally designed based on the scaffold of dopachrome and p-hydroxyphenyl pyruvate (13,22). L-Trp Schiff base emerged as the most potent inhibitor of MIF tautomerase activity (17). Unfortunately, L-Trp Schiff base compounds lack long term stability. We also have shown that a P450-dependent metabolite of acetaminophen, N-acetyl-p-benzoquinone imine (NAPQI), covalently binds to MIF at its enzymatic site (23). The NAPQI adduct inactivates MIF cytokine activity in a number of in vitro bioassays, including interference with the anti-inflammatory effect of dexamethasone, confirming the role of the active site in mediating MIF bioactivity. However, NAPQI is not useful as an anti-inflammatory agent, because it has an unacceptable toxicity profile. Thus, we have designed ISO-1 as an inhibitor of MIF tautomerase and glucocorticoid-regulating activity (15). The crystal structure of MIF complexed to ISO-1 reveals that ISO-1 binds within the active site in a similar manner to p-hydroxyphenylpyruvic acid (14). Our previous studies have demonstrated a strong correlation between the specific ISO-1 inhibition of MIF tautomerase activity and suppression of MIF pro-inflammatory activities (15), but it was not known if inhibiting endogenous MIF tautomerase activity with ISO-1 would suppress inflammation in vivo. As a first step we examined the uptake of ISO-1 by RAW 264.7 mouse macrophages using fluorescently labeled ISO-1 conjugated to rhodamine (FL-ISO-1) (supplemental Fig. 2). After exposing macrophages to FL-ISO-1 for 30 min, the presence of the fluorescent ISO-1 derivative was observed in the cytoplasm and nuclei of the cells. The accumulation of fluorescence increased from 2 to 30 min (Fig. 1a). To determine whether ISO-1 could inhibit intracellular MIF tautomerase activity, macrophages were treated with various concentrations of ISO-1 (10-100 μM) for 30 min. The medium was then replaced with ISO-1-free medium. The cells were then lysed and the tautomerase activity determined (17,23) (Fig. 1b). ISO-1 applied to the cell cultures inhibited intracellular tautomerase activity in a dose-dependent manner, indicating that the exogenous ISO-1 led to suppression of intracellular MIF tautomerase activity.

[Figure 1 legend (fragment): a, uptake of FL-ISO-1 by macrophages. After incubation with FL-ISO-1, the cells were washed with PBS and fixed using 4% formaldehyde, washed and stained with DAPI (20 ng/ml), mounted in 70% Vectashield mounting medium on a microscope slide, and observed and imaged on an Olympus IX70 microscope using a 40× oil immersion objective; b, inhibition of intracellular MIF tautomerase activity by ISO-1. RAW 267.4 macrophages (1 × 10^5) were treated with various concentrations of ISO-1 (1-100 μM) and then lysed using Tris buffer under non-denaturing conditions. The lysates were analyzed for the tautomerase activity of MIF using L-dopachrome methyl ester as we described previously (15,17), and values represent mean ± S.D. of three separate experiments (*, p < 0.05 and **, p < 0.001).]
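The dose-dependent inhibition described above is typically summarized by fitting a sigmoidal (four-parameter logistic) curve to the residual tautomerase activity and reading off an IC50. The sketch below is illustrative only: the concentrations mirror the 1-100 μM range used in the text, but the activity values and the resulting IC50 are hypothetical, not the published data set.

```python
# Minimal sketch: fit a four-parameter logistic (Hill) curve to hypothetical
# dose-response data and estimate an IC50. Values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Residual tautomerase activity (%) as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc_um = np.array([1, 3, 10, 30, 100], dtype=float)    # ISO-1 concentration, uM
activity = np.array([98, 90, 65, 35, 15], dtype=float)  # % activity (hypothetical)

params, _ = curve_fit(four_pl, conc_um, activity, p0=[10.0, 100.0, 10.0, 1.0])
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.1f} uM (Hill slope {hill:.2f})")
```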
Intracellular MIF occupies a critical role in mediating the cellular responses to pathways activated by LPS (24). Endogenous MIF is required for the basal expression of TLR4, the endotoxin receptor, and MIF-deficient cells are hypo-responsive to endotoxin (2,24). Accordingly, we reasoned that ISO-1 inhibition of MIF would suppress endotoxin responses in macrophages. ISO-1 dose-dependently inhibited endotoxin-induced TNF release (Fig. 2a) and nuclear translocation of NF-κB up to 70% at 100 μM (Fig. 2b). Thus ISO-1 recapitulates the phenotype of the MIF-deficient macrophages and is associated with decreased NF-κB activation and TNF production in response to LPS. To determine the anti-inflammatory effects of ISO-1 in vivo, we next administered ISO-1 to wild type and MIF knock-out mice. Consistent with the study by Mitchell et al. (25), peritoneal macrophages from LPS-treated MIF knock-out mice produced 60% less TNF than those isolated from wild type mice under similar conditions (Fig. 3a). ISO-1 inhibited TNF release by 67% in peritoneal macrophages from wild type mice (Fig. 3a). Importantly, ISO-1 administered to endotoxemic MIF knock-out mice did not attenuate the macrophage TNF release. This indicates that the effect of ISO-1 in suppressing macrophage responses to LPS requires endogenous MIF. We tested whether ISO-1 can improve the survival rate from lethal endotoxemia. ISO-1 treatment for 3 days dose-dependently improved survival of endotoxemic mice (p < 0.001 at 35 mg/kg), as compared with vehicle-treated controls (Fig. 3b). This level of survival improvement by ISO-1 is comparable with the effect of anti-MIF antibodies in the same model (3). We also tested the toxicity of ISO-1 in mice and found no evidence of lethality up to 250 mg/kg. This concentration is almost 7-fold higher than the maximum dose used in our in vivo studies. The importance of MIF as a molecular therapeutic target in sepsis has been confirmed by the observation that treatment with anti-MIF antibodies significantly improves survival in septic mice (1).

[Figure 2 legend (fragment): for detection of NF-κB binding, nuclear extract from cells (~5 μg of protein) was incubated with 0.2 ng of 32P-labeled double-stranded oligonucleotide sequence, and the samples were resolved on a 4% polyacrylamide gel and visualized directly by autoradiography after drying the gel. The results for each treatment were expressed as relative intensity compared with the control (saline-treated) intensity.]

[Figure 3. ISO-1 is a protective agent in a mouse model of endotoxemia. a, ISO-1 inhibition of TNF-α production is specific to MIF. As described previously (25), C57Bl/6 MIF+/+ and MIF-/- mice (each n = 7) were injected intraperitoneally with LPS (10 mg/kg) and either ISO-1 (1 mg/mouse) or vehicle. Peritoneal macrophages were cultured ex vivo for an additional 24 h, at which time supernatants were collected for determination of TNF concentration by ELISA; b, male Balb/C mice were injected (intraperitoneally) with LPS (10 mg/kg) followed by different doses of ISO-1 (3.5-35 mg/kg) or vehicle twice a day for 3 days. Mice were observed for 2 weeks. Data points are from three independent experiments (n = 33; **, p < 0.001).]

We examined the time course of MIF release in mice with CLP-induced peritonitis, a widely used model of sepsis. We found that serum MIF levels increased to 70% of maximum levels within 24 h post-CLP and peaked at 36 h (data not shown). This identified MIF as a late mediator in sepsis, with potential to be inhibited in a clinically relevant time frame.
Based on these results we reasoned that a delayed treatment with ISO-1, consistent with the kinetics of MIF release, could be successfully applied to improve survival. ISO-1 treatment (35 mg/kg) initiated 24 h after CLP surgery and continued for 3 days resulted in survival of 77% (p < 0.001) compared with 38% in the control (vehicle-treated) group (Fig. 4a). Anti-MIF antibodies (3.5 mg/kg) given in a similar time frame improved survival to an extent comparable with ISO-1 (Fig. 4b). This is the first evidence that a specific inhibitor of MIF that targets the tautomerase site is protective against lethal sepsis in an established, widely used animal model. These data indicate that MIF activity can be therapeutically regulated with a molecule that specifically targets the tautomerase active site, a non-essential enzymatic function. It is not likely that the tautomerase activity is directly involved in the pro-inflammatory action of MIF, because no physiological substrates of the D-series of molecules have been identified in vertebrates. Rather, it is much more plausible that ISO-1 interacts specifically with MIF at the tautomerase site, resulting in altered binding of MIF to other cellular signaling protein partners. Previously we have shown that inhibition of the MIF tautomerase active site abolishes the ability of the molecule to regulate glucocorticoid activity in vitro (15). Here we demonstrate the importance of ISO-1 inhibition of this MIF catalytic site in the suppression of the cytokine pro-inflammatory activities and identify potential therapeutic implications. ISO-1 significantly increases the survival rate in severe sepsis induced by endotoxin or CLP. The successful treatment of sepsis in mice by ISO-1 in a clinically relevant time frame indicates that this strategy to design MIF inhibitors can be developed further. These studies also raise the possibility that the deleterious sequelae of MIF in diseases of MIF excess may be abrogated by treatments with ISO-1 or related, specific, small molecule MIF inhibitors.
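The survival comparison reported above (77% with delayed ISO-1 versus 38% with vehicle after CLP) is the kind of binary outcome that the Fisher's Exact Test named in the statistical methods is designed for. The sketch below is illustrative only: the group sizes (26 animals per arm) are assumed so that the survival fractions approximate the reported percentages; they are not the published group sizes.

```python
# Minimal sketch: Fisher's exact test on assumed CLP survival counts.
# Group sizes are hypothetical (26 per arm), chosen only so the survival
# fractions approximate the reported 77% vs 38%; not the published group sizes.
from scipy.stats import fisher_exact

#                survived, died
iso1_arm    = [20, 6]    # ~77% survival with delayed ISO-1
vehicle_arm = [10, 16]   # ~38% survival with vehicle

odds_ratio, p_value = fisher_exact([iso1_arm, vehicle_arm], alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")
```

With these assumed counts the two-sided p-value falls well below the 0.05 threshold; the value reported in the article reflects the actual pooled group sizes.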
2018-04-03T02:52:47.994Z
2005-11-04T00:00:00.000
{ "year": 2005, "sha1": "658a66519fe290e4e65b876fafe91f47c127dd35", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/280/44/36541.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "b3127ae2b96b3a43a53066627360b7aaef8061d5", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine", "Chemistry" ] }
221902633
pes2o/s2orc
v3-fos-license
Understanding the Science of Indigenous Health System: Key to Sustainable Collaborations Most of the health systems in developing countries are dysfunctional and hardly responsive to the needs and demands of patients. Access to a plural healthcare system and reports of patients abandoning western medicine for indigenous medicine are signs of nonresponsive health system. The major contributing factors are the failures of the allopathic health system to recognize that indigenous medicine is a living and practised science, with its own philosophy, beliefs and practices developed over centuries. Indigenous communities and the patient’s worldviews are intertwined with indigenous traditions, practices and beliefs. While the two health systems, allopathic and indigenous, coexist in Africa, they must collaborate in the management of patients. The two systems assign different etiological explanations and meanings to health, disease and illness based on worldviews, epistemologies and methodologies developed over time. Change of mindset, attitudes and practices through decolonization will lead to sustainable collaboration. "Death is a spiritual illness to eradicate physical and biological life" After reading this chapter, the reader should be able to appreciate the need to interrogate the predominant Euro-western mindset, attitudes and practices, which have existed as the results of centuries of colonization. There should be a new approach which enables the reader to explore the multiplicity of epistemologies and worldviews to include the voices of the indigenous communities and its science, which tend to be referred to as witchcraft, evil and inferior practices. The reader is challenged to critically evaluate the power and extent of the influence of Euro-western history, its culture and philosophy on practices of medicine as science and monolithic approach to the search for answers to illness and diseases. The principles of the Euro-western approach are that if you do not know it, it does not exist and if you do not understand it, it's not science and therefore should be rejected. Their tendency is that of breaking it into pieces and flattening it to fit the mindset. At the end of the chapter, the indigenous and Euro-western paradigms are compared in terms of what counts as scientific knowledge and ways of knowing, including the respective value systems applied in research. The reader is expected to continue to search for the path which will lead to full discovery of own truth. As for the indigenous researchers, they should be able to remove the shackles which chained them to Euro-western practices and change their mindset of being loyal followers and consumers of western developed knowledge without considering the relevance thereof in the context of time, space and place. Where the content is alien to community beliefs and practices, it should be analyzed and interpreted through the worldview of the indigenous communities. The institutions of higher learning, and especially those entrusted with the responsibility of approving, awarding and granting permission to conduct research, should consider whether they are promoting the discovery of unreported knowledge or whether they are promoting existing but unreported indigenous knowledge. Comments and questions from members of ethics committees, such as "Which methodology are you following?" 
and" Is there a similar approach reported in the literature?", suggest that those members do not understand that an indigenous approach to research and to access knowledge entails a ceremony which often involves communication with ancestors, facilitated by indigenous healers. It must be indicated from the onset that an indigenous health system consists of multiple connections which indigenous communities experience with individuals around them, the environment, living and nonliving beings and objects in a state of physical, mental and spiritual consciousness. It is the frame of reference through which indigenous communities and their healers see the world and interpret events, including the diagnosis and management of illness and misfortunes in their environment. It further exposes the reader to the existing ignorance and misunderstanding regarding the science behind indigenous health systems and the philosophy of Ubuntu applied in the management of patients in indigenous communities. The authors strongly advocate that an environment of professional neutrality and open-mindedness should be the premise on which negotiations for collaboration between indigenous and Euro-western health systems are conducted. Background "When the body is smarter than the brain…" Most of the Euro-western-based social and health sciences disciplines have inherited the logic that when they mediate and interact with indigenous communities, their disciplines constitute the gold standard [1]. This logic represents a colonial mindset of authority over and superiority to indigenous knowledge systems and is critical of any systems and science, which does not adopt or conform to their views of what constitutes science [2][3][4]. The sciences and philosophies of indigenous knowledge systems are labeled as witchcraft, pagan and barbaric. In the past this approach has resulted in representatives from indigenous communities to abandon their indigenous character, practices and own particular scientific reasoning and methodology [5,6]. Where colonization to change indigenous practices failed after applying the conventional methods and means, it resorted to drastic and draconian actions such as banning it [7]. In order to survive the powers of colonialism, it appeared that those representing indigenous health systems and knowledge have adapted to "a new knowledge" and experienced their environment along the rules of a western system [8,9]. Despite an increase in awareness of indigenous beliefs, forms of living and practices, the latter still get destroyed when they do not meet western standards. Destruction takes place through inquiries based on the relational realities and forms of knowing that are predominantly western, and anything not complying with this should be revised to fit the mold [9]. "Knowledge is acquired, without respect it's a self-imprisonment" Ways of knowing follow a particular trajectory of searching for knowledge and is influenced by how one relates to the source of knowledge and the people who own and share that knowledge. Most of our understanding of science as determined by western standards is about compartmentalizing knowledge as it is being discovered and fragmentation thereof to fit the western model while ignoring the environment in which it is to be applied. 
It is common for practitioners of western-based science, in the process of the so-called "new discoveries" of things that have existed in indigenous communities, to disregard indigenous characters and names and rename them according to western concepts. Where there is poor understanding of the indigenous sciences, the modus operandi would be to destroy them to prevent them from competing with western standards. As Kaptchuk and Miller [10] explain it, western science seems not to understand that indigenous sciences do not characterize ways of knowing as higher and lower knowledge. The dominant Eurocentric model of thinking and relating to items and experiences is an attempt at homogenizing everything to become comprehensible [1,[11][12][13][14].

Indigenous health practitioners

There are different categories of indigenous health practitioners in Africa. Depending on the region, most of them are also known as traditional healers, medicine doctors [15], etc. For our readers, an indigenous health practitioner is defined as someone who is recognized by the community in which she/he lives as a competent person to provide advice on the causation of disease, misfortunes and disabilities in their community and to diagnose and provide treatment for physical, spiritual and psychological conditions in individuals and the community as a whole [14]. The calling to become an indigenous health practitioner may manifest in different ways and at different ages or times in life. Some are "called" before they are born, while others are "called" during childhood or adulthood. Some are "called" through illness, while others are "called" by experiencing persistent unnatural occurrences in their lives such as dreams and visions of departed relatives and ancestors [1,9,15]. There are instances where the call is either not realized soon or is ignored by the person [15]. If the "calling" is not obeyed, the person becomes ill or continues to suffer until he or she accepts the "calling" and enters into an apprenticeship with a more experienced indigenous health practitioner [12]. In South Africa, the process of training to become a health practitioner is called "u thwasa" [15]. The training period may range from a few weeks to months. During this period the intern/thwasana discovers his or her ancestors and the means and methods by which they will communicate with and through him or her [9]. Visits by ancestors would often take place during the night, and prescriptions and directions would be provided on how and where to obtain treatment for the patients. Upon mastering the art of abiding by and obedience to the ancestral spirits, a graduation function is organized [9]. While the knowledge about diseases is passed on from the training supervisor to the intern/thwasana during apprenticeship, the knowledge of medicines and their preparation and application is directly communicated to the thwasana by his/her ancestors only. Both the trainer and thwasana closely guard these secrets from the ancestors [9,15]. Indigenous health practitioners are broadly categorized according to the techniques they employ and the methods of diagnosis [15]. The three main categories are discussed below.

Diviners

Diviners are a category of indigenous health practitioners who diagnose diseases and illness through divination. It is the unique and special process of interpreting the message of ancestors through possessed crafted objects such as bones, shells, wood, etc.
This category of indigenous health practitioners also possesses the spirit to interpret misfortune and to perform family rituals to secure the protection and guidance of ancestors. They represent the memories of ancestors in human form and establish a crucial link between humans and the supernatural [2,16,17].

Herbalists

Unlike diviners, this category of practitioners consists predominantly of ordinary people who have acquired an extensive knowledge of herbal medicine and the application of plant components such as roots, barks, leaves, oils, minerals, etc. in treatment. It is a category in which skills are learned and acquired without the involvement of ancestors. They voluntarily decided to undergo training with an established herbalist and then practice independently. They diagnose and prescribe medicines to prevent and to alleviate illness and to provide protection against witchcraft and misfortune or evil, as well as to bring prosperity and happiness [16].

Traditional birth attendants

This category probably existed long before all other categories of health practitioners. Through the centuries their services have been utilized by all of humanity, albeit as a matter of necessity due to cultural beliefs or medical conditions which allopathic health practitioners were not able to explain and manage, such as birthmarks [15]. Their focus is on mother and child health, starting from conception right through till the child reaches the age of 5 years. The health of the nursing mother is managed together with that of the child. It is believed that the newborn will not survive if prenatal conditions and infections that the mother may develop are left untreated. The traditional birth attendants are mostly elderly women of 60 years or older and use herbal medicines to treat their patients.

Spiritual healers/prophets/faith healers

The spiritual or faith healers and prophets have recently emerged as another category of indigenous health practitioners, and whether they should be recognized and accepted as indigenous health practitioners continues to be debated. They use prophecy and faith in supernatural beings as the source of their power. A common practice among them is the use of prayer, candlelight and/or water to heal their patients [18]. There is division within this category of indigenous health practitioners, and it is based largely on legitimacy and beliefs. Prophets/spiritual healers among themselves differ on the legitimacy of spiritual healers based on calling and supernatural sources of communication. One group of spiritual healers claims to have revelations and visions related to supernatural beings and a so-called heaven as their calling. This group also claims to communicate directly with God in the healing process and does not make use of roots and other raw plant materials to prepare traditional medicines. Instead, they use water and processed herbs to heal. However, the second group of spiritual healers claims to have visions of objects and people as their calling, similar to diviners.

Convergent and divergent views between allopathic health practitioners and indigenous health practitioners

Apart from a few areas of possible convergence between the two health systems, it is the divergent views which have obstructed the development of sustainable collaborations between allopathic and indigenous health practitioners. Some are highlighted in Table 1 below.
The areas of convergence between the two systems are that both display sympathy towards their patients and care about the wellbeing of their patients. In addition, they accept accountability for their patients' health, work from a body of underlying empirical knowledge and both engage in elaborate processes of discernible empiricism in their efforts to diagnose and treat their patients [16]. There is, however, a view among allopathic health practitioners that indigenous medicine in terms of its body of knowledge and practices had remained stagnant during the course of human evolution [19]. The irony is that much of western knowledge, which is vowed to be scientifically based, originated from indigenous medicine by selecting certain practices from the latter, subjecting it to analyses and then incorporating some of that into allopathic settings. Very little, if any, recognition is given to the science and philosophy of indigenous knowledge, let alone assigning intellectual ownership. From the above table, it is evident that the two health systems display differences in their approach to knowledge and science. These differences could be explained using ontology as it evolved culturally and historically over time. The allopathic perspective is based on western science, while indigenous medicine is based on indigenous sciences. Allopathic health practitioners seem to find it difficult to accept the indigenous sciences into their "rational" scientific framework because it does not fit their model. Another difference relates to the belief of what causes disease and illness. Allopathic medicine associates disease and illness with invading pathogens such as bacteria, parasites and viruses and or physiological changes. The indigenous system believes that disease and illness are caused by supernatural forces. Various explanations are offered for "why me and now" [20][21][22]. It can be as a result of the individual's own spiritual mishaps, provocation of ancestors by violating taboos, obligations or responsibilities or a mere "call" by ancestors to perform certain rituals. Witchcraft and evil spells are regarded as common causes [7]. Another aspect on which the two systems differ is on what is understood to be science. The point of departure would be on how knowledge or empiricism as a science is defined. Science as it is known from a western perspective in modern times is the accumulation of knowledge through experience/experimentation and observation, and it is stored in books or electronically [23]. In order to be educated, one has to read the books or access the information electronically. For this reason, allopathic medicine utilizes textbooks and other archived material to pass knowledge on. On the other hand, among indigenous health practitioners, knowledge is handed down, often verbally, from healer to apprentice, from one generation to the next [24,25]. It provides the "paradigm" through which and by which they understand and interpret their environment. The entire constellation of beliefs, values, techniques, etc. is shared by the members of a given community as health practices [12,18,26]. The two systems have a different understanding and explanation of what constitutes a healthy individual and society and illness in the community. The allopathic health system subscribes to World Health Organization's (WHO) definition of health as "a state of complete physical, mental and social wellbeing and not merely the absence of disease or infirmity" [27]. 
This definition of health is limited to an individual within the society and does not comply with the indigenous standard of health. For the indigenous communities, health is not experienced at an individual level. It is defined in terms of the completeness of society as a whole, and of connectedness and harmonization between the living human kingdoms/beings and their ancestors, animal kingdoms and environment. It values health as a system, similar to the human system, with different components, and each component contributes to the functionality and completeness of purpose [28,29]. There is growing evidence that the two main health systems-indigenous and allopathic-are operating side by side in Africa [9,30]. Depending on the country and history of colonization, allopathic health practitioners tend to be well resourced and supported by the government, while indigenous health systems and their practices are neglected and, in some situations, suppressed. At times, the lack of communication and the adversarial relationships between the two systems impact negatively on the delivery of health services to communities [31,32]. Patients are receiving conflicting advice from their health practitioners. Treatment overdose and drug interactions are very common, and this is not surprising as the two systems have divergent worldviews of the causes of diseases; why, when and how a person becomes ill; and finally the diagnostic tools, processes and approaches to treatment [33]. Their understanding of what constitutes a diseased patient and/or community and of the healing process is largely influenced by their respective values and meaning of life and death. Why people get sick and why they should die or live longer will determine their acceptance of the outcome: death or recovery to health/healing. For example, if the death of a diseased individual is viewed as a means of joining the ancestors to provide guidance and advice to the living, the outcome of healing will not be considered as good and beneficial to the indigenous communities. Indigenous health systems acknowledge that there are diseases and/or illness which infect or attack the human spirit without affecting the physical body [34]. The illness could be a result of spiritual attacks by evil spirits or evil spells, demonic forces, or ancestors' way of communicating with an individual, family and communities. While western science has not accepted this concept of disease, not everything that western science practices and observes meets its own standard of science. For example, western science believes that life in human beings constitutes the coexistence of the physical body, spirit or soul. The existence of the spirit as part of giving life to the body is not based on sciences, but on a belief system which is common to all [35][36][37]. Indigenous science believes that the spirit, which inhabits individuals, does not present with physical signs and symptoms which could be detected and diagnosed by modern technology as employed by allopathic health systems, e.g. a stethoscope, diagnostic radiography (X-rays), ultrasound, computed tomography (CT) scans, magnetic resonance imaging (MRI) scans and nuclear medicine scans. The opposite is also true. There are diseases which infect/attack the physical body without affecting the spiritual aspect [35,[38][39][40]. Allopathic practitioners are well resourced to diagnose and manage both that of the body and spirit.
At the centre of the two health systems is the phenomenon of dual consultation, which is being exercised by patients based on their preferences of health provider, the accessibility and affordability of the services, and the integration of the disease management model with their belief systems and practices. The perception created over the years under colonial rule by western authorities is perpetuated with the mindset suggesting that patients belong to the allopathic health system with no right of choosing and consulting health providers other than allopathic health practitioners [4]. Failure to recognize indigenous worldviews and beliefs has created a crisis for allopathic healthcare which persists to this day [41,42]. This is particularly evident among HIV/AIDS and TB patients who are reported to be abandoning western treatment in favor of indigenous remedies and practices. In most cases allopathic health practitioners are made aware of this, often at an advanced stage of treatment, when patients who have been exercising their rights to choose disclose that they are also receiving treatment from indigenous health practitioners [43][44][45]. Without the recognition of patients' rights and the establishment of collaborations and referrals of patients between the two systems, the postcolonial health system will remain dysfunctional and ineffective in fully responding to the needs of the indigenous communities.

Integration of the two healthcare systems

The point of departure should be the interrogation and understanding of the existing health system which was operating in communities before colonization and globalization of their environment [46,47]. The definition of indigenous in our context refers to the root, something natural or innate (to), a way of life, living, beliefs and practices which is an integral part of community culture. It is embedded in the culture and is therefore tacit knowledge. It is communal, a shared form of knowledge achieved through experience. It is a linguistic phenomenon. This phenomenon serves cognitive interests of three types, namely technical, moral and critical of one's own environment [48]. Due to globalization, indigenous communities have become increasingly exposed to foreign cultures and practices. There are no aspects of their social life, customary practices or traditional behavior which remained untouched. Communities are now living in countries without borders, and they seem to be short-changed by globalization and colonization. Foreign cultures and practices have intruded into the indigenous inner self and being without respect and invaded their living space, similar to a declaration of war against cultures that were different from those of the colonizers. The character and nature of globalization and colonization is to perpetuate the dominance of that which is being introduced to communities: western or foreign culture, language and health systems, including diseases against which indigenous communities had no innate immunity, constantly displacing indigenous knowledge systems of managing their patients. For centuries, indigenous communities have maintained their dignity and trust in that which worked for their communities and which was gained through experience over many years. They rebelled against colonization and resisted being mere bystanders who simply witness their indigenous norms and values becoming extinct.
With the rediscovery of self, communities are increasingly reclaiming their past and striving to retain their cultures and ways of knowing which were previously marginalized and dubbed unscientific and barbaric. This is no easy feat as they are split between claims of global science on the one hand and the equally compelling claims to recover the "African past" on the other hand. Health systems are defined as all activities in the community which serve to promote, restore and maintain people's health. In a postcolonial and globalized context, both the indigenous and allopathic health systems are operating side by side. For the two systems to function optimally, the playing field would need to be leveled through decolonization of mindsets, attitudes and practices. The desired outcome should be the gaining of knowledge, together with acknowledgment and recognition of the important role that the indigenous health system plays in the delivery of primary healthcare services. Globally, indigenous medicine has been declared a component of Primary Health Care (PHC) by the World Health Organization's Health Promotion: Strategy for the African Region. The strategy recommends that different countries should promote and incorporate their indigenous health practitioners into healthcare systems. The implementation of the recommendation has been met with resistance and criticism from allopathic practitioners [9]. A significant number of indigenous communities prefer indigenous medicine as their first choice. Indigenous medicine has always been acceptable, accessible, available, affordable and attainable to them. Several countries have adopted legislation promulgating traditional medicine initiatives. In response to the World Health Organization's Health Promotion: Strategy for the African Region, South Africa promulgated the Traditional Health Practitioners Act [49], to establish a regulatory body controlling the registration and education of THPs [49]. Despite this legislation, the allopathic and traditional healthcare sectors remain in conflict and disjointed. Few allopathic health practitioners understand the philosophy, ontology and epistemology of indigenous medicine, let alone accept it as scientific with its own long-standing experiments and standards comparable to western medicine [4,12]. Simply stated, most of the allopathic practitioners are not able to free themselves from the shackles and deeply embedded mindsets of colonization. Because there is no true understanding of indigenous healthcare systems and their sciences, allopathic health practitioners do not want indigenous medicine to be recognized as a health science [4,12].

Misinterpretation and misrepresentation of the indigenous healthcare system

The introduction of Euro-western culture, practices and religious beliefs, such as the Christian faith, dominated and disregarded the indigenous knowledge system. Indigenous knowledge still remained alive among communities even though it was not recognized by the colonizers [4]. This had a significant impact on colonizing the minds of indigenous people. It enforced a change in indigenous culture, behavior, practice and belief.
The continued alienation and exclusion of indigenous health practitioners from the management of patients is largely based on a monopolistic health system which recognizes allopathic practice as the only legitimate health system, a stance emanating from the prevailing dominance of allopathic health practitioners and the lack of respect for and recognition of traditional health systems [12]. In many of the formerly colonized countries, indigenous healthcare systems continue to be regarded as less important by Eurocentric healthcare providers and funders of healthcare services [12]. Indigenous healthcare is often perceived as a threat to western norms of healthcare standards and is at times associated with "witchcraft", actively discouraged and suppressed through powerful legislation [4]. Anecdotal actions, supported by published reports, reinforce the stereotype which appears to suggest that patients belong to allopathic health practitioners [4] and have no right to seek alternative opinions and treatment other than what western medicine prescribes. These actions go against the provisions of the Patient and Human Rights Charter in South Africa. In general, communities and patients are denied the power of self-determination, based on experience and informed by their understanding of health in their own particular context. Most of the health training curriculums in universities and colleges do not expose students to the science of indigenous health systems, community belief systems and their particular worldviews. When confronted with patients demanding alternative health services from indigenous health practitioners, allopathic health practitioners have the perception that such demands for pluralism would lower their standard of health service provision and result in inappropriate management of "their" patients by indigenous health practitioners through poor treatment, lack of compliance and a possible overdose of medication. These views of allopathic health practitioners have been commonly expressed to and reported by HIV/AIDS patients using traditional medicine concurrently with allopathic medicine. Due to misinterpretation and misrepresentation, there is a lack of trust between the allopathic and indigenous healthcare sectors, which is exacerbated by a lack of understanding regarding the knowledge base of each sector. Allopathic healthcare providers simply expect indigenous health practitioners to use allopathic principles to treat ailments and promote health instead of indigenous practices. Throughout the era of colonization, and even during postcolonization in Africa, westernized healthcare training institutions have not incorporated traditional medicine and its philosophies in their curriculums. In instances where mention of indigenous health practices is made, it is usually done in a unilateral manner without incorporation of indigenous health practitioners as tutors and lecturers.
As a result, allopathic healthcare practitioners deny students the opportunity of exposure to the multitude of traditional health practices, among others the traditional preparation and packaging of medicines; reproductive health; indigenous preventative and promotive health practices; diagnostic measures; curative and rehabilitative practices; management of diseases and health promotion; lifestyle and dietary preferences; the status of women; music, ancestral drumming and dance and their influence on wellbeing; spirituality; types of traditional healers; traditional leadership; patient management; palliative care; and maternal and child health. Although traditional health practices are considered to be primitive and backward, they continue to thrive due to their cultural importance among communities. In some communities, traditional healthcare practices are the only available healthcare services, given the prohibitive cost and inaccessibility of allopathic healthcare. It is estimated that to this day between 60 and 80% of patients in Africa consult indigenous health practitioners [41]. Despite years of colonization and the prohibition of indigenous health practices and their sciences, indigenous communities have not completely abandoned their ways of life, practices and beliefs [37]. For an outsider, this may be construed as being stubborn, backwards and ignorant of modern sciences and their achievements. For the local and indigenous communities, the allopathic health system has until now been unable to offer explanations for the onset of illness, the "Why me? Why now?" rationale which forms a crucial part of African indigenous understanding of health and healing [50]. In many instances the instructions by allopathic health practitioners not to mix allopathic medicine with traditional herbs confuse patients and do not achieve the desired effect [33]. Patients perceive that they are expected to abandon their indigenous practices and roots and become part of the western culture. If parity is to be reached, the two healthcare systems should embrace pluralism and respect the rights of choice for all communities. All parties should acknowledge that globalization created contemporary societies where there are different and coexisting competing health systems arising from different traditions, practices and bodies of knowledge. Although pluralism is now recognized as a global phenomenon, its application in colonized communities seems to remain a pipe dream. It will remain a challenge for as long as allopathic healthcare practitioners and students respectively provide and receive training based only on a western-orientated curriculum that excludes alternative methods of care acceptable to the indigenous communities. The worldviews that inform the current curriculum for allopathic healthcare practitioners are monolithic, hospital-centred and disease-oriented and exclude self-care or healing. Furthermore, the curriculum perpetuates health disparities and power imbalances that adversely affect patient outcomes [4,12,31].

Indigenous health systems as a living science

One of the common arguments by proponents of exclusive health systems is that "our value system, science of medicine and standard of care will be compromised if we recognize and accept indigenous health practitioners to treat our patients" [4]. There are three fundamental problems associated with this approach, which require elaboration.
Firstly, there is the mindset and attitude which suggest that patients and communities are owned by health providers. Secondly, there is the perception that allopathic medicine is the standard against which all health knowledge is measured. Lastly, there is the notion that for "others" (indigenous health practitioners) to exist, function and be accepted by communities, approval and support from allopathic health practitioners are required. A plausible explanation for these problems could be a lack of knowledge and understanding of indigenous health systems and their sciences. In life, what we do not know or understand does not mean that it does not exist, make sense or work; it may simply mean that one does not yet understand or comprehend the whole picture and/or has not yet been exposed to it. Indigenous health systems constitute a living science practised by indigenous health practitioners before and after colonization; they have a history, an origin, a philosophy and an epistemology [51][52][53]. The indigenous health system has its own level of excellence in providing an answer to "Why me? Why now?"; it has its own resources and a dynamic that carries communities forward. Indigenous communities consider it as knowledge inherent to their own identity, with its own science and technological advances beyond physical limitations. It is an institution in its own right, with consumers and pioneers [51,52].

Long before colonial rule invaded indigenous communities, indigenous health practices had developed and advanced to a level comparable to the allopathic health system. This is supported by a report by a Scottish medical anthropologist who witnessed indigenous surgeons in Buganda performing a cesarean section (Figure 1). The Scottish colonizers interacted with the indigenous communities and learnt from them while conducting studies through observation. This culminated in the publication of an article that appeared in the Edinburgh Journal of Medicine and in a dissertation titled "Ueber die Lage und Stellen bei der Geburt", which he submitted to the Marburg University in Germany in 1885 [53]. That article is now part of the Annals of Obstetrics and Gynecology history, describing in detail how the cesarean section was performed. He gave an illustration of the procedures and how they were carried out: anesthetic practices, aseptic measures, performance of the actual cesarean section, how the uterus was massaged and the delivery progressed, the final postoperative measures, and how the mother and baby responded were all included. From this it is evident that the procedures and the science practised by indigenous healthcare providers at that time compared well with the best standard of performing a cesarean section that existed in Europe. Research reports and dissertations published by medical anthropologists in 1885 confirmed that indigenous health practice is a field of health science practised by indigenous health practitioners with high ethical standards of care and value systems.

The question of what is defined as science, how it is practised and how the standard thereof is measured is worth exploring and explaining in this context. It is not disputed that science is an art, a pathway and a systematic process of finding solutions to societal problems. There are also different pathways of knowing and of finding solutions to problems facing communities.
Different communities explored different mechanisms, at different times during the development of their healthcare systems, through experimentation and testing of the efficacy of their different medicinal products, beliefs and practices. Some solutions are yet to be explored and discovered. Reports confirm that indigenous health practitioners had perfected the art of their sciences long before colonizers and missionaries introduced western medicine. Their processes of diagnosis and patient management are documented as being thorough, scientific and of a standard comparable to other practices. Despite what Felkin witnessed as being no different, in principle at least, from what modern doctors do, allopathic health practitioners of the twenty-first century do not recognize that indigenous health systems are a science and could play a significant role in existing health systems.

Several factors have contributed to the poor working relationship between the two systems. Key among them are the effects and impact of colonization, globalization and the commercialization of health and healthcare services as a commodity. Indigenous communities were encouraged to abandon their practices, beliefs and sciences. High levels of suspicion and mistrust supported the enforcement of laws that prohibited the use of indigenous medicines. There is no doubt that the impact of colonization extended beyond politics and the economic life of indigenous communities, for it disorientated and destabilized their psychosocial interactions with reality. There are perceptions that most scientific scholars raised and educated according to the western doctrine are unable to use their own worldview to interrogate and interpret the world and environment unless it meets the western worldview. They subscribe to western principles despite their limitations in African settings. While most of the colonized countries may have achieved political freedom from their erstwhile masters, the pervasive socioeconomic mindset persists and liberation from western scientific inclinations evades indigenous scholars.

Figure 2. Allopathic health practitioners of the twenty-first century applying principles, protocols and standards similar to those of the indigenous health practitioners reported by a medical anthropologist in 1885, during the delivery of a baby by cesarean section (Google source).

Exploring the indigenous epistemologies and sustainable collaborations

The author argues for the need for a different approach to collaboration with indigenous communities who have experienced centuries of colonization and the dehumanization of their traditional beliefs, health systems and practices. It is the author's view that postcolonial indigenous researchers should develop indigenous epistemologies and methodologies which dismantle, deconstruct and decolonize the Euro-western paradigms of thinking. This provides a platform for rethinking the indigenous health system, its philosophies and the sciences involved, through which a complete healthcare service was provided for centuries before colonization [54]. Although the two healthcare systems operate side by side at different levels of science, i.e. in their theories of disease causation and management of disease, a mutually agreed upon collaboration between the two systems could positively impact the establishment of a complete health system.
A new trajectory and respect for the views that an individual or a community holds on health and disease should be established; this will not only influence the interpretation of different health conditions and beliefs regarding the causation of diseases but will also determine the type of providers who are consulted for the management and restoration of health and the wellbeing of communities. A sustainable collaboration would require exploring approaches that eliminate the "come join us" attitude and the monopolistic health system of allopathic health practitioners who regard themselves as holding the gold standard against which all others are assessed.

Development of sustainable collaborations through decolonization processes

Studies show that the integration of allopathic and traditional medicine should include co-learning and mutual respect [19,55,56]. Traditional and allopathic healthcare practitioners already have common practices; for example, the physiotherapists' use of steam for inhalation therapy is similar to ukugquma, and the use of a warm towel compress is similar to ukuthoba. Midwives recommending alternative positions during delivery is similar to methods used by traditional birth attendants throughout the ages. Creating opportunities for collaboration and capacity development through training of allopathic healthcare practitioners in traditional healthcare practices is emancipatory, stimulates awareness and creates cultural sensitivity among allopathic healthcare practitioners [50,57]. Collaboration will create an opportunity to enhance the transfer of skills and the sharing of knowledge between the traditional and allopathic healthcare sectors [58,59]. It should translate into curriculum transformation through co-teaching, co-supervision and the transfer of knowledge on diagnostic measures applied by indigenous practitioners and on the preparation and packaging of traditional medicines. Through such a training process, trust will be fostered between the traditional and allopathic healthcare sectors, and co-operation will be facilitated, leading to the sharing of critical information and ultimately the empowerment of both types of healthcare practitioners [60].

Indigenous communities, through colonization, have been oppressed, stripped of human dignity and, in a sense, died inside a long time ago. Existing collaborations have failed to recognize the importance of redressing the inequalities of the past and of acknowledging the importance of indigenous knowledge [4]. It is believed that the experience gained when indigenous and allopathic health practitioners work alongside each other would result in lasting collaborations. The view has been expressed by indigenous scholars that decolonization of healthcare requires a change in mindset and the establishment of agendas that would allow for mutual exchange and recognition of indigenous knowledge [61]. Its success relies on a change in attitude, recognizing the value of indigenous health systems, beliefs and the Ubuntu spirit in African communities. The process of decolonization requires a participatory approach and commitment from all stakeholders [12,62,63]. It begins with demystifying traditional healthcare practices and with community empowerment through honest and open discussion about the need for allopathic healthcare practitioners to learn from indigenous health practitioners. The main objective is changing the mindset and attitudes of the colonized indigenous and allopathic health practitioners through a participatory process.
The demystifying stage involves the five phases of a decolonization process [4,12,62]: (1) rediscovery and recovery, (2) mourning, (3) dreaming, (4) commitment and (5) action (Figure 3).

Rediscovery and recovery process

This is the first phase in the process of decolonization. Allopathic healthcare practitioners are encouraged to rediscover and recover their historical cultural practices, languages and identities. They are to rediscover the many traditional practices, including traditional methods of preparation and packaging of medicines; reproductive health; indigenous preventative, promotive and diagnostic measures; curative and rehabilitative practices; management of diseases and health promotion; lifestyle and dietary preferences; the status of women; music, ancestral drumming and dance and their influence on wellbeing; spirituality; types of traditional healers; traditional leadership; patient management and palliative care; and maternal and child health. Similarly, the colonized indigenous practitioners and communities should rediscover, interrogate and question the current status of their practices. Rediscovery and recovery give the oppressed and colonized people the ability to decontaminate their minds and thought processes so that they can define their real world and the problems associated with it. Indigenous practitioners should decide on their terms of reference and rules for engagement among themselves and with others. In this case, allopathic healthcare practitioners go through the process of rediscovery and recovery by learning about existing traditional healthcare practices, languages and identities. This process is the cornerstone for sustainable collaboration.

Mourning the disrespect of indigenous medicine

This stage refers to the process of lamenting the injustices that have been done by colonization and how these have affected the self-esteem and image of the indigenous practitioners in the communities, including the impact on their practices and traditions. It has been argued to be an important part of healing and of preparing for moving forward. The years of assault upon and damage done to the minds of indigenous people, their traditions, values and belief systems have been reported in the literature. The scars from years of colonization and the indoctrination of African people to disown their own ways of living and of health practice are still evident years after achieving independence from colonizers. The perception that traditional beliefs and practices belong to the dark ages and uncivilized societies appears to have resulted in a refusal to accept indigenous health practitioners. Even the so-called educated and liberated middle-class African health professionals have not been prepared to free themselves of the limitations of colonization. "The main challenge is the existing negative perceptions you have about us. This is more prevalent among the educated and middle-class people… consult secretively, with skepticism, doubts and pride…." as quoted by an indigenous participant in the study by Nemutandani and others [4].

Dreaming process

The third decolonization process involves dreaming, in which the allopathic healthcare practitioners allow the traditional healers to educate them about the different possibilities of knowledge and skills that can still be helpful in offering alternative care. Two processes are required in the environment in which the dreaming should take place.
Commitment process

The allopathic healthcare practitioners should take on positions of activism to advocate for the incorporation of indigenous healthcare practices into the curriculum. They will therefore write monographs and textbooks to take the knowledge from tacit to explicit.

Action process

The last process in decolonization is the joint development of a plan of action by allowing indigenous health practitioners to build capacity among allopathic health practitioners. Dreams and commitments are translated into strategies for capacity building and skill transfer to ensure that the collaboration is sustainable. Existing collaborations between the two health systems that do not understand and acknowledge that the indigenous health system is a living science are not sustainable. Finally, there are reports which found that allopathic and indigenous medicine are compatible in their sciences of treating and managing patients. For example, allopathic health practitioners, using their existing biomedical knowledge of HIV-/AIDS-related illness, would set a course of treatment that emphasizes antiretroviral medications and hospital treatment. On the other hand, indigenous health practitioners, invoking existing knowledge of sicknesses caused by spirits, set a course of treatment that emphasizes herbal medicines, sacrifices and ritual ceremonies to appease ancestors. It can be argued that both approaches are typical of all medical systems in that they "frame problems in relation to the solutions they have to offer" and understand them according to the existing knowledge defined by their health system, whether in textbooks or through ancestors. In conclusion, any health intervention which disregards the existing community health beliefs, traditions and cultural practices is likely to be resisted by communities, passively if not openly, through the creation of parallel systems acceptable to the communities. Despite the existing bias against indigenous health practitioners and the negativity associated with consulting them, collaboration between allopathic health practitioners and indigenous health practitioners in the management of patients is certainly possible.

Reflection: Why is it that indigenous health sciences are not incorporated in the curriculum of most health professional training institutions in Africa? Despite the strong beliefs and practical experiences of academics and students who are themselves products of indigenous systems, few seem capable of associating with them. One could conclude that the prevailing educational system does not encourage either students or academics to think for themselves, but rather to follow the path traveled by their Euro-centric predecessors, despite knowing well that their environment is different. There seems to be a deeply embedded western paradigm of reasoning among members of human research committees, who seem to be fixated on whether similar research has been done and whether tried and tested methods are being followed.

Reflection: For a long time, when we go out for research, if we are honest enough, what we are gathering or went out for is a collection of existing information and raw data. It is only when we process it in our university (standards) that we call it knowledge. There are many of us who still go out and do research that way; it is the habit of the heart and mind, and the habit of relating to people, society and healers as objects.

© 2020 The Author(s). Licensee IntechOpen.
This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Phylogenetic Analysis of European Brown Hare Syndrome Virus Strains from Poland (1992–2004)

European brown hare syndrome (EBHS) is lethal to several species of free-living hares worldwide. The genetic characterization of its virus (EBHSV) strains in European circulation and the epidemiological knowledge of EBHSV infections are not yet complete. The study determined the nucleotide sequences of the genomes of EBHSV strains from Poland and analyzed their genetic and phylogenetic relationships to a group of hare lagoviruses. The genomes of five virus strains detected in Poland between 1992 and 2004 were obtained by RT-PCR and sequencing of the resulting amplicons. The genetic relationships of the EBHSV strains were analyzed using the full genome and VP60 gene sequences. Additionally, the amino acid sequence of the VP60 gene was analyzed to identify mutations specific to recognized EBHSV subgroups. Partial amplification of the virus open reading frame (ORF)1 and ORF2 regions yielded nearly complete nucleotide genome sequences of the EBHSV strains. Phylogenetic analysis placed them in a GII.1 cluster with other European strains related to nonpathogenic hare caliciviruses. VP60 gene analysis allocated these EBHSV strains to the G1.2, G2.2–2.3 or G3 virus genetic groups. The amino acid sequence differences in the entire genome ranged from 1.1 to 2.6%. Compared to a reference French EBHSV-GD strain, 22 variable amino acid sites were identified in the VP60 region of the Polish strains, but only six were in VP10. Single amino acid changes appeared in different sequence positions among Polish and other European virus strains from different genetic groups, as well as in VP10 sequences of nonpathogenic hare caliciviruses. The results of the study showed a high genetic homogeneity of EBHSV strains from Poland despite their different locations of occurrence and initial detection times. These strains are also phylogenetically closely related to other EBHSV strains circulating in Europe, likely confirming the slow evolutionary dynamics of this lagovirus species.

The first genetic and phylogenetic analyses of EBHS virus strains focused on short fragments (265-398 bp) of the VP60 gene that encodes the capsid structural protein [2,28,29]. They showed the phylogenetic distinctiveness of EBHSV from RHDV and indicated high strain similarity within each calicivirus species. Despite the small number of EBHSV sequences available, phylogenetic variability of EBHSV strains associated with their geographical origin and time of detection has been observed [29]. Analysis of complete VP60 gene sequences also disclosed temporal- and geographic-related differences and revealed the existence of two main genetic groups of EBHSV: group A, encompassing Swedish EBHSV strains detected before 1989, and group B, with virus strains detected after 1989 in the rest of Europe [30]. Contrary to these results, analyses of partial VP60 gene fragments of approximately 200 bp have shown at least three independent directions of the virus' spread from Scandinavia across Europe [1,3,5,8,16,19,25,28]. This finding has confirmed the slow evolutionary dynamics of EBHSV, evidenced by the low genetic variability of the strains and the formation of new genetic subgroups of limited geographical range [8,28,31,32]. The variability of the EBHSV strains could be related to the evolution of RHDV, in which recombination events between different virus types, including nonpathogenic hare caliciviruses, have been observed [23].
No similar type of recombination encompassing the capsid protein gene has been observed among EBHSV strains so far [30]. Nevertheless, it was mapped in the genome region encoding the nonstructural proteins, resulting in the emergence of EBHSV/RHDV2 recombinants [33]. Data on the epidemiology of EBHSV infections and on the virulence of strains circulating in the European hare population are very limited. Gaining a full understanding of these aspects of EBHSV is circumscribed by the difficulties in growing it in cell cultures. These difficulties significantly hamper research related to the assessment of the virus' variability and recognition of the role and functions of particular genes in the adaptation of the virus to new hosts. The purposes of the study were to determine the genome nucleotide sequences of the EBHSV strains from Poland, and to identify conserved or variable single-nucleotide polymorphisms (SNPs). Viral phylogenetic relationships to previously detected EBHSV strains in Europe were also investigated. In order to determine the genetic affiliation of the Polish strains to currently identified EBHSV subgroups, a nucleotide sequence analysis of the VP60 gene was conducted. The amino acid sequences of this gene were also analyzed to identify the presence of mutations specific to the virus subgroups.

EBHSV Strains and Lagovirus Sequences

Five EBHSV strains (NP1192, L98, K501, G104, and K204) detected in hares from 1992 to 2004 were used for the genetic analysis. These virus strains now belong to the strain collection of the National Reference Laboratory for Rabbit Myxomatosis at the National Veterinary Research Institute in Puławy, Poland. Infected animals were found dead in different geographical locations across Poland, and liver and spleen samples were taken from the carcasses. The presence of EBHSV in the animals was confirmed by testing the tissue samples using an RHDV-EBHSV CR Mab ELISA kit (IZSLER, Brescia, Italy).

Amplification of the EBHSV Genome

To determine the genome sequences of the Polish virus strains, amplification of overlapping genome fragments was conducted using primers specific to the EBHSV genome (Supplementary Table S1). Additionally, several sets of primers were designed based on the reference EBHSV-GD89 (Z69620) sequence to cover the missing regions of the virus genome using the primer-BLAST tool (https://www.ncbi.nlm.nih.gov, accessed on 22 September 2018). Primers were synthesized by Genomed S.A. (Warsaw, Poland). EBHSV RNA was extracted from animal tissues using an RNeasy Mini Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. Subsequently, for amplification of individual EBHSV genome fragments, the OneStep RT-PCR Kit (Qiagen, Hilden, Germany) was employed. The reactions were conducted in 50 µL of PCR mixture containing 2 µL of enzyme mix, 1× buffer, 400 µM of dNTPs, 0.6 µM of the appropriate forward and reverse primer pair, 10 U of RNAsin (Invitrogen, Waltham, MA, USA), 5 µL of RNA template and redistilled water to the required reaction volume. The following amplification protocol was used: reverse transcription at 50 °C for 30 min and initial denaturation at 94 °C for 30 s, followed by 35 cycles consisting of denaturation at 94 °C for 30 s, primer annealing (from 54 °C to 58 °C depending on the primer set; Supplementary Table S1) for 45 s and extension at 72 °C for 1 min, and a final elongation step at the same temperature for 10 min. PCRs yielded products varying in length from 265 to 858 bp.
Sequencing and Phylogenetic Analysis

PCR amplicons were visualized in 1.5% agarose gel, purified and directly sequenced in both directions using the ABI Prism BigDye Terminator v3.1 Cycle Sequencing Kit on an ABI 3730XL DNA sequencer (Life Technologies, Carlsbad, CA, USA) at the Genomed S.A. sequencing service. For comparative analysis of the nucleotide sequences of EBHSV strains, BLASTn software was used (accessed on 22 January 2021) [34,35]. The sequences were aligned using Clustal W [36]. The phylogenetic trees were constructed based on 15 full genome, 51 VP60 and 15 VP10 complete gene sequences of EBHSV using the Maximum Likelihood method with the Tamura-Nei model with uniform rates, and the Nearest-Neighbor-Interchange (NNI) method for topology searching, in MEGA version 7.0.26 [37]. This tool has also been employed in recent studies on lagovirus phylogeny [38,39]. Branch support was estimated using 1000 bootstrap replicates. The phylogenetic relationship among the sequences analyzed was considered reliable when the bootstrap value was ≥70%. The obtained genome sequences of the Polish virus strains were deposited in GenBank under accession numbers MK440613-MK440617.

Sequence Analysis of EBHSV Strains from Poland

The amplicon sequences were merged into the nearly complete genome sequences of EBHSV L98 (7424 bp), NP1192 (7422 bp), G104 (7414 bp), K204 (7409 bp) and K501 (7330 bp), lacking only the initial twenty-nucleotide fragment upstream of the 5' untranslated region. Only the K501 sequence did not contain the 3' UTR fragment. The highest sequence similarities in the entire genome (97.6%), the VP60 gene (97.8%) and the nonstructural protein (NSP) gene (97.6%) were observed between the K501 and L98 strains. However, the lowest genetic resemblances, corresponding to 93%, 93.4% and 92.8% nucleotide identity in the same genome fragments, were between the K204 and NP1192 strains. In the ORF2 region encoding the structural VP10 protein, 98.8% sequence similarity between EBHSV K501 and NP1192 was found despite the nine-year difference between their first detections. Although detected at similar times, K204 and G104 revealed lower sequence similarity (Table 1). At 98%, the highest nucleotide sequence similarity of the whole virus genome was observed between NP1192 and the GD89 (Z696290) reference EBHSV strain. When each genome fragment was analyzed separately, the sequence genetic resemblances were 99.4% (VP60), 99.4% (RNA-dependent RNA polymerase), 97.0% (NSP), and 99.1% (VP10). The other Polish virus strains were 93-94% similar to EBHSV GD89 in the whole genome. The comparison of the NP1192 ORF1-ORF2 sequence with the Swedish O4022-10 reference virus strain gave 94% sequence similarity. It ranged from 91 to 92.5% for the other Polish strains, although the individual NSP, VP60, and VP10 virus segments showed a relatively higher (91-94%) sequence identity.
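The pairwise identity values reported above and in Table 1 come from comparing aligned sequences position by position. As a rough illustration only (the study itself used BLASTn comparisons and Clustal W alignments rather than ad hoc code), the sketch below shows how percent nucleotide identity between two pre-aligned sequences can be computed in Python; the sequence fragments and variable names are invented placeholders, not real EBHSV data.

```python
# Minimal sketch: percent nucleotide identity between two pre-aligned sequences.
# Assumes the sequences were aligned beforehand (e.g., with Clustal W) and have
# equal length; columns containing a gap in either sequence are skipped.
# The fragments below are placeholders, not real EBHSV sequences.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Return % identity over columns where neither sequence has a gap."""
    if len(seq_a) != len(seq_b):
        raise ValueError("aligned sequences must have the same length")
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" or b == "-":
            continue  # skip alignment gaps
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared if compared else 0.0


# Toy comparison of two placeholder fragments:
fragment_1 = "ATGGCGTTCACC-GATTACA"
fragment_2 = "ATGGCGTTTACCAGATTACA"
print(f"identity: {percent_identity(fragment_1, fragment_2):.1f}%")  # ~94.7%
```

Applied over whole genomes or individual gene regions (NSP, VP60, VP10), the same per-column comparison yields the region-by-region identity figures quoted in the text.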
Analysis of the Lagovirus Phylogenetic Relationships Based on ORF1-ORF2 Genome Sequences

The phylogenetic analysis of genome sequences of EBHSV strains encompassing the ORF1 and ORF2 region (nucleotide positions 21-7350 according to the reference EBHSV-GD (Z69620) strain) assigned them to one common cluster including strains from group A and group B (Figure 1). In the group of EBHSV strains from Poland, NP1192 clusters together with the French reference EBHSV-GD. The L98, K501, K204 and G104 EBHSV strains form one genetic cluster (bootstrap value 100%) jointly with the Italian Wolf17-2016 EBHSV sequence. Within this virus group, two subclusters are formed, one containing the three Polish strains L98, K501, and K204 and the other the G104 strain and virus sequences detected in 2019-2020 in Germany. The results of the phylogenetic analysis of the whole virus genome (ORF1-ORF2) and its segments encoding the VP60 and VP10 structural proteins indicate the distinctiveness of G104 from the group of the other EBHSV strains from Poland. Nevertheless, some relatedness was observed to younger European virus strains originating from France and Germany. In the VP60 region, the nucleotide sequence differences between G104 and the other virus strains from the G1.3 and G1.1 subgroups, including EBHSVs from group A, did not exceed 9.7%, while they were 5% for strains clustered in the G2 group. The highest nucleotide sequence similarity of 99.4% was found between G104 and the 0836 (HF571040) French strain detected in 2008. In the case of the EBHSV strains from France, Sweden, Italy, and Germany clustered together in the G3 group, the nucleotide sequence differences were in the range of 1.3-3.5% (Supplementary Table S2).

Figure 1. The phylogenetic Maximum Likelihood tree constructed using the ORF1-ORF2 (21-7350 bp) nucleotide sequences of European EBHSV strains and other hare and rabbit caliciviruses. Bootstrap values (1000 replicates) greater than 70% are shown at the corresponding tree nodes. The tree is drawn to scale with branch lengths in the same units as those of the evolutionary distances used to infer the phylogenetic tree. Evolutionary analyses were conducted using MEGA7 [37]. The rabbit hemorrhagic disease virus RHDV (GI.1c), EBHSV GII.1/RHDV2 GI.2 recombinants, and rabbit calicivirus RCV-A1 (GI.4) were used as an outgroup to root the tree. EBHSV strains from Poland are marked by black dots.

Nucleotide and Amino Acid Sequence Analysis of the Structural Genes of EBHSV Strains from Poland and Elsewhere in Europe

The phylogenetic analysis conducted based on the VP60 and VP10 sequences showed that the EBHSV strains from Poland belonged to the G1, G2, or G3 virus genetic groups (Figures 2 and 3). The differences in the amino acid sequences between the Polish virus strains range from 1.1 to 2.6% in the entire two-open-reading-frame genome. For the genome fragment encoding the VP60 protein, the variability of the amino acid sequence is 0.9-2.7%; however, for the VP10 fragment it is higher than that observed for the entire genome, at 0.9-5.5%.
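Variable amino acid sites of the kind reported below and summarized in Table 2 are typically identified by walking two aligned protein sequences and recording every position at which the residues differ, using the conventional reference-position-query notation (e.g. E3D). The Python sketch below illustrates that comparison under the assumption of pre-aligned, equal-length sequences; the short reference and query strings are invented placeholders, not the actual EBHSV-GD or Polish VP60/VP10 sequences.

```python
# Minimal sketch: list amino acid substitutions of a query protein relative to a
# reference, in "E3D"-style notation (reference residue, 1-based position, query
# residue). Both sequences are assumed to be pre-aligned to the same length;
# the strings below are invented placeholders, not real VP60/VP10 data.

def substitutions(reference: str, query: str) -> list:
    if len(reference) != len(query):
        raise ValueError("sequences must be aligned to the same length")
    changes = []
    for pos, (ref_aa, qry_aa) in enumerate(zip(reference, query), start=1):
        if ref_aa != qry_aa and "-" not in (ref_aa, qry_aa):
            changes.append(f"{ref_aa}{pos}{qry_aa}")
    return changes


reference_fragment = "MENSKLLAQ"  # placeholder reference
query_fragment     = "MDNSKLLAQ"  # placeholder query with one difference
print(substitutions(reference_fragment, query_fragment))  # -> ['E2D']
```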
In comparison to the VP60 region of the reference EBHSV-GD strain, 22 variable amino acid sites were identified in the Polish strains (Table 2). In comparison to the Z69620 VP10 reference amino acid sequence, EBHSV strains from Poland and elsewhere in Europe were characterized by changes in six amino acids. Substitutions were identified at four amino acid positions (3, 64, 94, and 113). The E3D substitution was found in the three strains L98, G104, and K204; the N64S change appeared in the pair L98 and G104, and in the N64G form also in K204. The N94S change occurred in the L98, G104, and K204 strain sequences, whereas the N113D change was present in the K501 and G104 strains. Amino acid changes at positions G60S and L85F were present in the Polish G104 strain; however, they also appeared in EBHSV strains detected in 2019 and 2020 in Germany. Analysis of the genomes of nonpathogenic GII.2, GII.3, and GII.4 hare caliciviruses proved that the same change at position 60 was also present in the HaCV Bs12-1 sequence and, in the G/N form, also in the HaCV-A1 sequence. The L85F substitution occurred in all sequences of the analyzed nonpathogenic hare caliciviruses.

Table 2. Amino acid substitutions in the VP60 region of EBHSV strains from Poland, compared to the reference EBHSV-GD89 strain and representatives of the EBHSV genetic groups; for each strain the table lists the accession number, country and year of detection, genetic group and the residues at the 22 variable VP60 positions (strains listed include 0330 (AM408588), E14-40/2 (LT168848), 08-36 (HF571040) and the German strains L03596/2019, L03594/2020, L03613/2019, L03475/2019, L03476/2019 and L03477/2019).

Discussion

The population of hares has decreased sixfold over the last 30 years in Poland. Among the factors responsible for the significant reduction in animal numbers are heavy predation, poaching, and the occurrence of infectious diseases, with EBHSV infections being considered the main cause of death [40]. Additionally, over the whole continent of Europe, a constant decline in the populations of hares and other related species has been observed due to EBHSV outbreaks [1]. A new threat is the RHDV2 rabbit lagovirus, which crossed the species barrier, causing infections in hares with a course similar to RHD [41][42][43][44][45][46]. Although in this study a phylogenetic analysis of EBHSV strains utilizing the complete sequences of the virus genome was carried out, only the VP60-based analyses revealed deep phylogenetic relationships among EBHSV strains. In this context, EBHSV strains from Poland were clustered into the G1, G2, and G3 lineages within the larger virus group B, together with other virus strains from Europe detected in similar time spans [32].
In contrast to virus group A, which encompasses the oldest strains from the Scandinavian Peninsula which disappeared in the late 1980s, group B covers strains detected in Europe more recently [30]. These findings are consistent with previous partial VP60 sequence analyses of Polish EBHSV strains [18,47,48]. Subsequent deeper analysis of the VP60 sequence of the NP1192 strain (G1 group) confirmed its evolutionary relationships with the phylogenetically older French EBHSV-GD89 and B-EBHSV-6 strains as well as with the Swedish V58 strain belonging to the G1.2 subgroup. However, the NP1192 strain did not reveal mutations in codons 406, 407, or 469, which are characteristic of EBHSV strains in the closely related G1.3 subgroup. The four EBHSV strains L98, K501, K204, and G104 belong to the G2 and G3 groups together with French and other European EBHSVs detected since 1999. The phylogeny of these Polish virus strains was also confirmed by analysis of the amino acid sequences of the VP60 gene at the mutational hot spots (aa 427 V/T, 522 V/I, and 524 M/L) specific to G2 viruses [32]. The fifth strain, G104, is related to a much younger virus lineage consisting of Swedish and French strains discovered from 2008 to 2014, as well as to the newer German strains from 2019 to 2020. The close genetic relationships between EBHSV strains from Poland and Germany may have resulted from the geographical proximity of their detection sites and could indicate the common routes of virus spread in a given geographical area. The profile of amino acid sequences deduced from the nucleotide sequences of the EBHSV strains from Poland in the mutational hot spot positions of VP60 reflects the division of EBHSV strains into the genetic groups previously identified among French strains [32]. The V427, V522, and M524 amino acids characteristic of the G1 group are present in the NP1192 strain. The remaining Polish strains, L98, K501, K204, and G104, contain T, I, and L amino acids in their sequences, which are typical of French, Swedish, and Italian EBHSV strains forming the G2 group as well as strains of the G3 group [30]. Despite the Polish strains described in this study belonging to different genetic virus groups, they seem to represent the same serotype, which can still be detected using currently available ELISA kits (data not shown). Although in other studies distinct antigenic profiles of EBHSV strains have been observed, which could mirror changes in VP60 physiochemical properties, the phenotypic effect was still uncertain [30]. Analysis of the ORF2 genome sequences of EBHSV strains from Poland and elsewhere in Europe from 2016 to 2020, and of nonpathogenic HaCV GII.2-4 caliciviruses, indicates the presence of mutations at amino acid positions 60 and 85 in the VP10 structural protein gene. Among the EBHSV strains from Poland, the amino acid substitutions G60S and L85F occur only in the G104 sequence from 2004. They are also characteristic of much younger EBHSV strains from Germany from 2019 to 2020, although they did not appear in the sequence of the Wolf-17 strain from Italy from 2016. These mutations reflect changes observed in the structural part of the virus genome and can be considered as indirect evidence for the evolutionary processes related to the emergence and spread of a new genetic lineage of the virus strains in the hare population in Poland. 
The observed variability of the amino acid sequence and the results of the phylogenetic analysis of the ORF2 region, supported by the phylogeny of VP60 and the entire genome, confirm G104's relationship to EBHSV strains circulating in Europe between 2008 and 2020. Moreover, these results support the assumption that the ORF2 region can be employed in tracking the evolution of EBHSV strains to a greater extent than previously thought. The analysis of the shared phylogenetic resemblance of the Polish virus strains indicates their low genetic variability and likely slow evolutionary dynamics compared with other lagovirus species such as RHDV. Low genetic diversity among the European EBHSV strains has previously been observed [28,29,31,32]. The virus seems to be more conserved; however, the evolutionary process still occurs and has resulted in the disappearance of the oldest group A strains and the emergence of several virus lineages within the new genetic group B [30]. Evidence for the slow evolutionary dynamics of the virus could be the presence of only a relatively small number of new virus variants within the existing genetic subgroups. A possible slow evolution mechanism may also be supported by the close phylogenetic relationships observed between Polish strains from 2001 to 2004 and a much younger Italian EBHSV strain detected in 2016. The close genetic relationship of EBHSV strains is mainly determined by their geographical origin rather than by the time span in which their first isolation was achieved [32]. Another factor which may have an impact on evolution dynamics and the emergence of new EBHSV variants is infection of hares with nonpathogenic HaCV and RHDV2, which are phylogenetically related to EBHSV. Nevertheless, the observed lower genetic variability among EBHSV strains in comparison to RHDV could also be associated with the small size of the hare population, which limits the frequency of interspecies transmission events, and with the possible natural resistance of those species to EBHSV infection [1,10,15,31,32,40]. It must be noted that, based on ORF1-ORF2 sequence analysis of Polish RHDV variants and EBHSV strains, there were no recombination events observed between these two virus species (unpublished data). Likewise, the analyses of the VP60 and VP10 EBHSV sequences, as well as of available full genome sequences of lagoviruses, did not reveal any evidence confirming this recombination phenomenon in the group of Polish EBHSV strains. Furthermore, they formed a separate phylogenetic branch in relation to pathogenic RHDV (GI.1) and RHDV2 (GI.2), as well as to nonpathogenic RCV-A1 (GI.4), which were clustered as outgroup sequences. In addition, there is no vaccine available against EBHSV infection, so mutation is not driven by vaccine-related selective pressure on the population of wild virus strains [1,32]. This situation significantly diminishes the risk of the appearance of virus variants able to evade the host's immune system. Nevertheless, the detection of new genetic subtypes of nonpathogenic hare and rabbit caliciviruses in Europe and Australia emphasizes the role of asymptomatic infections in fostering the genetic variability of lagoviruses [38,39], and it can be assumed that these infections could enhance the further differentiation of EBHSV strains. A recombination mechanism analogous to that previously observed for nonpathogenic RCV and RHDV2 could also differentiate EBHSV [23,38,42].
The main limitation of any study aiming to analyze the genetic variability of EBHSV is the small number of full genome sequences of the virus available. This also hinders the assessment of their phylogenetic relationships and the investigation of lagovirus evolutionary paths. Nevertheless, a certain degree of genetic diversity among the Polish strains was established, allowing for their classification into the G1, G2 or G3 genetic group.

Conclusions

The EBHSV strains from Poland were confirmed to be closely phylogenetically related to other EBHSV strains circulating in Europe. The results of this retrospective study also provide evidence that they show high genetic resemblance despite their different locations of occurrence and detection time spans.
Novel endolysin LysMP for control of Limosilactobacillus fermentum contamination in small-scale corn mash fermentation

Background: Traditional bioethanol fermentation industries are not operated under strict sterile conditions and are prone to microbial contamination. Lactic acid bacteria (LAB) are often pervasive in fermentation tanks, competing for nutrients and producing inhibitory acids that have a negative impact on ethanol-producing yeast, resulting in decreased yields and stuck fermentations. Antibiotics are frequently used to combat contamination, but antibiotic stewardship has resulted in a shift to alternative antimicrobials.

Results: We demonstrate that endolysin LysMP, a bacteriophage-encoded peptidoglycan hydrolase, is effective for controlling the growth of LAB. The LysMP gene was synthesized based on the prophage sequence in the genome of Limosilactobacillus fermentum KGL7. Analysis of the recombinant enzyme expressed in E. coli and purified by immobilized metal chelate affinity chromatography (IMAC) showed optimal lysis activity against various LAB species at pH 6, with stability from pH 4 to 8 and from 20 to 40 °C for up to 48 h. Moreover, it retains more than 80% of its activity at 10% ethanol (v/v) for up to 48 h. When LysMP was added at 250 µg/mL to yeast corn mash fermentations containing L. fermentum, it reduced the bacterial load by at least 4-log fold compared to the untreated controls and prevented stuck fermentation. In comparison, untreated controls with contamination increased from an initial bacterial load of 1.50 × 10⁷ CFU/mL to 2.25 × 10⁹ CFU/mL and 1.89 × 10⁹ CFU/mL after 24 h and 48 h, respectively. Glucose in the treated samples was fully utilized, while untreated controls with contamination had more than 4% (w/v) remaining at 48 h. Furthermore, there was at least a fivefold reduction in lactic acid (0.085 M in untreated contamination controls compared to 0.016 M treated), and a fourfold reduction in acetic acid (0.027 M in untreated contamination controls vs. 0.007 M treated), when LysMP was used to treat contaminated corn mash fermentations. Most importantly, final ethanol yields increased from 6.3% (w/v) in untreated contamination samples to 9.3% (w/v) in treated contamination samples, an approximately 50% increase, to levels comparable to the uncontaminated controls (9.3% (w/v)).

Conclusion: LysMP could be a good alternative to replace antibiotics for the mitigation of LAB contamination in biofuel refineries.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13068-023-02400-5.
Background

The biofuel industry has experienced steady growth over the past few decades due to the increasing cost of petroleum and renewable fuel standards. By August 2022, bioethanol production capacity in the United States had surpassed 15.6 billion gallons and is projected to reach 60 billion gallons per year by 2030 to meet the increasing need for ethanol blending in gasoline [1,2]. Bacterial contamination remains a major concern at bioethanol fermentation facilities, as chronic and acute contamination occurs frequently, leading to stuck fermentation [4,5]. These events often necessitate the shutdown of facilities for cleaning and result in significant economic loss due to contamination [6,7]. Microbial analysis of these facilities has shown that lactic acid bacteria (LAB) are the most prevalent contaminant [5,6]. More specifically, Limosilactobacillus fermentum and phylogenetically related heterofermentative LAB species have been identified as major culprits causing stuck fermentation across bioethanol facilities [6,8]. To minimize contamination at the facilities, hop acids, chlorine gas and antibiotics are often used prophylactically or to treat active contamination [9][10][11]. Antibiotic residues in distillery products, such as distillers dried grains, have been reported, raising concerns about potential antibiotic resistance and contamination in animal feed [12,13].

Endolysins are lytic enzymes of bacteriophages that, with the help of holins (pore-forming transmembrane proteins), lyse infected host cells and release viral particles during a lytic infection cycle [14]. Endolysins have the potential to be an excellent alternative to antibiotics because bacteria theoretically are unable to develop resistance against endolysins and they target a narrow group of microbes [15,16]. Endolysins against Gram-positive bacteria typically contain a cell wall enzyme activity domain (EAD) for cell wall lysis and a cell wall binding domain (CBD), while endolysins against Gram-negative bacteria generally lack the CBD domain [17,18]. In this work, we searched for potential endolysin genes using an endolysin-specific domain associated with prophage sequences found in L. fermentum KGL7, isolated from rice beverages of a Northeast Indian tribe (SRA accession number: PRJNA563537) [19]. LysMP, with a putative GH25 enzymatically active domain and an SH3b homologue cell wall binding domain, was identified, synthesized and recombinantly expressed in Escherichia coli. The purified endolysin was characterized for lytic activity against various LAB strains, optimum activity under various conditions, and its potential to be used as an antimicrobial agent in biorefineries to mitigate LAB contamination.

LysMP concentration-dependent lytic activity

Growth delay was observed when different concentrations of purified LysMP (75, 100, 150, 200 and 250 µg/mL) were used to treat the target bacterium, L. fermentum 0315-25 (Fig. 2A). When compared to the no treatment control at the end of 24 h (OD600 = 1.6), the growth of bacteria was significantly suppressed (OD600 = 0.4) and delayed at the 250 µg/mL concentration (Fig. 2A). The lytic activity of LysMP was validated with a spot plate assay by aliquoting 5 µL (1 µg/µL) of purified endolysin onto polyacrylamide gel polymerized with L. fermentum 0315-25. The visual observation of the zone of clearing with the purified LysMP validated the lytic potential of the endolysin when compared to the PBS negative control (Fig. 2B).
The endolysin's lytic activity was further confirmed with a bacterial viability assay using SYTOX® dye and counting cells with a Cellometer cell counter. SYTOX® nucleic acid stain penetrates compromised bacterial cell membranes but does not cross the intact cell membranes of live cells. The treatment control showed very few dead cells (< 3%), while more than 90% of cells stained positive with SYTOX after 30 min of LysMP (100 µg/mL) treatment (Fig. 2C), which further confirmed the lytic activity of LysMP against the target bacteria.

Biochemical characterization of LysMP

The ionic strength of the buffer is one of the key parameters for optimizing the activity of an endolysin [23]. The LysMP activity was examined in the presence of 0.1-1 M salt concentrations (Fig. 3A). The optimal range of sodium concentration was determined to be from 0.3 to 0.5 M, with the biggest difference being between 0.1 and 0.5 M NaCl (p < 0.01). Further increase of the NaCl concentration from 0.5 to 0.75 M (p < 0.05) and 1 M (p < 0.01) resulted in a significant decrease in LysMP lytic activity. No significant differences in the enzymatic activity were observed between 0.5 and 1 M NaCl.

LysMP endolysin stability under fermentation environmental stress

We examined the stability profile of LysMP by subjecting the purified enzyme to stress conditions commonly found in industrial bioethanol fermentation facilities. The enzymatic activity was analysed by bacterial viability assay as described in the method section following treatment under a range of temperatures, pH values and ethanol concentrations for 0, 24, and 48 h. Thermostability of the enzyme was examined from 4 to 95 °C for at least 48 h (Fig. 3B). LysMP was stable in the range of 20-37 °C for at least 24 h. A significant decrease in enzymatic lytic activity was observed at 48 h when compared to 0 h at 20 °C, 30 °C, and 37 °C (p < 0.0001). There were no significant differences in LysMP activity when comparing different time points (0, 24 and 48 h) at 50 °C. A significant decrease in lytic activity, to around 20%, was observed and activity continued to deteriorate as temperatures were increased to 60 °C and 95 °C for up to 48 h. Under various pH conditions, LysMP performed optimally at pH 6, where no significant loss in lytic activity was observed up to 48 h (Fig. 3C). The enzyme was functionally stable in the range of pH 4-8 for up to 24 h. The relative lytic activity was significantly reduced to about 55% at pH 4. Under all pH conditions tested at the 48-h time point, LysMP's lytic activity was significantly impacted when compared to the 0 h and 24 h time points (p < 0.0001), except at pH 6. When purified LysMP is exposed to ethanol, it is at least 80% stable at a 10% (v/v) ethanol concentration for up to 48 h (Fig. 3D). The lytic activity significantly decreased starting at 20% (v/v) ethanol, and at a 30% (v/v) ethanol concentration values dropped to less than 40% within the first hour (0 h) and substantially decreased to below 10% activity at 24 and 48 h when compared to the 0-h time point (p < 0.0001).

Divalent cation binding prediction of LysMP

The LysMP activity in the presence of divalent cations was examined (Fig. 4). The growth of L. fermentum was measured for 9 h at 37 °C in the presence of the divalent cations Ca2+, Cu2+, Fe2+, Mg2+, and Zn2+ at 0.1 mM and 1 mM concentrations, with and without the presence of LysMP (Fig. 4A-E). Significantly more growth was observed with the addition of Ca2+ at 0.1 mM (p < 0.05) and 1 mM (p < 0.0001) compared to LysMP alone (Fig. 4A).
When both LysMP and Ca2+ were added as treatment, a significant reduction of L. fermentum was observed at 0.1 mM (p < 0.0001), but not at the 1 mM concentration. Similar trends were observed when comparing the endolysin-only treatment to the addition of Cu2+ at 0.1 mM and 1 mM (p < 0.0001) and its cation-only controls (0.1 mM and 1 mM without endolysin; Fig. 4B). LysMP showed optimum activity in the presence of 1 mM Fe2+ when compared with the LysMP-only control (p < 0.0001; Fig. 4C). While improved lytic activity was observed in the presence of 0.1 mM Mg2+ (p < 0.0001; Fig. 4D), no significant effect was detected with the addition of Zn2+ at any concentration (Fig. 4E).

Synergistic effect of combined endolysins in infected corn mash

The introduction of crude lysate LysMP (~ 250 µg/mL; MP) and crude lysate LysKB317 (~ 250 µg/mL; KB), both independently and in combination (~ 250 µg/mL; MP + KB), resulted in significant differences (p < 0.0001) in glucose utilization, ethanol generation, and lactic acid and acetic acid concentrations compared to the infection control (Y + L; Fig. 8A-D). Bacterial loads were below the detection limit in the yeast-only control (Y) and the crude lysate treatments (MP, KB, MP + KB; Fig. 8E). A significant difference (p < 0.0001) in bacterial load for the infection control (Y + L) was observed (8.7 log CFU/mL) compared to the endolysin treatments and control at the end of 48 h (below detection). No synergistic nor additive effect was observed between the addition of crude LysMP and crude LysKB317 (Fig. 8).

Discussion

Amidst the global crisis of antimicrobial resistance and the heavy emphasis on responsible use of antibiotics and global collaboration on antimicrobial stewardship programs, recent research has shifted focus towards antibiotic alternatives to combat antibiotic-resistant bacteria [24]. This has drawn attention to the development of bacteriophages and their encoded peptidoglycan hydrolases, also known as endolysins, which specifically target and lyse bacterial cell walls [25] without the development of bacterial resistance [15,17]. Research on endolysins has mostly been focused on human and animal health [26,27], but the use of endolysins to prevent bacterial contamination in fuel ethanol production has been shown to be an effective strategy [1,16]. Furthermore, yeast cell surface display of endolysins [39] and direct yeast secretion of endolysins [40] have the potential to improve control of contaminants.

Fig. 5. LysMP activity against various LAB strains. LysMP at 100 µg/mL was tested for activity against the various lactic acid producing species and bioethanol isolates. Activity of LysMP was measured by percent (%) growth inhibition of susceptible bacteria strains. Asterisk (*) strains are below the detectable limit. Data are means ± standard deviations with three independent biological replicates (n = 3).
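Percent growth inhibition of the kind plotted in Fig. 5 is commonly derived from endpoint optical density readings of treated versus untreated cultures, e.g. inhibition = (OD_untreated − OD_treated) / OD_untreated × 100. The Python sketch below shows one plausible way to compute it; the exact formula used in the study is not spelled out in the text, and the blank-correction step and example numbers are illustrative assumptions.

```python
# Minimal sketch: percent growth inhibition from endpoint OD600 readings, assuming
# inhibition = (OD_untreated - OD_treated) / OD_untreated * 100 after blank
# correction. The readings and blank value below are illustrative placeholders.

def percent_inhibition(od_untreated: float, od_treated: float, od_blank: float = 0.0) -> float:
    """Blank-correct both readings and return % growth inhibition (clamped at 0)."""
    control = od_untreated - od_blank
    treated = od_treated - od_blank
    if control <= 0:
        raise ValueError("untreated control must show growth above the blank")
    return max(0.0, (control - treated) / control * 100.0)


# Using numbers of the same scale as the 24-h readings reported earlier for
# L. fermentum 0315-25 (untreated OD600 ~1.6, 250 ug/mL LysMP ~0.4):
print(f"{percent_inhibition(1.6, 0.4):.0f}% inhibition")  # -> 75% inhibition
```

With the 24-h OD600 readings reported earlier (about 1.6 untreated versus 0.4 at 250 µg/mL LysMP), this formula would correspond to roughly 75% growth inhibition.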
In general, endolysins can be divided into four major classes based on their peptidoglycan cleavage activity [41]: (1) glycosidases cleave glycosidic bonds associated with N-acetylglucosamine (GlcNAc) or N-acetylmuramic acid (MurNAc); (2) amidases cleave the amide bond between the MurNAc and L-alanine residue, the first amino acid of the peptide component of the peptidoglycan (PG); (3) endopeptidases cleave the amide bonds between amino acids present in the PG; and (4) lytic transglycosylases cleave the glycosidic linkages between disaccharide subunits within the PG [15]. The group I glycosidases are further divided into two classes: (1) glucosaminidases, which specifically cleave N-acetyl-β-D-glucosamine residues and contiguous monosaccharides in N-glycans, cell walls, and chitin, and (2) muramidases, more commonly known as lysozymes, including endolysins such as LysMP, which hydrolyse the β-1,4-glycosidic bonds connecting MurNAc and GlcNAc. Lysozyme activity is attributed to five distinct glycoside hydrolase families, named GH22, GH23, GH24, GH25 and GH73, with GH108 identified more recently [42].

Table 1. Bacterial and yeast strains used in this study (strain; relevant genotype/phenotype). a Amp R, ampicillin-resistant; Cam R, chloramphenicol-resistant; Kan R, kanamycin-resistant. b Biofuel contaminant wildtypes were isolated from a Midwestern dry-grind fuel ethanol facility in a previous screen [32,57]. c USDA-ARS Culture Collection (NRRL); German collection of microorganisms and cell cultures (DSMZ); National collection of industrial food and marine bacteria (NCIMB).

Bioethanol fermentation facilities are inherently prone to bacterial contamination found in raw materials such as corn mash and process water [4,6,28,29], which is difficult to eradicate from facilities [30,31] even with chemical treatments such as hop acids and chlorine-based oxides [9,10]. LAB species, in particular L. fermentum, have been found to be among the primary contaminants that thrive under the conditions typically found in bioethanol fermentation, such as low pH, high glucose concentration and anaerobic conditions, and can have a negative impact on ethanol-producing yeast, leading to unpredictable acute stuck fermentation [32]. Furthermore, LAB isolated from bioethanol fermentation facilities have demonstrated antibiotic resistance, with some strains harbouring multiple drug-resistance genes against commonly used antibiotics like penicillin G, erythromycin and virginiamycin [31,33,35]. This poses a challenge in the treatment of infections in industrial corn ethanol fermentation settings. The economic losses associated with premature shutdowns of bioethanol facilities due to stuck fermentation have been well documented [7]. Research and development of bio-mitigation methods for LAB contaminants in fermentation tanks have gradually received more attention, and innovative approaches are being explored to combat this issue. One recent report by Kapetanakis et al. investigated the possibility of engineering an S. cerevisiae strain with knockout mutations of amino acid transporters (Qdr) to limit cross-feeding and propagation of L. fermentum [37]. Other research avoided stuck fermentation caused by Lactiplantibacillus plantarum using the S. cerevisiae quorum-sensing signal molecule 2-phenylethanol [38]. However, these methods do not address the source of contamination inside the fermentation tank.

Biochemical stabilization of LysMP
fermentum can tolerate up to 6% salinity (or 1 M) without a negative impact on the bacteria [44]. Our results show that LysMP can tolerate NaCl concentrations up to 1 M without substantial impact on its lytic activity (Fig. 2A), which is at least equivalent to previously described endolysins [45]. LysMP showed improved activity in the presence of 300-500 mM NaCl. Previous studies showed that the cell wall binding domain of endolysins contains various hydrophobic patches that interact with bacterial peptidoglycan [46]. At higher salt concentrations (> 250 mM), stable salt bridges may be established between the peptidoglycan layer and the endolysin, improving the catalytic potential of the enzyme (Fig. 1B). Divalent metal cations have been shown to improve the lytic activity of endolysins such as the Listeria-targeting endolysin HPL118 and the Bacillus-targeting endolysin LysB4 [47]. LysMP activity in the presence of divalent cations was examined based on metal ion-binding site prediction (MIB2; [39]) and protein-ligand binding site prediction (COACH; [48]). The structural prediction analyses revealed the potential presence of a zinc finger domain, with various divalent metal ion binding sites distributed across the endolysin (Additional file 1: Figs. S1, S2). Addition of divalent cations such as calcium, iron and magnesium resulted in some improvement of the endolysin's activity (Fig. 4). Nonetheless, the accuracy of the protein-ligand binding prediction model and the limited enhancement of activity with copper ion (Fig. 4B) and zinc ion (Fig. 4E) supplementation warrant further evaluation to understand the co-factor requirements of LysMP. Lytic potential of LysMP in fermentation The LysMP endolysin showed efficacy against all tested L. fermentum strains and other LAB species such as L. casei, L. rhamnosus, L. reuteri and L. brevis, but was ineffective against L. plantarum, L. lactis and L. buchneri (Fig. 5). The LysMP sensitivity profile was expected and is similar to those of other endolysins previously characterized; endolysins have been shown to exhibit catalytic activity that is either strain- or species-specific [15,17,18,41]. Traditional corn mash bioethanol fermentation tanks are operated in a temperature range of 30-35 °C (with some exceptions for thermotolerant yeasts at 42-45 °C), at a pH range of 4-6 and at ethanol concentrations below 25%, for up to 48 h or more [5,51]. Therefore, the LysMP lytic activity should be stable at typical fermentation pH and temperature conditions for at least 48 h (Fig. 3B, C). LysMP remained stable at ethanol concentrations up to 10% for a period of 48 h; however, the enzyme's stability was significantly reduced at concentrations of 20% or greater, potentially due to enzyme precipitation (Fig.
3D). Despite LysMP precipitation at high ethanol concentrations, it may still be possible to prevent the proliferation of contaminants before the ethanol concentration reaches 20% (Figs. 2A, 6). When exploring contamination mitigation strategies, it should be noted that the complexity of bacterial contamination in bioethanol facilities is not limited to a single strain [6]. Thus, a reasonable intervention strategy should consider the employment of endolysin(s) capable of broad-spectrum activity against LAB, or combining different endolysins to target problematic LAB. We examined whether the efficacy of using LysMP as an antimicrobial agent in corn mash fermentations could be improved when used in combination with another previously described endolysin, LysKB317 [1]. The results indicated that both enzymes, which were added as crude lysates, were efficacious in controlling L. fermentum infection and able to prevent stuck fermentation when compared with the untreated infection control. However, we were not able to demonstrate any synergy or advantage of using the combined enzyme mix (Fig. 8). Phylogenetically, LysMP is closely related to LysKB317 based on protein sequence alignment (Additional file 1: Fig. S3). Structurally, based on predicted protein folding, LysMP and LysKB317 demonstrated differences in the EAD active site and linker length, as well as the CBD binding site, as observed in the superimposed structures and domains (EAD and CBD; Additional file 1: Fig. S4). Exogenous addition of LysMP prevents stuck fermentation Using corn mash fermentations, we confirmed that the addition of LysMP can inhibit L. fermentum infection and prevent stuck fermentation (Fig. 5). More than a 3-log reduction in bacterial load was observed in infected corn mash fermentations treated with LysMP. The improved glucose utilization and production of ethanol in the LysMP-treated, L. fermentum-challenged fermentations compared to the untreated infection control was substantial and demonstrates the enzyme's effectiveness in corn mash fermentations. The reduced lactic acid and acetic acid production levels in LysMP-treated fermentations (Fig. 7C, D) further support the value of this endolysin for preventing stuck fermentation and restoring ethanol production to levels similar to the uninfected corn mash samples. Conclusion The findings of this study demonstrated the effectiveness of LysMP in preventing stuck fermentation through exogenous addition of the endolysin to treat L. fermentum infection in an environment commonly found in fermentation tanks. Bacteriophage-derived endolysins such as LysMP can be a good alternative or supplement to existing antibiotic mitigation strategies to treat LAB contamination commonly found in the fermentation tanks of bioethanol facilities. Bacterial and yeast strains and culture conditions Saccharomyces cerevisiae NRRL-Y2034 was grown in YPD Broth (BD Biosciences) at 30 °C with aeration at 200 rpm. All LAB strains used in this study (Table 1) were grown in MRS Broth (BD Biosciences) at 30 °C without shaking. Escherichia coli strains were grown in LB Broth (BD Biosciences) with 50 µg/mL ampicillin (Amp; Sigma-Aldrich) or 50 µg/mL kanamycin (Kan; Sigma-Aldrich), depending on the plasmid selectable marker. Expression and purification of endolysin E.
coli BL21 (DE3) pLysS with pET-21a(+)::LysMP was induced similarly to previously described methods [1] with 0.5 mM isopropyl β-D-1-thiogalactopyranoside (IPTG; Sigma-Aldrich) in LB broth with ampicillin (50 µg/mL), overnight at 37 °C with agitation. Cells were harvested by centrifugation at 5000×g for 20 min at 4 °C. Cells were then lysed with B-PER (ThermoFisher Scientific) and the addition of 200 µg/mL lysozyme (20 mg/mL in 1 mM Tris-HCl, pH 8.0; ThermoFisher Scientific), DNase I (10 U/mL; ThermoFisher Scientific), and RNase I (10 U/mL; ThermoFisher Scientific), followed by gentle inversion for 30 min at room temperature. The soluble protein fraction was separated from the whole cell lysate by centrifugation at 8000×g at 4 °C for 20 min and purified using FPLC. The FPLC purification was carried out using an ÄKTA pure™ 150 (Cytiva) with a TALON® Superflow™ 5 mL cartridge (Cytiva). A 30 mL sample of the clarified cell lysate was loaded at 0.5 mL/min onto the column, previously conditioned with 5 column volumes (CV) of equilibration buffer (300 mM NaCl in 50 mM Tris-HCl buffer, pH 7.7) at 5 mL/min. After the lysate loading, the column was washed with 10 CV of equilibration buffer or until the absorbance at 280 nm was below 1.0 mAU. The bound protein was eluted in 5 mL fractions with 10 CV of equilibration buffer supplemented with 200 mM imidazole and 10% glycerol. All collected fractions were analysed on 4% to 15% (w/v) stain-free Tris-glycine precast SDS-PAGE [sodium dodecyl sulfate (SDS)-polyacrylamide gel electrophoresis; Bio-Rad] as described previously [1]. The buffer of the purified protein was exchanged to remove imidazole using a 15 mL Amicon Ultra-15 column with a 30 kDa molecular weight cutoff (MilliporeSigma) and 50 mM Tris-HCl buffer (300 mM NaCl and 10% (v/v) glycerol, pH 7.0). The buffer was exchanged 3 times, leaving a final volume of 1 mL of purified LysMP. A polyethersulfone (PES) membrane syringe filter with a 0.22-µm pore size (MilliporeSigma) was used to filter the purified enzyme. The purified protein was further quantified using a Qubit 3 fluorometer (ThermoFisher Scientific) and the Qubit Protein Assay Kit (ThermoFisher Scientific). Spot plate assay An antimicrobial spot assay of LysMP against L. fermentum 0315-25 was performed using a 5% polyacrylamide gel. The target bacterial strain was inoculated in 50 mL MRS and grown overnight at 37 °C without shaking. The culture was centrifuged at 8000×g for 10 min and the bacterial pellet was suspended in phosphate buffer (50 mM NaH 2 PO 4 , pH 7.0) and adjusted to OD 600 = 50. One millilitre of resuspended cells was mixed with 7.50 mL of sterile PBS buffer in a 15 mL Falcon tube. The contents were then mixed with 2 mL of acrylamide/bis (30% (w/v)), 200 µL of 10% (w/v) ammonium persulfate (APS), and 50 µL of N,N,N′,N′-tetramethylethylenediamine (TEMED), and poured into a petri dish (100 × 15 mm) to polymerize. Purified LysMP protein (1 µL at 1 µg/µL) was spotted on the polymerized gel and dried for 10-15 min. Sterile PBS was used as the negative control. The plate was then incubated at 37 °C overnight and examined for a zone of clearing.
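The purification and assay steps above repeatedly rely on the same simple arithmetic: diluting a stock to a desired final concentration and concentrating or diluting a culture to a target OD 600. The short Python sketch below is not part of the original study; it only illustrates the C1V1 = C2V2 reasoning, and all numbers in the examples are hypothetical rather than the exact volumes used here.

```python
def stock_volume(final_conc, final_vol, stock_conc):
    """Volume of stock needed so that stock_conc * v == final_conc * final_vol (C1V1 = C2V2).
    Units must simply be consistent (e.g., ug/mL and uL)."""
    if final_conc > stock_conc:
        raise ValueError("stock is not concentrated enough")
    return final_conc * final_vol / stock_conc

def resuspension_volume(current_od, current_vol, target_od):
    """Volume in which to resuspend a pellet so the suspension reaches target_od,
    assuming OD scales linearly with cell density."""
    return current_od * current_vol / target_od

# Illustrative, hypothetical examples:
# 1) dilute a 1 mg/mL (1000 ug/mL) LysMP stock to 100 ug/mL final in a 200 uL reaction
print(stock_volume(final_conc=100, final_vol=200, stock_conc=1000))        # -> 20.0 uL of stock
# 2) concentrate 50 mL of culture at OD600 = 1.0 into a suspension of OD600 = 50
print(resuspension_volume(current_od=1.0, current_vol=50, target_od=50))   # -> 1.0 mL
```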
Growth inhibition assay The growth inhibition assay was performed at 37 °C for 24 h using a SpectraMax M2e Microplate Reader (Molecular Devices) with purified LysMP endolysin. Overnight-grown bacterial cultures (Table 1) were adjusted to a final OD 600 = 0.05 in MRS medium. Eighty microliters of cells were mixed with 20 μL of LysMP (treatment; 75 µg/mL to 250 µg/mL) or PBS buffer (pH 7.4; control), and then diluted to 200 µL per well with MRS medium in 96-well microtiter plates (round bottom; Greiner). Plates were incubated at 37 °C without shaking and OD 600 was measured every 30 min for 24 h. Treatment and control wells were performed in triplicate to determine growth inhibition compared to the control at 24 h. Bacterial viability assay Endolysin activity was confirmed by examining the viability of the bacterial cell population using a Cellometer X2 image cytometer (Nexcelom) as described by Hodgkin et al. [53]. Ten microliters of L. fermentum 0315-25 at OD 600 = 0.5 (10 7 CFU/mL) were mixed with 90 μL of 20 mM Tris-Cl buffer (pH 7) containing LysMP endolysin at a final concentration of 100 μg/mL and incubated for 30 min. Subsequently, 9 μL of sample was mixed with 1 μL of 10 mM SYTOX ® and 4 μL was pipetted into a Nexcelom counting chamber (CHT4-SD025), where the inlet and outlet ports were closed with clear tape to prevent evaporation. Bright-field and fluorescent (excitation: 490 nm/emission: 530 nm) images were acquired at four different locations in the chamber. The images were analysed with the Cellometer Spectrum software (version 3.2.1.2). Total bacterial concentrations were enumerated based on fluorescent intensities and cell size. Green-stained Limosilactobacilli samples were analysed to enumerate the percentage of dead cells based on total bacterial concentrations. Salt buffer effect on LysMP Different salt concentrations were assayed to determine the effect of NaCl on LysMP stability and lytic efficiency. The purified LysMP at a final concentration of 100 μg/mL was tested in reaction buffer containing various sodium chloride (NaCl) concentrations (0.1, 0.2, 0.3, 0.4, 0.5, 0.75, and 1 M) and L. fermentum 0315-25 (10 7 CFU/mL). Cells were incubated for 30 min at room temperature and lytic activity was examined in a Cellometer as described for the bacterial viability assay. The total percentage (%) of live cells was determined using the bacterial viability assay method described above. Divalent cation effect on LysMP lytic activity Based on ligand model prediction, endolysin LysMP may have affinity towards Zn 2+ [48]. We tested the effect of two different concentrations (0.1 mM and 1 mM) of divalent metal cations (Ca 2+ , Cu 2+ , Fe 2+ , Mg 2+ and Zn 2+ ) on enzyme activity by examining growth inhibition of L. fermentum 0315-25 in the presence of LysMP. Target bacteria were grown to OD 600 = 0.5, washed once with PBS, pH 7.4, and resuspended in 1X PBS. Stock solutions of 100 mM of each CaCl 2 , CuCl 2 , FeCl 2 , MgCl 2 , and ZnCl 2 (Sigma-Aldrich) were separately prepared in sterile ultrapure water and filtered (0.22 µm; Millipore). A combined reaction volume of 200 µL (20 µL of cells, 20 µL of endolysin at a final concentration of 100 μg/mL or water for the negative control, 150 µL MRS, and 10 µL of divalent cation to achieve the desired final concentration) was added to each well of a clear 96-well (round bottom; Greiner CELLSTAR ® ) microtiter plate. Using a SpectraMax M2e plate reader, OD 600 was measured every 30 min for 9 h at 37 °C.
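Both assays above reduce to simple ratios: percent growth inhibition compares the OD 600 of treated wells to untreated controls at a given time point, and percent dead cells compares SYTOX-positive (fluorescent) counts to total counts. The minimal Python sketch below illustrates these calculations; the numerical values are made up for illustration and are not data from this study.

```python
def percent_growth_inhibition(od_treated, od_control, od_blank=0.0):
    """Percent inhibition at a given time point, relative to the untreated control.
    A medium-only blank can be subtracted from both readings."""
    growth_treated = od_treated - od_blank
    growth_control = od_control - od_blank
    return 100.0 * (1.0 - growth_treated / growth_control)

def percent_dead(fluorescent_count, total_count):
    """Percent of cells scored dead (membrane-compromised, dye-stained) out of all counted cells."""
    return 100.0 * fluorescent_count / total_count

# Hypothetical 24 h readings for one strain:
print(percent_growth_inhibition(od_treated=0.15, od_control=0.90, od_blank=0.05))  # -> ~88.2 % inhibition
print(percent_dead(fluorescent_count=720, total_count=1000))                        # -> 72.0 % dead cells
```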
Temperature, pH, and ethanol stability assays The thermostability of LysMP was determined by incubating 1 mg/mL of purified endolysin for 0.5, 24, and 48 h in 300 mM NaCl, Tris-HCl (Sigma-Aldrich) assay buffer, pH 7, at 4 °C, 20 °C, 30 °C, 37 °C, 50 °C, 60 °C, and 95 °C before performing the bacterial viability assays at a final enzyme concentration of 100 μg/mL, as described above in the bacterial viability assay section. Similarly, 1 mg/mL of purified endolysin was used to test pH stability at pH 4.0, 5.0, 6.0, 7.0, and 8.0 (21 mM citric acid, 58 mM NaH 2 PO 4 buffer adjusted to the pH indicated) and ethanol stability at ethanol concentrations of 0-30% (v/v) in reaction buffer, pH 7, for 0.5, 24, and 48 h at 25 °C [1], and then examined for bacterial viability activity as described above. Preparation of small-scale corn mash fermentation Corn mash fermentation was performed as described previously with slight modification [1,5]. S. cerevisiae NRRL Y-2034 (Table 1) was grown overnight in YPD broth supplemented with additional glucose (final concentration 7% w/v) at 30 °C with 200 rpm shaking. The contaminant L. fermentum strain 0315-25 (Table 1) was grown in MRS medium with 5% glucose (w/v) at 30 °C without shaking to mid-log phase (OD 600 = 0.6-0.8). Both yeast and bacterial cells were collected via centrifugation and resuspended in sterile phosphate buffered saline (PBS; pH 7.4, Fisher Scientific) to OD 600 = 1 for yeast and OD 600 = 4 for L. fermentum 0315-25, respectively. One OD 600 unit is approximately 6 × 10 7 CFU/mL for yeast and 1 × 10 8 CFU/mL for bacteria. Corn mash (approximately 33% solids) was collected from a commercial dry-grind ethanol facility, stored at − 20 °C until use, and autoclaved before use. In separate 25-mL Erlenmeyer flasks, 16 mL corn mash with ammonium sulfate (0.12%, w/v; Sigma-Aldrich) and glucoamylase (10 μL of Alcoholase II Liquid 30098-LS341-Glucoamylase) was dispensed and incubated overnight for liquefaction at 40 °C with 100 rpm shaking. Three sets of flasks in duplicate were prepared and designated as yeast control, bacterial contamination challenge, and bacterial contamination challenge with treatment. All flasks were inoculated with 0.1 mL of the S. cerevisiae inoculum, while 0.4 mL of the challenge bacterial inoculum was added to all flasks except those designated as yeast controls. The purified endolysin LysMP was added into the treatment flasks at a final concentration of 250 µg/mL. In each flask, 1 mL MRS broth supplemented with 5% glucose (w/v) was added to promote growth of the challenge bacteria. Sterile water was added to a final volume of 20 mL. The flasks were plugged with a rubber stopper containing a 20-gauge 0.9 mm × 40 mm Precision Glide needle (Becton Dickinson) to vent excess CO 2 and incubated at 30 °C with 100 rpm shaking for 48 h. Samples (0.5 mL) were taken at 0, 24, and 48 h and diluted in PBS (pH 7.4). The bacterial count for each pooled sample was performed on 1.5% MRS agar plates containing a yeast inhibitor (10 µg/mL cycloheximide) by spiral serial dilution using the Eddy Jet 2 spiral plater (IUL Instruments) set in the E mode 50 (50 µL sample). Plates were incubated anaerobically using the AnaeroPack System (Mitsubishi) at 37 °C for 18 h [16]. Colony forming units per mL (CFU/mL) were enumerated using a Flash & Go plate reader (IUL Instruments), with a minimum detection limit of 3 log 10 (CFU/mL) per sample. As previously described, a high-performance liquid chromatography (HPLC) system with a 300 mm Aminex HPX 87H column (Bio-Rad Laboratories, Inc.)
was used to quantify acetic acid, glucose, lactic acid and ethanol [54]. Statistical analysis Two-way analysis of variance (ANOVA) with Tukey post hoc tests was applied where appropriate to analyse the experimental results, where samples were run in at least two independent biological replicates (n = 2). Statistical significance was determined by *p < 0.05, **p < 0.01, ****p < 0.0001 (GraphPad Prism version 9.5.1). Fig. 1 Schematic representation of LysMP domain structures. A The LysMP conserved domain representation depicting the endolysin active domain (EAD) and cell wall binding domain (CBD) joined by a linker. B Three-dimensional model structure predicted by ESMFold [22]. C The SDS-PAGE gel of the purified recombinant LysMP. Marker (M; Precision Plus Protein Standard; Bio-Rad), whole cell lysate of induced LysMP (S), flow-through (FT) during IMAC purification, wash (W), and elution (E), the eluted protein after IMAC purification of carboxyl-terminal 6 × histidine-tagged LysMP (35.9 kDa). Fig. 2 Endolysin LysMP inhibition of susceptible bacteria. A Dose-dependent growth inhibition of purified LysMP against L. fermentum 0315-25 over time. B Antimicrobial spot assay of LysMP on polyacrylamide gel with the target bacterium. Sterile 1 × PBS served as the negative control (-Ctrl) and purified endolysin LysMP as the treatment (LysMP). C Bacterial viability assay. Target bacterium with and without 30 min LysMP treatment (100 µg/mL). Bright-field images detect all cells. Green fluorescent signal detects dead bacteria with compromised cell membranes. Red cells indicate live bacterial cells. Fig. 4 Biochemical characterization of LysMP. A Calcium ion (CaCl 2 ) at 0.1 mM and 1 mM was added to the endolysin LysMP treatment. B Copper ion (CuCl 2 ) at 0.1 mM and 1 mM was added to the endolysin LysMP treatment and OD 600 was used to determine growth of the target bacteria over a period of 9 h at 37 °C. C Iron(II) (FeCl 2 ) at 0.1 mM and 1 mM was added to the endolysin LysMP treatment. D Magnesium ion (MgCl 2 ) at 0.1 mM and 1 mM was added to the endolysin LysMP treatment. E Zinc ion (ZnCl 2 ) at 0.1 mM and 1 mM was added to the endolysin LysMP treatment. OD 600 was used to determine growth of the target bacterium L. fermentum 0315-25 over a period of 9 h at 37 °C. Data are means ± standard deviations (n = 3) using analysis of variance. *p < 0.05; **p < 0.01; ****p < 0.0001. Fig. 6 Bacterial count in model corn mash fermentation. The L. fermentum bacterial count was measured in the corn mash samples by plating on MRS medium supplemented with cycloheximide as a yeast inhibitor. Y: yeast control (no contamination; black bar), Y + LF: L. fermentum inoculated along with yeast (contamination challenge; light grey bar), Y + LF + LMP: LysMP endolysin supplemented into bacterially infected corn mash with yeast (dark grey bar). Section sign (§) indicates the yeast control (black bar) had no measurable presence of L. fermentum. All data are means ± standard deviations with two independent biological replicates (n = 2) and were statistically analysed using analysis of variance; ns = not statistically significant; ****p < 0.0001. Fig. 7 Corn mash sugar consumption and analyses of fermentation products. Corn mash fermentation analysis of A glucose, B ethanol, C lactic acid, and D acetic acid for: yeast control (Y; no bacterial infection), L.
fermentum infection with yeast (Y + LF; contamination challenge), and LysMP endolysin-supplemented yeast with bacterial infection (Y + LF + LMP). All data are means ± standard deviations with two independent biological replicates (n = 2) and were statistically analysed using two-way ANOVA. *p < 0.05; ***p < 0.001; ****p < 0.0001; ns = not statistically significant. Fig. 8 Analysis of fermentation products in crude cell lysate LysMP- and LysKB317-treated corn mash fermentation. A Percent (%) glucose, B percent (%) ethanol, C molar [M] concentration of lactic acid, D molar [M] concentration of acetic acid, E log CFU/mL of L. fermentum 0315-25 in corn mash fermentation. Section sign (§) indicates the yeast control (black bar) has no measurable presence of L. fermentum. Crude lysates of an estimated 250 µg/mL endolysin LysMP and/or LysKB317 were added into 20 mL corn mash fermentations (n = 2 independent replicates; error bars = SD)
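For the fermentation endpoints described above, bacterial counts are compared on a log 10 scale (e.g., the >3-log reduction reported for LysMP-treated corn mash) and the fermentation metabolites were analysed by two-way ANOVA in GraphPad Prism. The Python sketch below is only a hedged illustration with entirely hypothetical numbers: it shows how a log reduction can be computed from CFU counts, and how a comparable two-way ANOVA could be run with statsmodels instead of Prism.

```python
import math
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def log10_reduction(cfu_control, cfu_treated, detection_limit=1e3):
    """Log10 reduction of the treated sample versus the untreated control; counts below the
    detection limit are set to the limit, so the result is a conservative lower bound."""
    treated = max(cfu_treated, detection_limit)
    return math.log10(cfu_control) - math.log10(treated)

# e.g. 8.7 log CFU/mL in the infection control vs below-detection (<3 log) in a treated flask
print(log10_reduction(cfu_control=10**8.7, cfu_treated=0.0))   # -> at least 5.7 log reduction

# Two-way ANOVA (treatment x time) on hypothetical ethanol measurements, n = 2 replicates per cell
df = pd.DataFrame({
    "treatment": ["Y", "Y", "Y+LF", "Y+LF", "Y+LF+LMP", "Y+LF+LMP"] * 2,
    "time_h":    [24] * 6 + [48] * 6,
    "ethanol":   [9.8, 10.1, 5.2, 5.5, 9.5, 9.7, 13.9, 14.2, 6.0, 6.3, 13.5, 13.8],
})
model = ols("ethanol ~ C(treatment) * C(time_h)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction term
```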
2023-09-30T13:53:52.280Z
2023-09-29T00:00:00.000
{ "year": 2023, "sha1": "66e36669fcad85d077a8d5c8b49e9cee8f0d5bf8", "oa_license": "CCBY", "oa_url": "https://biotechnologyforbiofuels.biomedcentral.com/counter/pdf/10.1186/s13068-023-02400-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "400b798351148183b35276ce299d48410e109d53", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
249401713
pes2o/s2orc
v3-fos-license
Microscopic Observations on Form and Structure of the Worm Enchytraeus buchholzi (Clitellata: Enchytraeidae) Background The worm Enchytraeus buchholzi is a new record species for Shaanxi, China, and a key pest on American ginseng Panax quinquefolium. To distinguish the species, the authors prepared its whole mounts and paraffin-embedded sections, and observed, photographed and measured them microscopically. In addition, we conducted an experimental study on its DNA barcode. Results Cells, tissues and organs related to the nervous, digestive, circulatory, excretory and reproductive systems were found, photomicrographed and described, including: prostomium, peristomium, segments, clitellum, pygidium, lateral and ventral chaetae; brain, cranial nerves, sensory papillae, ventral nerve cord; pharyngeal pad and glands, retractor muscles and muscular bundles, peptonephridia, esophagus, intestine; dorsal, lateral, ventral and intestinal parietal vessels, coelomocytes, coelomic cavity; nephridia, chloragogen cells; ovaries, groups of germ cells with developing oocytes, mature eggs, spermathecae; testes, seminal vesicles, sperm funnels, penial bulbs. Their shapes and sizes were given, and their functions discussed briefly. The visual effect of staining specimens with hematoxylin plus eosin ranked first, and that with acetocarmine second. Conclusions The supplementary and objective descriptions, with the microphotographs as forceful pieces of evidence, have expanded biological knowledge of the form, structure and function of the worm, which is helpful for professionals to recognize and understand this species and provides a solid basis for its integrated pest management. The worm E. buchholzi has become a new record species for Shaanxi, China and a key pest on P. quinquefolium in Liuba County, Shaanxi Province. In order to distinguish this new pest, the authors prepared its whole mounts and paraffin-embedded sections, and microscopically observed, photographed and measured the sizes of its forms and structures. These new observations would expand its biological knowledge, help professionals recognize the worm, and provide a basis for its integrated pest management (IPM). Taxonomic Status Three DNA sequences obtained in the formal experiment, with 676 base pairs each, were compared with those in BOLD Systems [17] and were matched to the species Enchytraeus buchholzi, with similarities ranging from 98.89% to 100%. Thus, the DNA molecular analysis and identification suggested that the specimens collected from Liuba are Enchytraeus buchholzi at the species level. All the following descriptions are given under the new scientific name. Body Form Enchytraeus buchholzi is threadlike and monoecious or hermaphroditic. Living adults are 5 to 8 mm long; when fixed, length ca. 5 mm or slightly longer, width 180 ± 27 μm (sample size, n = 86) at segment V, and widest 285 ± 48 μm (n = 86) at the clitellum. Its body consists of a prostomium, 27-30 segments and a pygidium, with the number of segments changing slightly owing to individual variations (Fig. 1). Dome-like projections, the sensory papillae, are distributed densely on the epidermis of the prostomium (Fig. 2A, B); there is no chaeta on the epidermis of the peristomium (segment I) (Fig. 2A). Each segment from II to XXIX has a pair of lateral chaetal bundles with 2 or 3 chaetae, and a pair of ventral chaetal bundles with 3 chaetae except in XII and XIII (Fig. 11B). The pygidium has no chaeta but many sensory papillae on the epidermis (Fig. 2C; Fig. 10B).
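The species assignment in the Taxonomic Status paragraph above rests on percent sequence similarity between the 676 bp barcode reads and BOLD reference sequences. Percent identity over an alignment is a simple ratio; the Python sketch below illustrates the calculation on a toy pair of equal-length, already-aligned sequences, which is an assumption for illustration only — the real comparison was performed by the BOLD Systems identification engine.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent of aligned positions that match; assumes both sequences are already
    aligned and of equal length (gap characters, if any, count as mismatches)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != "-")
    return 100.0 * matches / len(seq_a)

# Toy example: 2 mismatches over 20 aligned bases -> 90% identity.
# Over a 676 bp barcode, a similarity of 98.89% corresponds to roughly 7-8 mismatched sites.
print(percent_identity("ATGCGTACGTTAGCATGCAA",
                       "ATGCGTACGATAGCATGCTA"))
```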
The chaetal formula is 2-2,3: 3-3. Chaetae are straight, rarely weakly bent at the distal end, 35-44 μm long, and only a few bear nodules (Fig. 2A, C; Fig. 11B) [18,19]. The clitellum covers XII and XIII, which can be distinguished by the two pairs of lateral chaetal bundles arranged longitudinally on each side, though there is no intersegmental furrow outside and no septum inside (Fig. 1A; Fig. 11B). Anterior Portion The prostomium, the clitellum and the segments between them constitute the anterior portion of the worm, ca. 1/3 of the full length, where the internal organs are relatively concentrated. Nervous System: The brain is long eggplant-shaped in lateral view and inverted pear-shaped in dorsal or ventral view, 96 ± 14 μm long (n = 40), 19 ± 3 μm wide at the neck (n = 40), 48 ± 7 μm wide (n = 20) at the widest part, and 38 ± 6 μm thick (n = 20), located in I, II and part of III (Fig. 3A-C). The brain bifurcates in its front part (Fig. 3B) and is rounded at its posterior margin (Fig. 3C). It extends several cranial nerves, two thicker ones forward to the sensory papillae on the epidermis of the prostomium and two backward to the pharyngeal pad, and also stretches circumpharyngeal connectives downward to the subpharyngeal ganglia in II, linking to the ventral nerve cord, 26 ± 9 μm wide and 18 ± 3 μm thick (n = 15), which runs to the last segment before the pygidium (Fig. 10). The ventral nerve cord is exposed only a little in I (Fig. 3D, E), and is relatively wider and thicker in II, III and IV than in the remaining segments (Fig. 3A; Fig. 4D-G; Fig. 5D, G). Digestive System: The mouth opens on the ventral side of the peristomium (Fig. 3A, D) and is followed by the buccal cavity, pharyngeal pad, esophagus, intestine and anus (Figs. 3D, 4, 5, 6, 7, 8, 9 and 10). The dome-like pharyngeal pad, 92 ± 16 μm long (n = 32), 77 ± 11 μm wide (n = 11) and 40 ± 6 μm thick (n = 21), is rich in muscles and has, at the top, a muscular knot and ring (Fig. 4A-E, G). The knot extends many retractor muscles to attach to the body wall on the dorsal, lateral and ventral sides (Fig. 4A-C, E, G), and the ring encircles the anterior part of the peptonephridia (esophageal appendages) and the esophagus (Fig. 4C, F, G). Fig. 2 Form of prostomium, segments I-III, XXVIII and XXIX, and pygidium. A. Prostomium, segments I (peristomium), II and III (mod-in-Hm), ventral view. B. Prostomium, transversal section (std-in-HE), anterior view. C. Segments XXVIII and XXIX, and pygidium (mod-in-Hm), ventral view. Labels abbreviated and their meanings (the same as below): lcb, lateral chaetal bundle; nod, nodule; pro, prostomium; py, pygidium; sp, sensory papillae; vcb, ventral chaetal bundle. Fig. 3 Structure of prostomium, I, II and III (all std-in-HE). A. Prostomium, I, II and III, lateral view. B. Brain, showing bifurcation in its front part, dorsal view. C. Brain, showing a rounded shape at its posterior margin, dorsal view. D. Prostomium, I and II, longitudinal section, lateral view. E. Segment I, transversal section, anterior view.
ac, anucleate coelomic corpuscle; ann, annulus; bc, buccal cavity; br, brain; bw, body wall; cpc, circumpharyngeal connective; crn, cranial nerve; dv, dorsal vessel; epc, epithelial cell; esc, epithelial sensory cell; isf, intersegmental furrow; lv, lateral vessel; mo, mouth; mu, muscle; per, peristomium; php, pharyngeal pad; pro, prostomium; rm, retractor muscle; spg, subpharyngeal ganglion; sub, sinus underneath the brain; vcs, ventral chaetal sac; vnc, ventral nerve cord. The peptonephridium, 48.6 μm long and 22.2 μm in diameter at its anterior part (Fig. 4A-C), is attached to the back of the pharyngeal pad; the rest has two branches, coiled and hyaline, 96.4 μm long; the full length is 145 μm and the whole is confined in IV (Fig. 4A-C, F, G). Each segment from IV to VI has a pair of pharyngeal (septal) glands, surrounding the esophagus and squeezing the postseptum of the segment to the rear, especially in IV and V (Fig. 5A, B). The pharyngeal glands in IV and V are composed of elliptic dorsal and ventral lobes and thin strands, and the dorsal lobes are larger than the ventral ones (Fig. 5A, B; Table 1). The pharyngeal glands in VI are also made up of dorsal and ventral lobes and thin strands, but sometimes a smaller middle lobe appears between them (Fig. 5B). The dorsal lobes in IV and V are separate (Fig. 5B), but those in VI are connected dorsally (Fig. 5B, I). Their sizes are listed in Table 1. Excretory System: The nephridia are spindle-shaped, 126 ± 28 μm long and 31 ± 5 μm wide (n = 16) excluding tubules; there is a pair of them in each of the segments from VII to X, in septa 6/7, 7/8, 8/9 and 9/10, lying on both sides of the ventral nerve cord (Fig. 5H; Fig. 6A, B). In addition, there are similar structures in each of the segments between the clitellum and the pygidium (Fig. 9B, C, E; Fig. 10A). Reproductive System: Male genital organs: There is a pair of elliptic testes, 14.0 ± 0.9 μm long and 9.7 ± 0.0 μm wide (n = 5), lying on both sides of the ventral vessel and attaching closely to the anteseptum of XI (Fig. 7C). Nearby is a pair of seminal vesicles of different shape and size, 122 ± 34 μm long (n = 81), 102 ± 34 μm wide (n = 36) in lateral view and 74 ± 25 μm wide (n = 45) in dorsal or ventral view, with many spots on their surfaces. Female genital organs: Attaching to both sides of the ventral nerve cord and near the anterior margin of XII, there appears a pair of grape-like ovaries in which the immature germ cells, the oogonia, are located (Fig. 8A); at the same site, the figure also shows that XII is filled with groups of germ cells with developing oocytes that are globular, oval, or of other shapes, with only a few cut open but the majority covered with many nurse cells stained blue, except their nuclei, which stained slightly red. These groups of germ cells are aggregated into clusters of different shapes, and the sizes of the clusters are: 181 ± 55 μm long and 172 ± 56 μm wide (n = 28) in lateral view; 149 ± 45 μm long and 93 ± 24 μm wide (n = 46) in dorsal or ventral view (Fig. 8A-D). They are also found in XI (Fig. 7A, C, D), spirally arranged around two centers in dorsal or ventral view (Fig. 7C). The distance between the two centers is 90 ± 14 μm (n = 5). When there are more mature eggs (Fig. 8C, F-J), they may extend backward to other segments behind the clitellum in vivo. There is a pair of spermathecae connecting with their short ectal and long ental ducts, located on both sides of the esophagus and above the ventral lobes of the pharyngeal glands in V (Fig.
5A, B, E-G), which open on the ventral body wall, with glandular tissues surrounding their ectal orifice inside (Fig. 5A) and muscular epidermis (cushion) outside (Fig. 5E). The two pieces of muscular epidermis are located along the two longitudinal lines of ventral chaetal bundles, close to the anteseptum of V (Fig. 4A; Fig. 5E). Their ental ducts stretch diagonally forward and attach to the lateral body wall (Fig. 5B). The ampullae of the spermathecae are elliptical or spherical, 42 ± 8 μm long and 32 ± 7 μm wide (n = 26), full of cilia-like spermatozoa inside (Fig. 5F, G). Other Systems: The body wall consists of a thin cuticle and epidermal and muscular layers (circular and longitudinal) (Fig. 3A; Fig. 5F; Fig. 9D, E). The muscular layers and the coelomic fluids, through the former acting against the latter, form a hydrostatic skeleton, which maintains the shape and toughness of the organism under living conditions. Exchange of oxygen and carbon dioxide takes place through the moist body wall. Labels in the figures: ipv, intestinal parietal vessel; mu, muscle; meg, mature egg; miv, microvillus; ne, nephridium; n, nucleus; nc, nurse cell; ov, ovary; pb, penial bulb; pm, posterior margin of XIII; spf, sperm funnel; ucg, unicellular gland; vnc, ventral nerve cord; vv, ventral vessel; yog, yolk granule. Posterior Portion Segment XIV, the pygidium and the segments between them constitute the posterior portion; all the segments in this portion show homonomous metamerism except the pygidium. Expressed from outside to inside sequentially, the structure of each layer in these segments is: body wall, ventral nerve cord, nephridia, coelomic cavity, coelomocytes, ventral vessel, chloragogen cells, intestinal parietal vessel, muscular wall of the intestine, intestinal parietal cells, and intestinal lumen (Fig. 9B-F). In addition, segments from VII to X in the anterior portion have similar structures (Fig. 5H; Fig. 6A, B), with the intestinal parietal vessel substituted by the dorsal vessel. The segment before the pygidium contains many intestinal parietal cells, and the ventral nerve cord becomes relatively thicker there (Fig. 10A, B). Eggs and Their Development Under living conditions, E. buchholzi is pale to milky white (Fig. 1A; Fig. 11D), and reproduces asexually by parthenogenesis; cross fertilization may occur at lower temperatures (12-18 °C). Several mature eggs are wrapped within a cocoon (Fig. 11A-C), which is deposited singly or collectively and hidden with dust and debris by parent adults. An adult produces several tens of cocoons in its lifetime when feeding on American ginseng root powder in a wet-sandy dish indoors. The number of mature eggs is 1-2 per cocoon in the early stage and rises quickly to 3-10 per cocoon as time elapses, sometimes even up to 15 in a cocoon. In the first 50 days of the adults' reproductive period, the mean number was maintained at ca. 8 eggs per cocoon, as shown in Fig. 11C, possibly due to the rich nutrition of American ginseng root powder. The outline of mature eggs in a cocoon is clear when the latter is just laid (Fig. 11C); as embryonic development proceeds, the outline of the eggs inside becomes blurred in vivo. A newly hatched young worm is about 1 mm long; after continuous feeding and direct development, the worm grows into an adult when about two mature eggs appear in its clitellum and its color turns from pale to milky white. After that, the adult continues injuring American ginseng severely within the roots (Fig.
11D) as well as reproducing more offspring. Functional Discrimination and Analysis Internal organs of the worm are relatively concentrated in its anterior portion, especially in the two sections from the prostomium to VI and from XI to XIII. The worm moves freely and quickly inside the host plant and between particles of wet-sandy soils, which is closely related to its developed brain and ventral nerve cord, the dense sensory papillae on the prostomium and pygidium, the muscular tissues in the body and intestinal walls, and the lateral and ventral chaetae. The pharyngeal pad proper is rich in retractor muscles, and with the help of the muscular knot and ring, which extend several muscular bundles out to attach to the body wall, it strengthens the grinding of food. The anterior part of the peptonephridium opens to the basal pharyngeal pad; three pairs of pharyngeal glands connect to the pharyngeal pad through tubules or strands [20,21]. Their functions should be secreting digestive enzymes and enhancing digestive power by means of extracellular digestion, an important link in the worm's life processes. Seen in I to XI, the dorsal, lateral and ventral vessels were stained red with the HE staining method. The ventral and intestinal parietal vessels can also be seen in segments in the posterior portion (Fig. 9B-F), which shows: 1) the ventral vessel extends between the ventral nerve cord and the layer of chloragogen cells, transporting metabolic wastes to the rear; and 2) the intestinal parietal vessels form a network that encircles the digestive tract, absorbing nutritional fluids from intestinal parietal cells, turning them into fresh colorless blood, and carrying it forward. The inlet of the dorsal vessel acts as a valve that prevents blood from returning to the rear. The color of the nephridia on the ventral side of VII to X and of other segments in the posterior portion is pale, suggesting that the excretory function may reduce the staining extent in the corresponding organs. Nucleate coelomocytes may be the weavers of the septa, and anucleate coelomocytes may function as lymphatic bodies. Although the ovaries are small, they produce oogonia continuously. Generally, after numerous growth phases and mitotic divisions, some oogonia develop and enlarge into primary oocytes; undergoing a full meiosis, each oocyte forms three polar bodies and a large haploid ootid; the polar bodies disintegrate and the ootid matures into the egg [22]. The germ-line cysts in Enchytraeus albidus ovaries consist of 16 cells; during oogenesis, the fate of the interconnected germ cells differentiates and only one cell develops as the future egg, while the other 15 become nurse cells [23]. Recalling the photomicrographs in Figs. 7 and 8, almost all the features of oogenesis in the target worm are similar to those in E. albidus. The nurse cells should have produced all the yolk granules, with their chemical properties changing from basophilic to acidophilic, and thus stained red. Being bathed in nutrient-rich coelomic fluids in the clitellum, each oocyte and its accompanying nurse cells are able to develop rapidly.
Comparisons with the Type Species and Early Descriptions The worm Enchytraeus buchholzi was first described as a widespread species by Vejdovský in German in 1878 [4]; characters such as body length, segment number, chaetae per bundle, salivary glands (peptonephridia, esophageal appendages), seminal vesicles, sperm funnels, and vas deferens or sperm duct were introduced (Table 2), with other phenomena (intestinal canal covered with glands, segmental organs opening to the outside, ovaries maturing earlier, mature eggs falling into the body cavity, etc.) recorded. A monograph on the class Oligochaeta was compiled by Michaelsen in German in 1900 [24], with the species E. buchholzi included and the corresponding characters cited especially from the work published by Vejdovský in 1879 [25]. Morphotaxonomic studies and records of the geographical distribution of the worm have continued in different parts of the world [26][27][28][29][30], and some early literature is cited here to make comparisons with the specimens from Liuba (Table 2). Comparisons in Table 2 show that the specimens from Liuba are very similar to the type species in many traits, especially the first three: body length, segment number, and chaetae per bundle. In conjunction with the result of the DNA molecular analysis, the targets are identified as Enchytraeus buchholzi and thus recover their original identity. There are, of course, some differences among those descriptions, probably owing to distinct habitats and individual variations (Table 2). Based on the traits that the seminal vesicles are larger than half the diameter of segment V and the ental duct of the spermatheca attaches to the lateral body wall (Table 2; Fig. 5B and the text above), the specimens may be identified as a subspecies, namely Enchytraeus buchholzi ssp. liubaensis. The number of chaetae and their shape also contribute significantly to the identification of species in the family Enchytraeidae [4]; based largely on this trait, Vejdovský distinguished and named seven new species and redescribed two in the genus Enchytraeus [4]. The first two characters, segment number and chaetae per bundle, are the most important traits for distinguishing enchytraeid species; the third, body length, should be less important because it varies with segment number. Relying largely on the trait of 2-3 chaetae per bundle, with support from some other traits, several enchytraeids were identified as new at the species level after 1959; for example, the potworm Enchytraeus bulbosus was new to science because its penial bulb is larger than half the diameter of the clitellum [9], and the terrestrial species Enchytraeus luxuriosus was new because its segment number is around 45 [30]. Making Slide Specimens Among the five methods used to make slide specimens, four led to whole mounts and one to paraffin sections, each of which has its own advantages though they differ in fixation, staining and mounting processes. The HE staining method is recommended since different structures can be colored properly and thus easily recognized; staining in acetocarmine liquid is convenient for fast dyeing, with bright color and good effect. Compared with the sectioned slide specimens, the whole mounts are relatively thicker, and some parts appear blurred because they lie beyond the depth of field under higher objective lenses and because more tissue layers must be penetrated by light. Sectioned specimens can be cut thinner, but some fine structures may be easily washed away, resulting in image distortion.
Conclusions All descriptions and microphotographs of the worm E. buchholzi, verified, revised and newly supplemented here, express essential characteristics of the species. At the same time, the new results expand the text descriptions, which should help professionals recognize this small worm and understand its morphology, structure and physiological functions. As forceful pieces of evidence, these microphotographs may support the corresponding free-hand drawings published previously and correct any errors in them. Concerning IPM practice, these studies should help ginseng farmers target this pest, manage it scientifically, and promote the vigorous development of the American ginseng planting industry.
2022-06-07T06:30:05.502Z
2022-06-06T00:00:00.000
{ "year": 2022, "sha1": "bed0ef25e0221f679b16cbf6d80272dd1b6386bb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "bed0ef25e0221f679b16cbf6d80272dd1b6386bb", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
6631736
pes2o/s2orc
v3-fos-license
Hypotheses on the Potential of Rice Bran Intake to Prevent Gastrointestinal Cancer through the Modulation of Oxidative Stress Previous studies have suggested the potential involvement of oxidative stress in gastrointestinal cancers. In light of this, research efforts have been focused on the potential of dietary antioxidant intake to prevent gastrointestinal cancer through the modulation of oxidative stress. Rice bran, a by-product of rice milling, has been shown to contain an abundance of phytochemicals, which are dietary antioxidants. To date, a number of studies have shown the antioxidative effect of rice bran intake, and some demonstrated that such an effect may contribute to gastrointestinal cancer prevention, largely through the antioxidative properties of rice bran phytochemicals. In addition, these phytochemicals were shown to provide protection against cancer through mechanisms linked to oxidative stress, including β-catenin-mediated cell proliferation and inflammation. The present article provides an overview of current evidence for the antioxidative properties of rice bran and its phytochemicals, and for the potential of such properties in cancer prevention through the oxidative-stress-linked mechanisms mentioned above. The article also highlights the need for an evaluation of the effectiveness of rice bran dietary interventions among cancer survivors in ameliorating oxidative stress and reducing the level of gastrointestinal cancer biomarkers, thereby establishing the potential of such interventions among these individuals in the prevention of cancer recurrence. Introduction Gastrointestinal cancers, defined as those occurring at various sites that include the colon, rectum, stomach, liver, pancreas and oesophagus, form one of the most common cancer groups worldwide. In 2012, more than 3.8 million new cases of gastrointestinal cancer were reported worldwide, and its incidence is expected to increase to almost 6.2 million by 2030 [1]. Therefore, there is a pressing need for the development of effective strategies to prevent gastrointestinal cancers. Previous research has focused on seeking a better understanding of the molecular mechanisms of gastrointestinal carcinogenesis. During the years of research, a number of physiological factors have been identified concerning such mechanisms. For example, it was suggested that colorectal cancer (CRC) can be caused by intestinal inflammation and microbial dysbiosis [2], while stomach cancer was found to be the result of Helicobacter Pylori infections [3,4]. However, among all the various cancer-associated physiological factors, oxidative stress appears to be one of the most studied to date. In the light of this, research has been directed towards the potential use of dietary antioxidants, the edible compounds known to reduce oxidative stress, in cancer chemoprevention. Likewise, previous research has also focused on whether the intake of certain foods generally consumed by humans conferred a protective effect against oxidative stress. Rice bran, a by-product of rice milling previously shown to contain a variety of bioactive compounds that exhibit antioxidant properties, is one of the food sources that have been widely studied for their antioxidant and anticancer potential. The aim of the present paper is to provide an overview of current data on the anti-oxidative effect of rice bran, and the potential mechanisms of how such effect may lead to gastrointestinal cancer prevention. 
The review will first provide a brief account of how oxidative stress occurs, and the evidence supporting the hypothesis that ameliorating oxidative stress can reduce cancer risks. Studies that show the antioxidative effect of rice bran intake, together with those suggesting that rice bran intake may prevent cancer through the modulation of oxidative stress, will then be reviewed. Finally, the potential mechanisms involved in this chemo-preventive effect will be discussed in the context of findings concerning the protective function against oxidative stress of the bioactive compounds present in rice bran. Oxidative Stress Oxidative stress is a condition where the rate of production of free radicals far exceeds that of their removal by antioxidant enzymes, therefore causing an accumulation of the former. These free radicals, generally termed reactive oxygen species (ROS) and reactive nitrogen species (RNS), are produced through various metabolic processes, including cellular respiration [5] and immune reactions by immune cells [6]. While the presence of low levels of these free radicals is beneficial to cellular functions such as the regulation of signalling pathways [7], they in fact have deleterious effects on cells when they are produced at high levels. Indeed, ROS and RNS have been shown to cause oxidation and/or nitration of lipids, proteins and DNA, resulting in damage to these biomolecules. For example, hydroxyl radical, the product of a reaction between ROS such as superoxide anion and hydrogen peroxide (H 2 O 2 ), can cause lipid peroxidation, protein carbonylation and the formation of DNA adducts such as 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxodG), all of which are markers of oxidative stress [8]. An RNS such as peroxynitrite, formed by the reaction between superoxide anion and nitric oxide, has also been shown to cause damage to these biomolecules [9]. A summary of the detrimental effects of these free radicals is provided in Figure 1. previously shown to contain a variety of bioactive compounds that exhibit antioxidant properties, is one of the food sources that have been widely studied for their antioxidant and anticancer potential. The aim of the present paper is to provide an overview of current data on the anti-oxidative effect of rice bran, and the potential mechanisms of how such effect may lead to gastrointestinal cancer prevention. The review will first provide a brief account of how oxidative stress occurs, and the evidence supporting the hypothesis that ameliorating oxidative stress can reduce cancer risks. Studies that show the antioxidative effect of rice bran intake, together with those suggesting that rice bran intake may prevent cancer through the modulation of oxidative stress, will then be reviewed. Finally, the potential mechanisms involved in this chemo-preventive effect will be discussed in the context of findings concerning the protective function against oxidative stress of the bioactive compounds present in rice bran. Oxidative Stress Oxidative stress is a condition where the rate of production of free radicals far exceeds that of their removal by antioxidant enzymes, therefore causing an accumulation of the former. These free radicals, generally termed reactive oxygen species (ROS) and reactive nitrogen species (RNS), are produced through various metabolic processes, including cellular respiration [5] and immune reactions by immune cells [6]. 
While the presence of low levels of these free radicals is beneficial to cellular functions such as the regulation of signalling pathways [7], they in fact have deleterious effects on cells when they are produced at high levels. Indeed, ROS and RNS have been shown to cause oxidation and/or nitration of lipids, proteins and DNA, resulting in damage to these biomolecules. For example, hydroxyl radical, the product of a reaction between ROS such as superoxide anion and hydrogen peroxide (H 2 O 2 ), can cause lipid peroxidation, protein carbonylation and the formation of DNA adducts such as 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxodG), all of which are markers of oxidative stress [8]. An RNS such as peroxynitrite, formed by the reaction between superoxide anion and nitric oxide, has also been shown to cause damage to these biomolecules [9]. A summary of the detrimental effects of these free radicals is provided in Figure 1. To counter the damaging effects of these free radicals, cells utilise a repertoire of antioxidant enzymes and molecules to remove such excess free radicals and maintain a healthy redox balance. For example, superoxide dismutase (SOD) can scavenge superoxide anion and convert it to hydrogen peroxide, which, by the action of catalase or glutathione peroxidase (GPx), would be further detoxified. These enzymes would therefore be able to prevent the formation of the damaging ROS and RNS mentioned above. Furthermore, the antioxidant molecule glutathione, with the help of glutathione-S-transferase (GST), would help lower the level of oxidative stress by detoxifying the products of lipid peroxidation and DNA oxidation [10]. Taken together, ROS and RNS can be produced through certain physiological processes, with their levels kept in check by the cellular antioxidant system. Oxidative stress occurs if these free radicals are produced in excess, or if the antioxidants cannot keep up with their removal, eventually leading to an accumulation of oxidative damage to the biomolecules in the cells. The Contribution of Oxidative Stress in Cancer To date, numerous studies have provided evidence for the role of oxidative stress in cancer progression, and the link between the two has been extensively reviewed [11][12][13][14][15][16]. Moreover, studies of cancer-promoting intestinal bacteria have also suggested the involvement of oxidative stress in gastrointestinal cancers. For example, Helicobacter pylori infection, long known to cause gastric cancer [3,4], can produce ROS such as superoxide and peroxynitrite [17]. Enterococcus faecalis, a bacterial species whose faecal abundance was found to be increased in colorectal cancer patients [18], was also shown to produce superoxide [19] and hydroxyl radicals [20]. These studies suggest the involvement of bacteria associated with gastrointestinal cancer in exacerbating oxidative stress, and provide further evidence for the role of oxidative stress in gastrointestinal cancer. In fact, previous studies have provided multiple lines of evidence that oxidative stress can increase cancer risks.
For instance, the ROS produced by the colonic bacteria Enterococcus faecalis were shown to induce potentially carcinogenic mutations of colonic DNA, with their mutagenic effect being prevented by the action of catalase [21]. Products of lipid peroxidation such as malondialdehyde can react with DNA and form "exocyclic DNA adducts", which has been suggested to contribute to gastrointestinal carcinogenesis [22]. Moreover, inflammatory bowel disease, a condition characterised by an increased level of oxidative stress [23], was found to be associated with increased colorectal cancer risks [24]. Consistent with this finding, higher urinary 8-oxodG levels, indicative of a high level of oxidative stress-associated DNA damage, were observed in CRC patients [25,26]. In contrast, it has been suggested in some in vitro and in vivo studies that dietary antioxidants have a potential for gastrointestinal cancer prevention. For example, luteolin, a bioflavonoid with antioxidant properties, was shown to exhibit anti-proliferative and pro-apoptotic effects in a colon cancer cell line [27]. Cocoa's antioxidant properties were demonstrated in a mouse model of colitis-associated cancer, and it was also suggested as a potential cancer chemo-preventive agent [28]. Studies providing evidence that dietary antioxidants may inhibit cancer progression were recently reviewed by Galadari et al. [15]. However, controversies do exist as to the anticancer effects of the intake of antioxidants, or the lowering of oxidative stress levels. Indeed, clinical trials of dietary antioxidant intake among humans have failed to yield consistent results on whether the intake of dietary antioxidants confers protective effects against cancer. Previously, a meta-analysis had shown that antioxidant supplementation had no effect on CRC incidence and recurrence, although the supplementation of a vitamin C and vitamin E combination had a slight inhibitory effect on CRC [29]. While methodological differences could be a factor contributing to the observed discrepancies between studies, the poor bioavailability of the ingested antioxidants was also suggested as a likely factor for such discrepancies [15]. As suggested by Galadari et al., the ingested antioxidants are often poorly absorbed and quickly metabolised [15], thus the antioxidants used in certain studies may not have enough opportunity to exert their beneficial effects on the human body before they are metabolised and excreted. This phenomenon could well have led to the large variation in their effects on cancer prevention and the inconsistency of data derived from these clinical trials. Despite the discrepancies in the findings of different studies, the well-established involvement of oxidative stress in cancer, including gastrointestinal cancers, has indeed sparked immense interest among scientists in further research into the use of certain dietary supplements known to contain abundant compounds with antioxidant properties for gastrointestinal cancer prevention. Rice bran is one of such dietary supplements that has been gaining increasing research interest over the past decade in its potential for use in cancer chemo-prevention. Rice Bran Rice bran is produced as a by-product of the milling process of rice, a staple food largely cultivated in Asia and South America and widely consumed by humans worldwide. 
After harvesting, rice needs to undergo a series of milling processes to remove the outer layers of the grains in order to produce the white rice that is generally consumed as part of the human diet. One such process involves the removal and processing of the bran, the coating that covers whole grain rice produced after the removal of rice hull, thereby creating rice bran. However, the world's population does not generally use such rice milling by-products as part of their daily diet. Rice bran contains lipoxygenases that break down the fatty acids present in the bran, which after an extended period of storage would eventually result in a detrimental effect on its flavour [30], making it less edible. Despite such drawbacks, rice bran actually contains an abundance of bioactive compounds, generally known as phytochemicals, that lead to better health and the chemoprevention of cancer. Previous studies have in fact suggested the role of rice bran in the chemoprevention of gastrointestinal cancer, and also proposed potential mechanisms for how its intake can confer a measure of chemo-prevention. These include the inhibition of cell proliferation, anti-inflammation and modification of the intestinal microbiota. These mechanisms have been demonstrated in previous studies and reviewed elsewhere [31,32]. Evidence for the Anti-Oxidative Effects of Rice Bran In addition to the potential mechanisms mentioned above, the modulation of oxidative stress levels has also been suggested as an alternative cancer chemo-preventive mechanism of rice bran intake. In fact, a repertoire of phytochemicals present in rice bran, including tocotrienols, phytic acid, γ-oryzanol, ferulic acid, phytosterols and flavonoids (Figure 2), has been shown to possess antioxidant properties [33]. Moreover, nutrients such as vitamin B, which is abundantly present in rice bran, appear to exhibit a preventive effect against oxidative stress. A recent case-control study demonstrated that a reduced intake of vitamin B was associated with an increased level of oxidative stress, as evidenced by the higher level of lipid peroxidation and glutathione (GSH) depletion in subjects exhibiting low vitamin B intake [34]. This was also supported by the findings of a previous animal study, where mice deficient in vitamin B6 displayed characteristics of increased oxidative stress, including an increased lipid peroxidation level [35]. In other words, the phytochemicals and nutrients present in rice bran may play a significant role in conferring the protective effect of rice bran intake against oxidative stress. Consistent with this hypothesis, a number of in vivo and in vitro studies have also demonstrated the benefits of rice bran intake in the amelioration of oxidative stress. Boateng et al. had previously demonstrated that rice bran dietary supplementation for rats with colon tumours would result in an almost two-fold increase in the activity of GST, which detoxifies the oxidative products of biomolecules, in the colon [36]. More recently, supplementing the diet of rats with aqueous enzymatic extract from rice bran was also found to enhance the activity of antioxidant enzymes such as SOD, catalase and glutathione peroxidase, with the level of oxidative damage of lipids and proteins reduced compared with the control rats [37]. 
Moreover, such treatment was also found to cause a significant reduction in the production of superoxide radicals and a decreased expression of reduced nicotinamide adenine dinucleotide phosphate (NADPH) oxidase, an enzyme capable of superoxide production, in rats [38].
In addition, increased free radical scavenging and antioxidant activity were also demonstrated in several in vitro models upon treatment with rice bran extracts. For example, treatment of HL-60 cells, a human leukaemia cell line, with ethanolic extracts of brown rice and rice bran resulted in a reduction in superoxide production, coupled with a decrease in the extent of lipid peroxidation in these cells [39]. A dose-dependent increase in the ROS and nitric oxide scavenging activity was also observed in a glioma cell line treated with methanolic extract of Njavara rice bran [40]. Recently, Lee et al. showed that pre-treatment of a liver cancer cell line (HepG2) with rice bran extract obtained from black rice reversed the detrimental effects of chemically-induced oxidative stress. Further observations upon the treatment of the cells with the extract include the suppression of ROS generation, an increase in reduced glutathione content and the inhibition of lipid peroxidation [41], indicating an improvement in the antioxidant status of these cells after treatment. Furthermore, rice bran extracts were demonstrated to reduce oxidative stress in some non-cancer cell lines upon treatment. For example, fermented rice bran extract was shown to inhibit ROS generation in cultured adipocytes subjected to H2O2-induced oxidative stress [42]. Further, pre-treatment with rice bran extract on rats' heart cells exposed to H2O2 was also found to increase both the activity and the level of expression of antioxidant enzyme catalase [43]. Such studies confirm the above in vitro studies regarding the inhibitory effect of rice bran intake on ROS production in cancer cell lines. In addition, rice bran oil, extracted from rice bran, has been demonstrated to exhibit antioxidant potential in several in vivo studies. Administering rice bran oil to rats subjected to chemically-induced oxidative stress was found to reduce their level of lipid peroxidation. This was coupled with a slight increase in the activity of catalase, SOD and GPx, although this effect was not prominent [44]. Likewise, Riceberry bran oil, extracted from the bran of a new variety of Thai rice, was shown to exhibit similar effects on lipid peroxidation and antioxidant enzyme activity upon supplementation for diabetic rats on a high-fat diet [45]. In a study using rats treated with azoxymethane, a carcinogen known to induce colon cancer in animal models, administration of rice bran oil was shown to result in 50% higher GST activity in the rats' colons, when compared with the control rats [46]. Further, in streptozotocin-induced diabetic rats, supplementation of rice bran oil in their diet led to amelioration of mitochondrial DNA damage in the liver, kidney and pancreas of these rats, as evidenced by the reduction in the level of 8-oxodG present in these organs [47]. Taken together, the above studies suggest that rice bran intake elicits a protective effect against oxidative stress, and this effect is likely to be attributed to multiple mechanisms, including the modulation of antioxidant enzyme activity and expression, inhibition of oxidative damage to biomolecules and direct scavenging of ROS. A summary of the evidence demonstrating the anti-oxidative effects of rice bran intake is presented in Table 1.
Abbreviations: GPx, glutathione peroxidase; GST, glutathione-S-transferase; MDA, malondialdehyde; NADPH, reduced nicotinamide adenine dinucleotide phosphate; ROS, reactive oxygen species; SOD, superoxide dismutase.

The Anti-Oxidative Properties of Rice Bran Contribute to Cancer Chemo-Prevention

As discussed above, numerous studies have provided evidence for a link between oxidative stress and cancer initiation and progression, and potential mechanisms have been proposed to explain how ROS can be carcinogenic. With multiple studies pointing towards the potential of rice bran intake in the relief of oxidative stress, it is therefore tempting to speculate that the antioxidant properties of rice bran may have an inhibitory effect on tumour growth and cancer progression. Several studies have demonstrated that the antioxidative effects of rice bran intake may contribute to cancer prevention. An early study [48] revealed that supplementation with tocotrienol-rich fraction isolated from rice bran oil would reduce the level of lipid peroxidation and protein oxidation caused by exposure of rats to diethylnitrosamine (DEN)/2-acetylaminofluorene (AAF), a combination of chemicals used to induce stomach cancer. The effect was accompanied by a reduction in the activity of alkaline phosphatase, an enzymatic marker for gastrointestinal cancers [50], in the liver of these rats. Moreover, in mice bearing Ehrlich tumour cells, administering modified arabinoxylan rice bran was found to inhibit the growth of Ehrlich tumours. This inhibitory effect was accompanied by an improvement in the antioxidant status of these mice, including reduced lipid peroxidation, increased activity and expression of antioxidant enzymes and elevated glutathione levels [49]. These findings are further supported by the observation that brewers' rice, a rice product comprising broken rice, rice bran and rice germ, may also exhibit anticancer effects through its antioxidant properties. Tan et al. [51] showed that supplementation with brewers' rice would reverse the oxidative effects of azoxymethane treatment in rats. These effects include a decrease in SOD expression and an elevation in the levels of nitric oxide and malondialdehyde (MDA), a product of lipid peroxidation. This observation therefore suggests that brewers' rice supplementation would be likely to confer protection against azoxymethane-induced colon cancer in these rats through inhibition of oxidative stress. Interestingly, however, one study [52] suggested that the anticancer effect of the intake of rice bran was not attributable to the amelioration of oxidative stress, but rather to the inhibition of inflammation. The authors treated fibrosarcoma QR-32 cells with brown rice and rice bran fermented with Aspergillus oryzae, a supplement that had previously been shown to inhibit azoxymethane-induced colon carcinogenesis [53]. They found that such treatment reduced the infiltration of inflammatory cells into the tumours and inhibited the expression of pro-inflammatory genes, but failed to reduce the level of 8-oxodG formation. Although the reason for the discrepancy in findings between this study and others is unclear, it is possible that the antioxidative effects of the phytochemicals in the rice bran used in the study may have prevented the further oxidation of 8-oxodG, rather than the formation of 8-oxodG itself.
It was suggested that 8-oxodG is not the only product of DNA oxidation as a result of oxidative stress, as 8-oxodG is able to be further oxidised into spiroiminodihydantoin deoxyribonucleoside under such conditions [54]. A previous study also showed that incubation of DNA with more than 100 µM tea polyphenols, which were known to exhibit antioxidant properties, exhibited no effect on 8-oxodG levels in the presence of hypochlorous acid, a product formed under oxidative stress conditions [55]. The authors argued that the tea polyphenols, present in high concentrations, had directed their antioxidant action at preventing the conversion of 8-oxodG to spiroiminodihydantoin deoxyribonucleoside as a result of oxidative stress, rather than the formation of 8-oxodG. Therefore, the observations made by Onuma et al. [52] could be the result of inhibiting the further oxidation of 8-oxodG by the rice bran phytochemicals, and that is why they were not able to observe a decrease of 8-oxodG as a result of rice bran treatment. In any case, while it is true that rice bran intake may prevent carcinogenesis through anti-inflammatory mechanisms, it is not appropriate to rule out the role of oxidative stress modulation in rice bran's anticancer effect. In fact, the chemo-preventive properties of rice bran, as indicated in the above studies, could be largely attributed to the antioxidant effect of the aforementioned phytochemicals present in rice bran, including phytic acid and phenolic acids. To date, several in vivo studies have provided evidence for the role of these phytochemicals in the inhibition of gastrointestinal cancers through the modulation of oxidative stress. For example, phytic acid was shown to increase GST activity and reduce the level of lipid peroxidation in a rat model of hepato-carcinogenesis, and these effects were coupled with a decreased appearance of hepato-carcinogenesis markers [56]. This suggests that phytic acid may suppress liver tumourigenesis through amelioration of oxidative stress. In another study, the ability of phytic acid to increase GST activity, which resulted in a decrease in the number of colon tumours in a rat model of CRC, was also demonstrated [57]. In addition, an early study [58] showed that the administration of ferulic acid, a phenolic acid abundantly present in rice bran, to azoxymethane-treated rats could decrease the number of aberrant crypt foci (ACF), an early sign and a potential marker for the development of colon cancer [59]. This phenomenon may be related to the increase in GST activity in these treated rats, thereby demonstrating the role of antioxidant properties of ferulic acid in colon cancer prevention. Further, supplementation of p-methoxycinnamic acid, a rice bran phenolic acid, was found to reverse the decrease in the levels of both antioxidant enzymes and molecular antioxidants caused by the treatment of dimethylhydrazine, a chemical for the induction of colon cancer, in rats. These effects were also accompanied by a decrease in the formation of ACF in these rats [60]. Interestingly, an in vitro study by the same research group showed that p-methoxycinnamic acid treatment on a human colon adenocarcinoma cell line (HCT-116) would actually induce oxidative stress in these cells, demonstrated by an increase in ROS production and a concomitant decrease in the activity of antioxidant enzymes. 
The authors also postulated that these pro-oxidative effects of p-methoxycinnamic acid treatment could confer protection to these cells against cancer through apoptotic induction [61]. The reasons for the discrepancy in the effects of p-methoxycinnamic acid treatment shown by these two studies remain unclear. However, the difference in the models used (in vivo vs. in vitro) and the dosage of the p-methoxycinnamic acid used in the two studies could be contributory factors for this discrepancy. Nevertheless, these studies show that the antioxidative effects of rice bran are likely to be exerted by the phytochemicals known to exhibit antioxidant properties that are abundantly present in rice bran. These studies also provide evidence that the protective effect of these phytochemicals against gastrointestinal cancer is at least partly attributable to their antioxidant properties. A summary of these studies is presented in Table 2 (abbreviations: CRC, colorectal cancer; GPx, glutathione peroxidase; GST, glutathione-S-transferase; SOD, superoxide dismutase). Overall, the studies presented above suggested an involvement of the regulation of oxidative stress levels in the chemo-preventive effect of rice bran intake, likely due to the antioxidant properties of the phytochemicals present in rice bran. Further studies showing an association between the reduction of oxidative stress markers (such as lipid peroxidation, DNA oxidation and protein carbonyl formation) and inhibition of tumour growth in cellular and animal models of further types of gastrointestinal cancers, such as stomach and pancreatic cancers, would be useful in supporting this hypothesis.

Other Potential Oxidative-Stress-Related Mechanisms for the Chemo-Preventive Effect of Rice Bran Intake

With previous research demonstrating an implication of oxidative stress in carcinogenesis, research efforts have been directed to the elucidation of the mechanisms by which oxidative stress can lead to gastrointestinal cancer. Notably, oxidative stress was shown to be linked to a variety of molecular mechanisms or pathways whose aberrant activation would lead to cancer. These include inflammation and signalling pathways involving mitogen-activated protein kinase (MAPK), p53, β-catenin, Class O forkhead box protein (FOXO) and NF-κB [62,63]. It is therefore tempting to speculate that the intake of antioxidants that ameliorate oxidative stress and perhaps interfere with these pathways would have a protective effect against gastrointestinal cancers. As indicated above, the ability of the phytochemicals present in rice bran to modulate the level of oxidative stress would at least partly contribute to the anticancer effect of rice bran intake. However, previous studies suggest that these phytochemicals may also confer such anticancer effects through other mechanisms, including the reduction of inflammation and the regulation of β-catenin signalling, a pathway associated with cell proliferation and tumour metastasis. As mentioned, both β-catenin signalling and inflammation are linked to oxidative stress, leaving us with the question of whether the phytochemicals, which are antioxidants themselves, may confer their anticancer effects through these mechanisms using their antioxidant properties. In other words, would the modulation of oxidative stress by these phytochemicals be able to control aberrant β-catenin signalling and inflammation, the two known contributors to cancer?
This section of the review provides an overview of the studies that provide evidence suggesting the relationship between oxidative stress and β-catenin signalling and inflammation, and also the evidence for the effect of certain rice bran phytochemicals on the regulation of β-catenin signalling and inflammation. β-Catenin Signalling β-catenin is a protein that acts as a mediator of the effect of Wnt signalling, a signalling pathway known to contribute to cell proliferation and whose aberrant activation is implicated in cancer [64]. It forms part of a transcriptional complex that, when activated, would lead to the expression of genes involved in cell survival and proliferation (cyclins, c-myc and survivin) and tumour metastasis (metalloprotease), in the nucleus of cells. Indeed, multiple lines of evidence suggest that oxidative stress can lead to increased β-catenin activity. Increased activation of NADPH oxidase I (NOX1), a superoxide-producing enzyme known to be associated with colon cancer [65], has been shown to inhibit the degradation of β-catenin via a decrease in its phosphorylation by glycogen synthase kinase 3 (GSK3). On the contrary, knockout of NOX1 would result in a loss of β-catenin activity and a reduction in cyclin D1 expression [66]. These findings therefore demonstrate that increased superoxide production, causing oxidative stress, would promote β-catenin-mediated gene expression. In addition, in a renal carcinoma rat model, Liu et al. [67] showed that chronic oxidative stress can activate β-catenin signalling, leading to an increased expression of the target genes of β-catenin such as cyclin D1. Further, treatment of a human colorectal cancer cell line (SNU-407) with H 2 O 2 was found to increase the activity of Akt, a potential upstream regulator of β-catenin [68], leading to an increased nuclear localisation of β-catenin and increased expression of cyclin D1 [69]. Consistent with these findings, antioxidant treatment on cells was shown to inhibit Wnt/β-catenin signalling through the suppression of β-catenin dephosphorylation, thereby promoting the degradation of β-catenin in the proteasome and inhibiting β-catenin-mediated gene expression [70]. Taken together, oxidative stress can promote cell proliferation through the increased activity of β-catenin and the resulting increased expression of pro-proliferative genes. Several studies involving cancer cell lines have provided evidence that certain phytochemicals present in rice bran would help suppress cell proliferation through their interference in Wnt/β-catenin signalling, primarily through a reduction in β-catenin expression. γ-Tocotrienol and δ-tocotrienol, both of which are vitamin E derivatives, were found to reduce the level of expression and nuclear localisation of β-catenin in colon cancer cell lines, leading to reduced expression of cyclin D1, c-myc and survivin in these cells [71,72]. Moreover, Rajendran et al. [73] also demonstrated in hepatocellular carcinoma cell lines that γ-tocotrienol treatment could cause a down-regulation of the activity and expression of signal transducer and activator of transcription 3 (STAT3), a transcription factor shown to be activated by Wnt/β-catenin signalling [74], leading to the reduced expression of pro-proliferative genes. 
Consistent with these findings, another study showed that tocotrienol-rich fraction extracted from palm oil could reduce β-catenin expression and cyclin D1 and survivin expression in colon cancer xenografts inoculated in mice [75], thereby demonstrating the ability of tocotrienols to inhibit cell proliferation through the down-regulation of the Wnt/β-catenin pathway. In addition, tricin, a bioflavonoid abundantly present in rice bran, was also shown in colon cancer cells to reduce β-catenin levels, thereby reducing the expression of markers for cancer stem cells that may promote tumour growth [76]. Likewise, phytic acid administration among rat models of CRC was also found to decrease β-catenin expression, an effect shown in separate studies to be accompanied by a decreased expression of Ki67, a marker for cell proliferation [77], and decreased formation of aberrant crypt foci and incidence of colon tumours [78]. These studies indicate that rice bran phytochemicals may interfere with Wnt/β-catenin signalling via a reduction in β-catenin expression, which results in a decrease in the proliferative ability of cells owing to the inhibition of β-catenin-mediated expression of genes implicated in cell proliferation. Moreover, Ahmed et al. [79] found that γ-tocotrienol can reverse epithelial-to-mesenchymal transition of tumours, a process implicated in the initiation of tumour metastasis, through the inhibition of β-catenin signalling, although the study involved the use of breast cancer cells. This study suggests that certain rice bran phytochemicals may also prevent tumour metastasis and cancer progression through the suppression of β-catenin signalling, owing to the ability of β-catenin to induce the expression of pro-metastatic genes such as metalloproteases, although further studies using gastrointestinal cancer cell lines or animal models are required to confirm this. A depiction of the relationship between oxidative stress and β-catenin signalling, and the effect of rice bran phytochemicals on β-catenin signalling, is presented in Figure 3.
Figure 3. The relationship between oxidative stress and β-catenin signalling. Oxidative stress can lead to activation of β-catenin signalling. This can be achieved through the ability of H2O2 to activate Akt, leading to the increased nuclear localisation of β-catenin, or the ability of superoxide to inhibit the degradation of β-catenin, causing increased expression of pro-cancerous genes such as cyclin D1, c-myc and survivin. On the contrary, rice bran phytochemicals such as phytic acid, tricin and tocotrienols can inhibit β-catenin signalling, largely through the inhibition of β-catenin expression and nuclear localisation. These phytochemicals therefore exhibit cancer chemo-preventive properties by preventing β-catenin from activating the expression of pro-cancerous genes. In the figure, arrows indicate 'promotion' or 'lead to', and bar-headed lines indicate 'inhibition'.

Inflammation

Inflammation has long been considered to contribute to tumour progression [80]. Previous reviews have also suggested the involvement of inflammation in a variety of gastrointestinal cancers [81-84], therefore indicating the potential of anti-inflammatory strategies in the prevention of gastrointestinal cancers. In fact, previous studies have generated ample evidence for the inter-relationship between oxidative stress and inflammation. Oxidative stress was shown to be implicated in inflammatory bowel diseases (IBD), which are characterised by inflammation in the gastrointestinal tract [85]. This observation is further demonstrated by studies showing that increased levels of 8-oxodG and protein carbonyls were found in patients with Crohn's disease and ulcerative colitis, both of which are major types of IBD [13,86]. Moreover, ROS had previously been demonstrated to be a contributor to the inflammatory process. For example, hydroxyl radical and superoxide may interact with lipids present in cell membranes and form 4-hydroxynonenal (4-HNE), a molecule that promotes inflammation by provoking the release of pro-inflammatory mediators from immune cells [87]. H2O2 may also contribute to inflammation through its ability to activate NF-κB [88], a transcription factor that has long been considered to play a role in intestinal inflammation through mediating the expression of inflammatory cytokines such as interleukin-6 and TNF-α [89,90].
Moreover, mice deficient in nuclear factor erythroid 2-related factor 2 (Nrf2), a transcription factor responsible for the expression of antioxidant genes [91], would be more susceptible to the development of intestinal inflammation under the effect of dextran sulphate sodium [92], a drug commonly used to induce colitis in animal models. This finding further supports the involvement of oxidative stress in intestinal inflammation. While oxidative stress can lead to inflammation, an increase in the condition could also play a role in exacerbating oxidative stress. Immune cells such as neutrophils, a major player in inflammation, would express NADPH oxidase, which can produce superoxide [93]. Once activated, the increased superoxide production by these neutrophils would further worsen the local oxidative status, thereby forming a vicious cycle of oxidative stress and inflammation. The bi-directional exacerbating effect between oxidative stress and inflammation may perhaps explain the involvement and coexistence of both conditions in gastrointestinal diseases, such as CRC. Overall, oxidative stress can potentially contribute to inflammation, largely through the effect of ROS on promoting the activity of immune cells in producing and releasing pro-inflammatory mediators. Moreover, oxidative stress and inflammation are liable to exacerbate each other in inflammation-based diseases, providing further evidence for the inter-relationship between these two molecular events. The anti-oxidative phytochemicals present in rice bran have also been shown in a number of studies to possess anti-inflammatory properties. Dietary supplementation of γ-oryzanol, an antioxidant in rice bran with potent radical scavenging activity, was found to reduce inflammation in rats by suppressing the production of pro-inflammatory cytokines released by activated macrophages [94]. In a mouse model of colon cancer, tricin supplementation was demonstrated to lower the expression of the pro-inflammatory tumour necrosis factor-α in the colonic mucosa of the mice [95]. Further, in studies involving the use of human colon cell lines, tricin treatment was shown to inhibit the activity of cyclo-oxygenase, an enzyme responsible for the generation of pro-inflammatory mediators such as prostaglandins [96,97], as well as the inhibition of the production of interleukin-6 [98]. Phytic acid treatment of colon cancer cell lines was also shown in separate studies to reduce the expression of IL-6 [99] and TNF-α [100], together with their receptors, in these cells. Such observations were also confirmed in vivo among rats fed with a high-fat diet [101]. In addition to the above effects, phytic acid treatment of cells was also found to increase the expression of I-κB [102], an inhibitor of NF-κB, which was indicated above to be a mediator of both inflammation and cell proliferation. More recently, a study also showed that phytic acid treatment of Caco-2 cells was able to reverse the pro-inflammatory effects of the treatment with inflammatory mediators through the down-regulation of the expression of inducible nitric oxide synthase (iNOS) [103], an enzyme whose over-expression was observed in patients with IBD [104]. Furthermore, in vitro γ-tocotrienol treatment was shown to inhibit the activity of NF-κB, leading to a decrease in the expression of pro-inflammatory and pro-proliferative genes, including cyclo-oxygenase 2, cyclin D1 and c-myc [105,106]. 
The above observations indicate that phytochemicals present in rice bran appear to play an important role in the reduction of inflammation, primarily through the down-regulation of pro-inflammatory gene expression. A schematic depiction of how oxidative stress interacts with inflammation, and how rice bran phytochemicals may inhibit inflammation, is shown in Figure 4. Taken together, the phytochemicals present in rice bran can exhibit a chemo-preventive effect not only through the regulation of oxidative stress, but also by interfering with other pro-tumourigenic cellular mechanisms that were shown to be related to oxidative stress, such as the β-catenin-mediated expression of pro-proliferative genes and inflammation. Further studies are likely to be valuable in elucidating whether these phytochemicals exhibit their interference in these cellular mechanisms through their inherent antioxidative properties. Such studies might provide further evidence for the role of the reduction of oxidative stress in cancer chemoprevention.

Figure 4. The relationship between oxidative stress and inflammation. ROS such as hydroxyl radical and superoxide anion may cause increased lipid peroxidation, leading to the formation of 4-HNE, which may promote the release of pro-inflammatory cytokines. Likewise, H2O2 may activate NF-κB, a transcription factor that mediates the expression of pro-inflammatory cytokines. This leads to an increased level of inflammation, which has been demonstrated to be implicated in carcinogenesis. Moreover, during inflammation, neutrophils would step up the production of superoxide anion by NADPH oxidase. The increased superoxide production would further exacerbate oxidative stress, thereby forming a vicious cycle. On the other hand, rice bran phytochemicals were shown to ameliorate inflammation, either through the inhibition of NF-κB activity or the reduction of pro-inflammatory gene expression. In the figure, arrows indicate 'promotion' or 'lead to', and bar-headed lines indicate 'inhibition'.

Future Research Directions

To date, the majority of studies investigating the antioxidative and anticancer effects of rice bran and its phytochemicals have mainly involved the use of cancer cell lines and animal models of cancer. However, studies that look at the effect of rice bran intake among humans on the relief of oxidative stress are currently lacking. As indicated earlier, oxidative stress is implicated in the pathogenesis of certain gastrointestinal cancers, and increased oxidative stress has been observed in gastrointestinal cancer patients. It is therefore tempting to speculate that the amelioration of oxidative stress could potentially be effective in the inhibition of gastrointestinal tumour growth, and therefore the prevention of cancer recurrence among gastrointestinal cancer survivors.
With rice bran containing abundant phytochemicals that exhibit antioxidant properties, research on the use of rice bran dietary interventions among gastrointestinal cancer survivors would be of great value. As suggested by Galadari et al. [15], dietary antioxidants are likely to be poorly absorbed and quickly metabolised, raising the question of what dosage of rice bran needs to be ingested for the phytochemicals it contains to achieve a maximum antioxidative effect. Therefore, in studies involving rice bran dietary interventions, the required dosage of rice bran for the optimal antioxidative effect would first need to be established among gastrointestinal cancer survivors, by monitoring the changes in the biomarkers for oxidative stress before and after the intervention.
These biomarkers may include serum levels of thiobarbituric acid reactive substances and 8-oxodG, both of which are oxidation products of bio-molecules generated as a result of oxidative stress. Rice bran dietary interventions using the optimal dosage determined may then be performed to establish whether they confer an inhibitory effect on tumour progression, through the measurement of various tumour markers at different time points during the intervention, such as carcino-embryonic antigen [107]. Such studies would provide further evidence for the anticancer effect of rice bran through the reduction of oxidative stress, and establish the potential for the use of rice bran dietary interventions in cases of gastrointestinal cancers recurring among the cancer survivors.

Conclusions

Rice bran is a dietary supplement that is known to contain abundant phytochemicals with potent antioxidant properties. Evidence has been emerging from previous in vivo and in vitro studies that these phytochemicals can inhibit the growth of gastrointestinal tumours, potentially through the amelioration of oxidative stress, the inhibition of cell proliferation and the reduction of inflammation. Such data would provide useful evidence for making a case for investigating the potential of rice bran intake for gastrointestinal cancer prevention in humans. Nevertheless, few studies to date have looked at the effects of rice bran intake in ameliorating oxidative stress and preventing tumour growth in the human gastrointestinal tract. Future research efforts should therefore be directed towards the development of effective rice bran dietary interventions, and the assessment of their effectiveness in reducing the presence of biomarkers indicative of both oxidative stress and tumour development among gastrointestinal cancer survivors. Such studies may not only provide further evidence for the effect of rice bran intake in gastrointestinal cancer prevention through the modulation of oxidative stress, but also establish the potential for using rice bran dietary interventions in inhibiting the recurrence of gastrointestinal cancer among survivors of the disease.
The Construction of the Malaysian Educators Selection Inventory (MEdSI) for the Selection of Bachelor of Education Students in Public University in Malaysia

By Sidek

As a small and developing country, education has long been a very important part of the nation's social development. Before Independence, the education system was used to further separate the different ethnic and social groups. After independence, the system has become an important tool in uniting these groups and reducing the differences between the various races that make up Malaysia's diverse population. As Malaysia is poised to become a fully developed nation by 2020, education is again at the forefront. The development of a nation depends on the development of its people, so the role that schools and teachers play in this early phase of human resource development is very important. The best policies and curricula can be drawn up, but unless the best teachers are recruited, it may all come to nothing. Therefore, in order to make the best possible use of the expertise and facilities that already exist in the various teacher training programmes at public universities, new policies have been drawn up for the process of selecting teacher trainees. In addition to meeting scholastic requirements and passing the interview process, candidates are now also screened with a psychometric test, the Malaysian Educators Selection Inventory (MEdSI).

INTRODUCTION

Given the large number of applicants who meet this academic criterion, and as interviewing is a very laborious and time-consuming process, another screening test is deemed necessary before the applicants are short-listed for the interview. But what kind of screening test? Academic qualifications alone are not a guarantee of a capable teacher. Thus, we fall back on personality and career aptitude frameworks in designing the instrument. The objective is not simply to decrease the number of candidates to be interviewed. We want an instrument that will really filter out less suitable teacher trainee candidates, one that is not only based on a sound theoretical background but that also has the psychometric properties of a valid and reliable instrument.

THEORETICAL BACKGROUND

The match between a person's characteristics and his career is essential to the person's motivation, work satisfaction, achievement, productivity and stability in his career (Holland, 1998). This is also similar to Parson's (1908) view that a career well-chosen is like fitting a square peg into a square hole and a round peg into a round hole, and not the other way around. This essentially means there is a need to choose the right person for the right job, and on this premise, our teacher trainee candidates have to have some intrinsic qualities in them that, with the training given, will make them into very capable teachers. The intrinsic qualities that we measured in MEdSI fall under 4 components: Personality, Career Aptitude, Integrity and Emotional Intelligence. Incompatibility between personality, interest and career choice may be manifested not just in unsatisfactory performance during training, but it may also be shown on the job through excessive sick leave taken, job truancy, insufficient commitment, etc. (Carmeli & Gefan, 2005). Various personality and aptitude tests have been used in hiring and also in student admission processes. What is common among these tests is the concept of Person-Environment Fit (P-E Fit) (Sekiguchi, 2004).
Instead of taking any one of these tests wholesale, it was decided that the cultural differences between Malaysia and the United States, plus the societal needs of Malaysia, are pertinent enough to mandate a home-grown instrument, while still relying on well-established psychological and psychometric principles. The Integrity Scale is a unique contribution to this teacher trainee selection process, as it takes into account the current problems that have plagued Malaysian schools vis-à-vis the teachers. The items were basically designed to discriminate between those who think teaching is an easy half-day job and those who have a strong interest in teaching. Integrity here subsumes positive values that can be measured within the confines of knowledge, behavior and attitude. Integrity is a projection of teacher professionalism regarding discipline, diligence, responsibility, optimism, leadership, patience and perseverance, creativity and innovation. Emotional Intelligence is considered a necessary component of a teacher's persona, as teachers are constantly dealing with students. Teachers who can understand and manage their emotional life and the emotions of their students are better at handling conflicts that arise in classrooms. Teachers who are competent at solving such problems and demonstrate good planning skills while managing daily tasks are generally more effective in classroom management.

ADMINISTRATION OF MEdSI

MEdSI is a 300-item instrument designed to capture the 4 intrinsic qualities described above, i.e., Personality, Career Interest, Integrity and Emotional Intelligence. Given the number of applicants and the high-stakes nature of the assessment, ease of administration and scoring is also important. Hence, MEdSI is a paper-and-pencil multiple-choice test with a time limit of 60 minutes.

PSYCHOMETRIC PROPERTIES: RELIABILITY AND VALIDITY

The finalized version was administered to 1,000 students currently undertaking education courses in 3 universities to establish reliability and norms. The data presented here came from this norm group.

PERSONALITY

In MEdSI, the personality items are presented as yes-no statements. The applicant is required to answer whether or not the item describes them. Included in the Personality dimension is a Lie Scale. This Lie Scale is very important to the entire test, as applicants who fail the Lie Scale (a minimum of 4 'yes' answers out of the 10 items) are discarded from the pool of applicants. The Lie Scale is made up of items to which applicants should not be expected to truthfully answer 'yes', but which they might answer 'yes' in the hope of appearing good. An example of a Lie Scale item is: 'I never disagree with my parents'. The personality model is taken from Cattell. Apart from the Lie Scale, the subscales are Assertive, Analytical, Autonomous, Extrovert, Intellectual, Resistance, Self-Criticism, Leadership, Helping and Achievement. Factor analysis was done on all 11 sub-scales of the Personality dimension of MEdSI to determine the number of factors produced. For this purpose, the 11 x 11 correlation matrix was analyzed using varimax rotation. Using this approach, the factors retained for further processing are those that account for a high percentage of the variance and have an eigenvalue of more than one. In terms of variance, Edwards and Whitney (1972) have proposed a cumulative variance of 75% or higher. Results show that the factor analysis produced seven factors (see Table 2). These seven main factors with eigenvalues of more than one comprise four single factors and three combined factors. An inspection of the factors shows that Factor 1 has 2 variables with loadings of more than .50, i.e., Intellectual (.76) and Analytical (.66), while Factor 3 has 2 variables with loadings of more than .50, i.e., Extrovert (.56) and Helping (.57).
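As an illustration of the retention rule just described (keep factors whose eigenvalue exceeds one, then inspect loadings above .50), the computation can be sketched as follows. This is a minimal sketch using randomly generated placeholder scores for the 11 subscales rather than the actual MEdSI norm data, and it stops at the unrotated loadings; the varimax rotation reported in the study would then be applied to these loadings with a dedicated factor-analysis package.

```python
import numpy as np

# Randomly generated stand-ins for the respondent-by-subscale score matrix;
# the real MEdSI norm-group data are not reproduced here.
rng = np.random.default_rng(0)
n_respondents, n_subscales = 1000, 11
scores = rng.normal(size=(n_respondents, n_subscales))

# 11 x 11 correlation matrix of the subscales.
R = np.corrcoef(scores, rowvar=False)

# Eigen-decomposition of the correlation matrix, sorted in descending order.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Kaiser criterion: retain factors whose eigenvalue exceeds one.
n_factors = int(np.sum(eigvals > 1.0))

# Unrotated loadings for the retained factors (eigenvector scaled by sqrt(eigenvalue));
# a varimax rotation would normally be applied to these before interpretation.
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

print("retained factors:", n_factors)
print("proportion of variance explained:", eigvals[:n_factors].sum() / n_subscales)
```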
CAREER INTEREST

Career interest is measured through a yes-no checklist of interests pertinent to each particular construct. Factor analysis was done on the 6 subscales in the Career Interest dimension of MEdSI. The 6 x 6 correlation matrix with varimax rotation was used. The results are displayed in Table 3. Note: h2 = communality; * = variables with loading > .50. An examination of Factor 1 revealed that only 2 of the variables have loadings of more than .50, i.e., Realistic (.83) and Investigative (.68). Factor 2 has 2 variables with loadings of more than .50, i.e., Conventional (.75) and Enterprising (.64), while Factor 3 has 2 variables with loadings of more than .50, i.e., Social (.63) and Enterprising (.64). Although the result differs from Holland's (1997) finding of 6 separate factors, it is still compatible with his theoretical leanings.

INTEGRITY

The Integrity Scale is made up of 3 sub-scales, i.e., Trustworthiness, Honesty and Wisdom. Table 4 shows the reliability coefficient of each of these subscales as measured through the Cronbach alpha.

EMOTIONAL INTELLIGENCE

The internal reliability of the Emotional Intelligence subscales was assessed, and the results, as measured through the Cronbach alpha, are shown in Table 5. Table 6 to Table 9 display the means, standard deviations and the 50th percentile score for all the dimensions of MEdSI.

CONCLUSION

In general, the MEdSI instrument constructed has been able to achieve its aims. The reliability and validity analyses show that the psychometric properties of good measurement are not compromised even with a large-scale and high-stakes instrument such as this. Follow-up studies of teacher trainee cohorts who were given admittance to teacher training programmes in the various Malaysian public universities showed encouraging results. Preliminary observations from several universities indicated that the new cohorts demonstrated a more committed personality and seemed to be more motivated compared with previous cohorts who did not undergo MEdSI. Since this instrument is being used by the Ministry of Higher Education of Malaysia in its goal of selecting better qualified and more suitable teacher candidates for entry to Malaysian public universities, its importance, relevance and usefulness need to be addressed. Its psychometric properties have so far been shown to be strong, with high validity and reliability. The team of researchers that constructed MEdSI is now conducting further work, building newer and better items for future cohorts and continually trying to further improve and validate the instrument.
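For reference, the reliability coefficients reported for the Integrity and Emotional Intelligence subscales (Tables 4 and 5) are Cronbach alpha values, which can be computed directly from an item-response matrix. The sketch below again uses randomly generated placeholder responses standing in for the MEdSI data, and the 10-item subscale size is illustrative only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Placeholder yes/no (0/1) responses standing in for one hypothetical 10-item subscale.
rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(1000, 10))

print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```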
Predicting spatial distribution of Asian Horned Frog (Megophrys montana Kuhl & Van Hasselt 1882) in Java Island using citizen science's data

Citizen science is a tool that has been used globally to gather data on species, and recently this effort has been gaining popularity in Indonesia. The Asian horned frog (Megophrys montana) is an amphibian endemic to Java whose population is declining significantly due to forest or habitat losses. The purpose of this study is to analyze habitat suitability and to estimate the potential habitat of Megophrys montana in Java using maximum entropy (MaxEnt). Ninety-four coordinates from iNaturalist, a citizen science app, were used in the modeling, along with altitude, slope, rainfall, distance from rivers, Normalized Difference Vegetation Index (NDVI), and land cover categories. The Megophrys montana habitat suitability model produced excellent accuracy, with an AUC value of 0.962. Altitude, rainfall, and slope were the most important environmental variables affecting the suitability of the species' habitats. The characteristics of Megophrys montana habitat in Java are mountainous forest, high rainfall, primary or secondary forest with steep slopes, and proximity to rivers. West Java and Banten are the provinces with the most suitable areas for its habitat, especially within conservation areas, i.e. Mount Halimun Salak and Mount Gede Pangrango National Parks.

Introduction

Habitat quality affects the distribution, population, and survival ability of certain species [1]. Forest, home to much wildlife, is rapidly declining, especially in Java, where rapid development has reduced the forest cover [2]. However, small pockets of suitable habitat for amphibians are still available, including habitat for the endemic Javanese amphibian Megophrys montana. As this species is a forest specialist, its population trend is decreasing due to habitat degradation, i.e. land-use change and pollution [3]. Currently, biodiversity data collection projects conducted by amateur naturalists, or citizen science, have gained popularity. Citizen science data could be a promising source of information to fill the information gaps needed to model a species distribution [4]. One of the citizen science projects available in Indonesia is Amfibi Reptil Kita (ARK), which uses iNaturalist as a platform to collect data. Using the data available, it is possible to develop a habitat suitability model for Megophrys montana, to analyze the suitability of the habitat, and to predict the potential habitat of Megophrys montana in Java. This information can be used as a management basis for the conservation of Megophrys montana in the future.

Methods

This research was conducted from July to August 2020 at the Laboratory of Environmental Analysis and Geo-Spatial Modeling, Department of Forest Resources Conservation and Ecotourism, Faculty of Forestry and Environment, IPB University. Secondary data were used for the analyses, i.e. Megophrys montana presence coordinates and environmental variables predicted to be relevant to the presence of the frog, i.e. altitude, slope, rainfall, NDVI, land cover, and distance from rivers. The encounter coordinates were obtained from the citizen science platform iNaturalist. This study uses research-grade data with open, obscured, and private geoprivacy. Obscured and private data were obtained from the GO-ARK project (Gerakan Observasi Amfibi Reptil Kita) [5].
The number of Megophrys montana records in iNaturalist is 130 points, consisting of GO-ARK project data (81 points) and data from outside the project (49 points). However, only 94 locality coordinates were generated (figure 1). The data came from observations by volunteers from April 2012 to August 2020. The modeling process in MaxEnt uses 75% of the data for model training and 25% for testing model accuracy [6]. Frog presence data are converted to a comma-separated values (csv) file, while the environmental variables are converted into ASCII grids (asc). In addition, the environmental variables used must have the same coordinate system, outer boundary, and cell size. This study uses the WGS 1984 coordinate system with a cell size of 1 km². The first data analysis is the multicollinearity test to determine the relationship between the variables [7]. The multicollinearity test is performed using band collection statistics in ArcMap; a correlation value less than or equal to -0.75 or greater than or equal to +0.75 indicates multicollinearity. After the multicollinearity test, overlay analysis is used to determine the overlap of the Megophrys montana encounter points with the environmental variables used; the overlay analysis uses intersect in ArcMap. MaxEnt calculations result in habitat suitability values ranging from 0 to 1 [8], divided into four suitability classes, i.e., not suitable (0-0.3), low (0.3-0.4), medium (0.4-0.6) and high (0.6-1.0) [9]. A good model can be defined by a curve that maximizes specificity; this condition can be measured using the Area Under Curve (AUC) value [10]. The accuracy of the model based on AUC values is divided into poor (0.6-0.7), medium (0.7-0.8), good (0.8-0.9), and excellent (0.9-1.0) [11]. The percentage of sample data used to test the accuracy of the model is 25%.

iNaturalist coordinates and environmental variables

In general, Megophrys montana is found in mountainous areas at elevations above 500 m asl. However, there are four outlier encounter points in lowlands (0-300 m asl), located in residential areas, rice fields, and ponds. We considered these data as spatial bias, because citizen science data are collected flexibly with varying observer expertise, across a wide range of spatial resolutions and with sampling effort that is often not standardized [12]. This bias has the potential to inhibit the ability of citizen science data to be fully used in building habitat models [13,14], thus we omitted these four records from the modeling process. Spatial data processing produced maps of the environmental variables used in modeling, i.e. maps of altitude, slope, rainfall, NDVI, land cover, and distance from rivers (figure 2). These are the variables considered to affect the presence of Megophrys montana on the island of Java. Java Island has altitudes from 0 to 3,636 m asl, distributed almost evenly across every province in Java. The slope variable varies from 0 to 163.59%. The mountainous areas in Java tend to be dominated by moderate slopes, especially in the western part. Rainfall ranges from 1,055 to 4,417 mm/year; eastern parts of Java have lower rainfall than the western and central parts. NDVI values vary from -0.6 to 0.83. The land cover classification consists of 19 types, but only 5 types are used in the modeling, i.e. shrubs, secondary forest, primary forest, plantation forest, and agriculture; the selection of these land cover types is based on the classes with the greatest influence in the modeling. The distance from rivers ranges up to 1.18 km, showing the importance of river presence to the habitat of Megophrys montana.
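The multicollinearity screen described in the Methods above (a pair of variables with a correlation at or beyond ±0.75 is treated as collinear, and one member of the pair is dropped) can also be reproduced outside ArcMap. A minimal sketch is given below, assuming the environmental layers have already been resampled to a common 1 km grid; the arrays are random placeholders, not the actual rasters, which in practice would be loaded from the ASCII grids with a raster library.

```python
import itertools
import numpy as np

# Placeholder arrays standing in for the environmental rasters on a common grid.
rng = np.random.default_rng(42)
shape = (500, 300)
layers = {
    "altitude": rng.random(shape),
    "slope": rng.random(shape),
    "rainfall": rng.random(shape),
    "ndvi": rng.random(shape),
    "dist_river": rng.random(shape),
    "temperature": rng.random(shape),
}

THRESHOLD = 0.75  # |r| at or above this value is treated as multicollinearity

# Pairwise Pearson correlations between the flattened layers (the equivalent of
# ArcMap's band collection statistics correlation matrix).
flagged = []
for (name_a, a), (name_b, b) in itertools.combinations(layers.items(), 2):
    r = np.corrcoef(a.ravel(), b.ravel())[0, 1]
    if abs(r) >= THRESHOLD:
        flagged.append((name_a, name_b, r))

# Each flagged pair would lead to one of the two variables being excluded,
# as was done for temperature in this study.
for name_a, name_b, r in flagged:
    print(f"collinear pair: {name_a} vs {name_b}, r = {r:.3f}")
```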
The distance-from-rivers variable ranges up to 1.18 km, reflecting the importance of the presence of rivers to the habitat of Megophrys montana. Multicollinearity test Temperature and altitude are strongly cross-correlated, with a multicollinearity value of -0.97527 (table 1; variable codes in the table: ... (3), landcover (4), slope (5), temperature (6), and distance from rivers (7)). Therefore, temperature was not included in the modeling, because altitude is the variable with the greater effect. Accuracy of the model The Megophrys montana habitat suitability model produced an AUC value of 0.962 with a standard deviation of 0.009 (figure 3), indicating that the accuracy of the model is excellent [11]. The effect of environmental variables on modeling Based on the jackknife curves, the variables with the greatest effect are, in order, altitude, rainfall, and slope (figure 4). Altitude is the most important variable in the modeling process because it contributes the most when used together with the other variables, and omitting it reduces the model gain the most. This indicates that Megophrys montana is very sensitive to changes in altitude. Megophrys montana is found at higher elevations, and the species' name itself reflects its mountainous habitat [15,16]. Figure 4. Jackknife test results of the modeling. The effect of environmental variables on the probability of Megophrys montana presence The effect of environmental variables on the probability of Megophrys montana presence is illustrated by the independent response curves in figure 5. The altitude response curve shows that the probability of Megophrys montana presence increases over the range of 0 to approximately 1,800 masl and declines where altitude exceeds approximately 1,800 masl. Differences in altitude cause differences in climate, especially in temperature, rainfall, and humidity: increasing altitude decreases air temperature and increases rainfall. Habitats with low air temperatures are preferred by amphibians because they can retain moisture on their skin [17]. Air temperature in Indonesia decreases by 5-6 °C for each additional 1,000 masl [18]. However, altitude can also act as a natural barrier for amphibians, and thus influences community structure, including density [19]. This is why the probability of Megophrys montana presence is lower where altitude is above approximately 1,800 masl. The rainfall response curve shows that the probability of Megophrys montana presence increases when rainfall is in the range of about 1,000 to 4,000 mm/year, with the lowest probability where rainfall exceeds about 4,400 mm/year. Areas with low air and surface temperatures are suitable habitats for amphibians because the animals can avoid water loss through the surface of the skin [20]. The slope response curve shows that the probability of Megophrys montana presence increases with increasing slope. Areas with steep slopes may coincide with protected areas that retain good vegetation cover and thus provide sustainable benefits [21]. However, slope can also limit the presence of amphibians because it can hinder movement [22]. The distance-from-rivers response curve shows that the probability of Megophrys montana presence decreases with increasing distance from rivers. This indicates that a water source is important for Megophrys montana, as amphibians are closely associated with water [16]. The presence of water keeps the surrounding area humid through evapotranspiration [23].
This condition keeps amphibian skin moist and helps with the exchange of air and water through the skin, which is beneficial to amphibians because they need sufficient moisture to protect themselves from drying out [16]. The response curve shows that the probability of Megophrys montana presence increases with increasing NDVI. The NDVI response curve decreases from negative values toward 0, increases when NDVI ≥ 0, and rises markedly in the range +0.4 to +0.7. The overlay results likewise show an increase in presence as the NDVI value becomes larger. This reflects the positive relationship between NDVI and vegetation cover: the higher the NDVI, the denser the vegetation. Vegetation can reduce air and surface temperatures because the canopy reduces the intensity of solar radiation, forming a microclimate with high humidity [18]. In addition, humidity affects the presence of amphibians because their mucus glands function to keep the skin moist [24]. The land cover data used consist of five types, i.e. shrubs, primary forest, secondary forest, plantation forest, and agriculture. The land cover response curves show that the highest probability of Megophrys montana presence is in primary and secondary forests (see figure 6). This result is consistent with previous reports that Megophrys montana occupies the litter of both primary and secondary forest floors [15,16]. Primary and secondary forests have dense vegetation cover, and the forest floor is covered with leaf litter. Asian horned frogs are also rarely found on disturbed land such as plantation forests and agriculture [15]. Potential habitat for Megophrys montana on the island of Java Areas with high suitability for Megophrys montana lie in highlands with high rainfall, steep slopes, and primary or secondary forest cover (figure 7). In general, areas with high suitability are mountainous areas. This result is consistent with the etymology of Megophrys montana, which refers to its habitat in mountainous areas [16]. In addition, the results of the jackknife test showed that Megophrys montana is sensitive to altitude changes. High suitability habitat in Java covers 2,913 km 2 , or 1.89% of the total area of the island. The provinces with the largest areas of high habitat suitability are West Java (1,566.96 km 2 ), followed by Central Java (626.97 km 2 ) and Banten (139.03 km 2 ). Although East Java is the largest province on the island, it has only 30.20 km 2 of high suitability area, or 0.06% of its total area. One factor that might explain this small area is East Java's relatively dry climate compared with West Java: the annual rainfall of West Java in 2019 was 3,555 mm [25], while that of East Java was 1,727 mm [26]. Another contributing factor may be the scarcity of Megophrys montana presence data from the eastern part of Java; there is only one presence coordinate in East Java, a record with private geoprivacy located in Batu, Malang. Locations with a high percentage of suitable area lie in the western part of Java, i.e. West Java and Banten, with the exception of DKI Jakarta, a densely built-up city that is incompatible with Megophrys montana habitat. Geographically, West Java has complex natural conditions and geological structures, with mountainous areas in its central and southern parts that have an average height of 1,000 masl and a maximum elevation of 3,078 masl.
The province also has forested areas, both primary and secondary, that serve as conservation forest, protection forest, and production forest, covering areas of 695 and 658.9 hectares. Rainfall ranges from 1,000-4,000 mm/year with average temperatures of 16-34 °C. In addition, there are 41 river basins with a surface water discharge of 81 billion m 3 /year [27]. Conservation areas with high suitability for Megophrys montana habitat cover 900.43 km 2 , or 18.42% of the total conservation area of Java. The western part of Java holds the most suitable habitat for Megophrys montana within conservation areas. This indicates that the potential for protecting Megophrys montana habitat is greatest in the western part of Java, especially in the Mount Halimun Salak and Mount Gede Pangrango National Parks. Areas with high suitability in Central Java, the largest of which is Mount Slamet, are not conservation areas. Moreover, only 30.9% of the area of high habitat suitability in Java falls within conservation areas. Conservation area managers need to pay attention to protecting Megophrys montana in the future, as the species occurs only in Java.
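The area figures quoted above (island-wide and provincial high-suitability areas, and the share falling inside conservation areas) follow from reclassifying the continuous MaxEnt surface into the four suitability classes and counting 1-km cells. A minimal sketch of that tally is given below; it assumes the suitability raster and a conservation-area mask have already been loaded as NumPy arrays on the same grid, and all names and toy values are illustrative only.

import numpy as np

# Suitability classes used in the study.
CLASSES = {"unsuitable": (0.0, 0.3), "low": (0.3, 0.4),
           "medium": (0.4, 0.6), "high": (0.6, 1.0)}

def class_areas(suitability, cell_area_km2=1.0, mask=None):
    """Area (km^2) per suitability class.
    suitability: 2-D array of MaxEnt scores in [0, 1], NaN for NoData cells.
    mask: optional boolean array restricting the tally (e.g. one province
    or the conservation-area footprint)."""
    valid = ~np.isnan(suitability)
    if mask is not None:
        valid = valid & mask
    areas = {}
    for name, (lo, hi) in CLASSES.items():
        if name == "high":
            in_class = (suitability >= lo) & (suitability <= hi)
        else:
            in_class = (suitability >= lo) & (suitability < hi)
        areas[name] = float(np.sum(valid & in_class)) * cell_area_km2
    return areas

# Toy stand-ins for the Java-wide suitability raster and a conservation mask.
rng = np.random.default_rng(2)
suit = rng.random((300, 1000))
conservation = rng.random((300, 1000)) < 0.04

total = class_areas(suit)
inside = class_areas(suit, mask=conservation)
print("high-suitability area:", total["high"], "km^2;",
      "share inside conservation areas:",
      round(100 * inside["high"] / total["high"], 1), "%")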
2021-06-03T01:39:06.595Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "45dafd60b556c1298ca8c10381e3ecfd81dc6c54", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/771/1/012027", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "45dafd60b556c1298ca8c10381e3ecfd81dc6c54", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Physics" ] }
16604335
pes2o/s2orc
v3-fos-license
Phononic and magnonic dispersions of surface waves on a permalloy/BARC nanostructured array Phononic and magnonic dispersions of a linear array of periodic alternating Ni80Fe20 and bottom anti-reflective coating nanostripes on a Si substrate have been measured using Brillouin light scattering. The observed phononic gaps are considerably larger than those of laterally patterned multi-component crystals previously reported, mainly a consequence of the high elastic and density contrasts between the stripe materials. Additionally, the phonon hybridization bandgap has an unusual origin in the hybridization and avoided crossing of the zone-folded Rayleigh and pseudo-Sezawa waves. The magnonic band structure features near-dispersionless branches, with unusual vortex-like dynamic magnetization profiles, some of which lie below the highly-dispersive fundamental mode branch. Finite element calculations of the phononic and magnonic dispersions of the magphonic crystal accord well with experimental data. Background Photonic-phononic crystals, also referred to as phoxonic crystals [1][2][3][4], are of great interest as their dual photonic and phononic bandgaps allow the simultaneous control of photon and phonon propagation in these crystals. Another class of metamaterials possessing dual-excitation bandgaps is magnonic-phononic or magphonic crystals [5][6][7]. Although less well known than phoxonic materials, they too have promising application potential because of the possibility of the simultaneous control and manipulation of magnon and phonon propagation in them. Hence, they are potentially more useful technologically than either solely magnonic or phononic crystals which depend on a single type of excitation, namely magnons or phonons, as the respective information carrier. Magphonic crystals were theoretically studied by Nikitov et al. in 2008 [5]. Recently, Zhang et al. experimentally studied these materials in the form of a two-dimensional (2D) chessboard-patterned array of cobalt and Ni 80 Fe 20 (Permalloy, Py) dots [6], and one-dimensional (1D) periodic arrays of alternating Fe (or Ni) and Py nanostripes on SiO 2 /Si substrates (henceforth referred to as Py/Fe(Ni)) [7]. As the materials of the elements of these bicomponent arrays are both metals, namely either Py/Co, Py/Fe, or Py/Ni, the elastic and density contrasts between adjacent elements are rather low. In general, the phononic bandgap width increases with elastic and density contrasts [8,9]. Indeed the phonon bandgaps of the 1D and 2D structures measured by Zhang et al. are small, being of the order of 0.5 GHz. In this work, the magphonic crystal studied is a 1D periodic array of alternating Py and bottom anti-reflective coating (BARC) nanostripes deposited on an Si(001) substrate (abbreviated to Py/BARC). Py and BARC were selected as materials for the high elastic and density contrasts between them. Hence, the phononic dispersion is expected to be significantly different from those of Py/Fe(Ni). It is also of interest to explore the effects on the magnonic dispersion when the material of one of the elements in a bicomponent magphonic crystal is a non-magnetic one. The dispersions of surface spin and acoustic waves were measured by Brillouin light scattering (BLS) which is a powerful probe of such excitations in nanostructured materials [6,7,[9][10][11][12][13]. 
The measured phononic dispersion spectrum features a Bragg gap opening at the Brillouin zone (BZ) boundary, and a large hybridization bandgap, whose origin is different from those reported for other 1D-periodic phononic crystals [6,[13][14][15][16]. Interestingly, the experimental magnonic band structure reveals spin wave modes with near-nondispersive behavior and having frequencies below that of the highly dispersive fundamental mode (see below). This differs from the 1D one-or twocomponent magnonic crystals studied earlier, where almost dispersionless branches appear well above the dispersive branches [6,12]. Numerical simulations, carried out within the finite element framework, of the phononic and the magnonic dispersions yielded good agreement with experiments. Sample fabrication A 4 × 4-mm 2 -patterned area of 63 nm-thick 1D periodic array of alternating 250 nm-wide Py and 100 nm-wide BARC nanostripes (lattice constant a = 350 nm) was fabricated on a Si(001) substrate using deep ultraviolet (DUV) lithography at 248 nm exposing wavelength [17]. The substrate was first coated with a 63-nm-thick BARC layer, followed by a 480-nm-thick positive DUV photoresist. A Nikon lithographic scanner with a KrF excimer laser radiation was then used for exposing the resist. To convert the resist patterns into nanostripes, a 63-nm-thick Py was deposited using electron beam evaporation technique followed by the lift-off in OK73 and isopropyl alcohol. An ultrasonic bath was used to create agitation for easy lift-off of the Py layer. Completion of the lift-off process was determined by the color contrast of the patterned Py regions and confirmed by inspection under a scanning electron microscope (SEM). Figure 1a shows an SEM image of the resulting structure. Brillouin measurements The 180°-backscattering geometry was used in the BLS experiments, with the scattering plane normal to the sample surface and the magnon or phonon wavevector q along the periodicity direction (x direction in Figure 1a) which coincides with the [110] direction of the Si substrate. The 514.5-nm radiation of an argon-ion laser served as the light source and the scattered light was frequency analyzed with a (3 + 3)-pass tandem Fabry-Pérot interferometer equipped with a silicon avalanche diode detector. Prior to the spectral scans, the sample was first saturated in a 0.7-tesla field applied along the symmetry axes of the stripes, which was then gradually reduced to zero. Spectra of the acoustic and spin waves were measured in the p-p and p-s polarizations, respectively, and their dispersion relations mapped by varying the laser light incidence angle. Figure 1b,c shows typical Brillouin spectra recorded for the two excitations. Their mode frequencies obtained from spectral fits using Lorentzian functions were plotted against wavevector to yield dispersion relations shown in Figures 2a and 3a. Results and discussion We will first focus our attention on the phononic dispersion. The measured phononic dispersion spectrum features a 1.0-GHz gap opening centered at 4.8 GHz at the Brillouin zone boundary, and a 2.2-GHz bandgap centered at 6.5 GHz. Dispersion relations and mode displacement profiles of surface acoustic waves (SAWs) were computed using the finite element approach in COMSOL Multiphysics [18] and the Bloch-Floquet theorem. The 350-nm-wide computational cell used comprises a 63-nm-thick layer of a 100-nm-wide BARC stripe sandwiched between two 125-nm-wide Py stripes, atop a 2-μm-thick Si substrate, with its bottom boundary fixed. 
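The incidence-angle-to-wavevector mapping used in the Brillouin measurements above (180° backscattering at 514.5 nm) follows q = (4π/λ) sin θ for surface excitations, which is how varying the incidence angle traces out the dispersion. The short sketch below illustrates this relation and the folding of q into the first Brillouin zone of the a = 350 nm array; it is an illustration of the scattering kinematics only, not part of the reported data analysis.

import numpy as np

LAMBDA = 514.5e-9   # laser wavelength (m)
A = 350e-9          # lattice constant of the Py/BARC array (m)

def surface_wavevector(theta_deg, wavelength=LAMBDA):
    """In-plane wavevector transferred to surface excitations in
    180-degree backscattering: q = (4*pi/wavelength) * sin(theta)."""
    return 4.0 * np.pi / wavelength * np.sin(np.radians(theta_deg))

def fold_to_first_bz(q, a=A):
    """Fold a wavevector into the first Brillouin zone [-pi/a, pi/a]."""
    g = 2.0 * np.pi / a
    return (q + g / 2.0) % g - g / 2.0

bz_edge = np.pi / A
for theta in (10, 20, 30, 45, 60):
    q = surface_wavevector(theta)
    print(f"theta = {theta:2d} deg: q = {q / bz_edge:.2f} pi/a, "
          f"folded to {fold_to_first_bz(q) / bz_edge:+.2f} pi/a")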
It is to be noted that unlike the case of the 1D Py/Fe nanostripe array of [7], no interfacial air gaps were considered in the calculations, as the fabrication process employed here precludes their formation. Elastic parameters used in the simulations for Py, BARC, and Si are Young's moduli = 180, 6.26, and 169 GPa; Poisson ratios = 0.31, 0.34, and 0.064; and mass densities = 8600, 1190, and 2330 kg/m 3 , respectively [19][20][21]. The simulated dispersion relations for the lowest three SAW branches, below the longitudinal bulk wave threshold [22,23], are presented in Figure 2a and agree well with the measured data. Simulated mode profiles of the lowest two modes for q = π/a, shown in Figure 2b, exhibit characteristics of the surface Rayleigh wave (RW). These RWs are standing Bloch waves satisfying the Bragg scattering condition. The mode profile of the third branch at the BZ boundary reveals that it is also a standing wave with most of its energy confined in the BARC stripes. Mode profiles for q = 1.4π/a displayed in Figure 2c indicate that at this wavevector, the first branch has the characteristics of the RW. In contrast, the higher two SAWs leak energy into the Si substrate as their dispersion curves extend beyond the transverse bulk wave threshold [16,[22][23][24]. The dispersion relations of the RW and Sezawa wave (SW), modeled by treating the Py/BARC array as a homogeneous effective medium [25] on a Si substrate, are presented in Figure 2a. It can be seen that the gap opening arises from the zone folding of the RW dispersions and avoided crossings at the BZ boundary. A prominent feature of the phonon dispersion spectrum is the large hybridization bandgap. For a structure, such as ours, comprising a 'slow' film on a 'fast' substrate, Sezawa waves will exist only below the transverse bulk wave threshold, and over a restricted range of qh, where h is the film thickness [23,26]. As shown in Figure 2a, within the first BZ, the SW and zone-folded RW do not cross, indicating that the measured bandgap does not originate from the hybridization of these waves. Instead, within the bandgap, the zone-folded RW crosses the transverse bulk wave threshold. Additionally, above but close to this threshold, attenuated SAWs called pseudo-Sezawa waves, which exist as resonances with the substrate continuum of modes, have been observed [23,26,27]. We thus attribute the origin of the bandgap to the hybridization and avoided crossing of the zone-folded RW and pseudo-Sezawa waves. The origin of this hybridization bandgap is to be contrasted with those reported for other 1D phononic crystals. For instance, Zhang et al. [7] and Maznev [15,16] attributed the origin of the gaps they observed in film-substrate samples to the avoided crossings of the RW and zone-folded Sezawa modes. Also, hybridization bandgaps in Si and SiO 2 gratings [13,14] were ascribed to the mixing of the RW and the longitudinal resonance, also referred to as the high-frequency pseudo-surface wave. It is noteworthy that the phonon dispersion spectrum of Py/BARC differs substantially from those of the 1D Py/Fe(Ni) arrays of [7]. For instance, the measured gap opening of 1.0 GHz at the BZ boundary of the former is much wider than the first bandgap of 0.4 GHz observed for the latter. This is primarily due to the elastic and density contrasts between two metals (Fe or Ni and Py) being much lower than that between the polymer BARC and the metal Py. The 4.8 GHz center of this gap opening is also higher than those (≈ 3.4 GHz) of Py/Fe(Ni).
This is expected as the 350-nm period of our Py/BARC is shorter than the 500-nm one of Py/Fe(Ni). Another reason is that our Py/BARC is directly patterned on a Si substrate, while the Py/Fe(Ni) samples contain an 800-nm-thick SiO 2 sublayer between the patterned arrays and the Si substrate, which has the effect of red shifting the SAW frequencies. Another notable difference is that the 2.2-GHz bandgap is considerably larger than those of the Py/Fe(Ni) arrays, whose maximum gap is only 0.6 GHz. One explanation for this is the high elastic and density contrasts between the materials in Py/BARC. We now discuss the dispersion of spin waves in Py/BARC. The magnon band structure (Figure 3a) and mode profiles of the dynamic magnetization (Figure 3b) were calculated by solving the coupled linearized Landau-Lifshitz equation and Maxwell's equations in the magnetostatic approximation using a finite element approach [10]. As Py has negligible magnetic anisotropy, the free-spin boundary condition [28] is imposed on the Py surface. The Bloch-Floquet boundary condition is applied along the periodic direction. Parameters used for Py are the saturation magnetization M S = 7.3 × 10 5 A/m, the exchange stiffness A = 1.2 × 10 -11 J/m, and the gyromagnetic ratio γ = 190 GHz/T. The relative BLS intensities I of the magnon modes [11] were estimated from I ∝ |∫ 0 a m z (x) exp(−iqx) dx| 2 , where the integration runs over one period 0 ≤ x ≤ a. The dispersion curves of the more intense modes are indicated by bold solid lines while those of weaker ones by dotted lines in Figure 3a, which reveals generally good agreement between experiment and simulations. Aside from the fundamental mode branch, labeled M1 in Figure 3a (see below), the other branches are rather flat. The magnon eigenmodes of a single isolated Py stripe having the same dimensions as those of a Py stripe in Py/BARC were also calculated using the above approach. Their calculated frequencies are indicated by blue bars in Figure 3a. It can be seen that, except for the fundamental mode branch, the magnon dispersion relation of Py/BARC is similar to that of the isolated Py stripe. In contrast to the magnon band structures of arrays of Py stripes separated by air gaps studied earlier [12], near-dispersionless modes exist below the fundamental mode branch (M1) of our Py/BARC sample. One reason is that the Py stripes in our sample are thicker. In comparison to the Py/Fe(Ni) structures [7], Py/BARC has a generally less-dispersive magnon band structure; however, its measured 1.8 GHz first and 0.7 GHz second bandgaps are of the same order of magnitude as those of the former. It is to be noted that the magnon branches can be classified into two groups. One group comprises branches (labeled M1 to M3 in Figure 3a) whose modes have profiles that are similar, i.e., near-uniform across the Py stripe thickness (z direction), to those observed in Py/air stripe arrays [12,29]. The other, dispersionless group (labeled N1 to N5) comprises the perpendicular standing spin waves (PSSW). The frequencies of these PSSW modes, with quantization numbers n = 1 and m = 0 to 4 across the thickness and width, respectively, were also analytically calculated [11] and found to be 8.64, 8.94, 9.78, 11.1, and 12.8 GHz, in good agreement with experiment. It is noteworthy that the dynamic magnetizations (represented by arrows in Figure 3b) of the PSSW modes form one or more closed loops, each resembling the vortex configuration of a ferromagnetic ring [30].
As the dipolar field outside a magnetic vortex vanishes, the dipole-dipole coupling between the PSSW modes is expected to be very weak. This is evidenced by their nearly flat dispersion curves. Interestingly, mode hybridizations exist between the fundamental mode M1 and the respective PSSW modes N2 and N4, as borne out by the simulated hybridized mode profiles. Hybridization of the fundamental mode M1 with the N3 mode is however precluded due to their different symmetries. The M1 mode possesses odd symmetry, as under a π-rotation about the symmetry axis (y direction) of a Py stripe, its dynamic magnetizations are reversed. The N2 and N4 modes have odd symmetry, while the N3 mode has even symmetry. Conclusions In summary, we have measured the simultaneous magnonic and phononic bandgaps of the Py/BARC magphonic crystal by Brillouin light scattering. The measured phononic Bragg gap opening and hybridization bandgap are much wider than those previously observed in laterally patterned multi-component phononic crystals. This is mainly ascribed to the high elastic and density contrasts between the stripe materials, Py and BARC. The hybridization bandgap is found to have an unusual origin in the hybridization and avoided crossing of the zone-folded Rayleigh and pseudo-Sezawa waves. The magnonic dispersion relation comprises near-dispersionless PSSW branches, with some of them lying below the highly dispersive fundamental mode branch. Modes of the former have interesting vortex-like dynamic magnetization profiles, suggesting that interactions between the Py stripes are weak, and hence accounting for the nearly flat dispersion curves of these modes. Finite element simulations generally reproduced the experimental phonon and magnon dispersion relations. Because of the possibility of simultaneously controlling and manipulating the magnon and phonon propagation in them, magphonic crystals could find applications in areas such as acoustic and spin-wave signal processing.
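As an aside to the spin-wave analysis above, the relative BLS intensity estimate I ∝ |∫ 0 a m z (x) exp(−iqx) dx| 2 can be evaluated numerically in a few lines. The sketch below uses a made-up half-cosine profile confined to the Py stripe purely to illustrate the quadrature; it is not one of the simulated eigenmodes.

import numpy as np

A_PERIOD = 350e-9   # lattice constant (m)
W_PY = 250e-9       # Py stripe width (m)

def bls_intensity(m_z, q, a=A_PERIOD, n=4000):
    """Relative BLS intensity ~ |integral over one period of m_z(x) e^{-iqx} dx|^2,
    evaluated with a simple Riemann sum."""
    x = np.linspace(0.0, a, n, endpoint=False)
    dx = a / n
    return float(np.abs(np.sum(m_z(x) * np.exp(-1j * q * x)) * dx) ** 2)

def m_z_stand_in(x):
    """Illustrative profile: half-cosine across the Py stripe, zero in BARC."""
    return np.where(x < W_PY, np.cos(np.pi * (x - W_PY / 2.0) / W_PY), 0.0)

for q_over_bz in (0.5, 1.0, 1.5, 2.0):
    q = q_over_bz * np.pi / A_PERIOD
    print(f"q = {q_over_bz:.1f} pi/a  relative intensity = "
          f"{bls_intensity(m_z_stand_in, q):.3e}")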
2017-06-23T20:01:02.482Z
2013-03-02T00:00:00.000
{ "year": 2013, "sha1": "e0bce4240309b9a1cdc929959f1e9d361b27d3f3", "oa_license": "CCBY", "oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1186/1556-276X-8-115", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "96680f3a24df23dc67b29ee4d326fa92af8d42bd", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
11273398
pes2o/s2orc
v3-fos-license
Can Individual and Social Patterns of Resource Use Buffer Animal Populations against Resource Decline? Species in many ecosystems are facing declines of key resources. If we are to understand and predict the effects of resource loss on natural populations, we need to understand whether and how the way animals use resources changes under resource decline. We investigated how the abundance of arboreal marsupials varies in response to a critical resource, hollow-bearing trees. Principally, we asked what mechanisms mediate the relationship between resources and abundance? Do animals use a greater or smaller proportion of the remaining resource, and is there a change in cooperative resource use (den sharing), as the availability of hollow trees declines? Analyses of data from 160 sites surveyed from 1997 to 2007 showed that hollow tree availability was positively associated with abundance of the mountain brushtail possum, the agile antechinus and the greater glider. The abundance of Leadbeater’s possum was primarily influenced by forest age. Notably, the relationship between abundance and hollow tree availability was significantly less than 1∶1 for all species. This was due primarily to a significant increase by all species in the proportional use of hollow-bearing trees where the abundance of this resource was low. The resource-sharing response was weaker and inconsistent among species. Two species, the mountain brushtail possum and the agile antechinus, showed significant but contrasting relationships between the number of animals per occupied tree and hollow tree abundance. The discrepancies between the species can be explained partly by differences in several aspects of the species’ biology, including body size, types of hollows used and social behaviour as it relates to hollow use. Our results show that individual and social aspects of resource use are not always static in response to resource availability and support the need to account for dynamic resource use patterns in predictive models of animal distribution and abundance. Introduction The influence of resource availability on the distribution and abundance of species is a central issue in ecology [1]. It is also a major issue in practical conservation biology because resource decline is a key component of the widespread habitat degradation associated with land use by humans [2,3]. Many studies have documented declines of species in association with the loss of critical resources, such as hollow-bearing trees that function as shelter resources for many obligate hollow-dwelling arboreal birds and mammals [4,5]. Often, relationships between resources and animal distribution or abundance are used to make quantitative predictions of how species will respond to scenarios of future resource availability, for instance using population viability analyses or resource selection functions [6,7,8]. A key research challenge relating to our ability to predict the responses of animal populations to resource variation is to understand the dynamics of resource use under varying resource availability [9,10]. Commonly, resource-based models of distribution or abundance assume a static relationship between populations and resources [6]. 
However, increasing evidence demonstrates that the kinds of resources that are used [11,12], the frequency with which they are used (or avoided) in relation to their availability (the resource selection function) [13,14], and the degree of resource sharing (cooperation) [15] can vary with resource availability or other changes such as human disturbance or predation pressure [16,17]. An understanding of the mechanisms by which animals respond to variation in resource availability is essential if we are to predict how resource variation will affect animal populations [9,18]. In this study, we investigated whether resource use by hollowdependent arboreal marsupials varies under resource availability in a semi-natural (sensu Franklin & Johnson [19]) forest ecosystem; the tall Eucalyptus forests of the Victorian Central Highlands of south-eastern Australia. In these forests, hollow-bearing trees are critical shelter resources for many species, and the availability of hollows is a key conservation issue for a number of arboreal marsupials including the endangered Leadbeater's Possum (Gymnobelideus leadbeateri) [20,21]. Hollows suitable for most arboreal marsupials typically do not form in mountain ash trees (Eucalyptus regnans: the dominant overstorey species) until the trees exceed 190 years of age [22]. There is spatial variation, and an ongoing temporal decline, in hollow availability across this landscape due to different rates of formation and collapse of hollow trees in forest stands of different ages, as well as recent wildfire and logging [23,24,25]. Previous work generated projec-tions of temporal declines in the abundance of arboreal marsupials across this landscape by assuming fixed relationships between animal abundance and hollow tree availability [26]. However, adaptive responses in the use of these key resources may mediate the demographic effects of resource variation. Therefore, we tested for two adaptive responses to variation in resource (hollow tree) availability. These were: (1) Variation in the Probability of Use of the Hollow Tree Resource Changes in the use of hollow trees as shelter resources could be manifested in the overall probability of use of the hollow tree resource and in the relative probability of use of different types of shelter resources [11,27]. We predicted that where hollow trees are scarce, a greater proportion of those trees will be used, and lesspreferred kinds (age classes) of trees will be used more often. (2) Variation in the Number of Individuals Per Occupied Tree (Resource Cooperation) Potentially, changes in resource cooperation may either mitigate or exacerbate the demographic effects of resource decline. The evolution of kin-based cooperative behaviour has been documented in response to limitation of territory resources [28,29]. Such a response (increased resource sharing) could buffer populations against decline in proportion to resource availability. Alternatively, decreasing resource availability can lead to increasing resource competition, with increasing aggression, resource defence and territoriality [30,31]. Indeed, one of the species studied here, the mountain brushtail possum (Trichosurus cunninghami) shared dens less often where dens were less abundant [15], suggesting that social mechanisms can exacerbate the effects of resource decline. 
To determine which, if any, of these responses to environmental variation (variation in proportional occupancy and/or resource cooperation) occur, we analysed patterns of abundance, hollow tree occupancy and sharing in four species of arboreal marsupial using a long-term dataset. Ethics Statement The field research presented in this paper involved observational animal counts only, and thus did not require an animal ethics permit. The research was conducted in publicly-managed state forests and national parks. Study Area and Data Collection We conducted our research in the Victorian Central Highlands of south-eastern Australia, an area covering approximately 60 × 80 km (37°20′-37°55′S and 145°30′-146°20′E). The data were collected at 160 one-hectare sites that were situated predominantly in mountain ash (Eucalyptus regnans) forest. This species is the world's tallest angiosperm and is the dominant overstorey tree species between 800 m and 1100 m altitude in this area. The number of hollow-bearing trees at each site ranged from one to 31; hollow-bearing trees were identified on the basis of visual identification of hollows. Each marsupial species studied has specific (and largely non-overlapping) hollow requirements, and the total number of hollow trees per site is likely to be an overestimate of the number of hollow trees available to each species, as not all hollows are suitable for each species. Nevertheless, the type and size of tree hollows (related to their suitability for each species) in a tree is strongly related to its decay stage [32], and we used this as an explanatory covariate in our models. The sites were surveyed repeatedly on an overlapping and rotating sampling design from 1997 to 2007 [33]. During each survey of a site, we counted the number of individuals of each species of arboreal marsupial emerging from every hollow tree on the site for a period of one hour after dusk [33]. All of the species we surveyed are nocturnal. They shelter during daylight hours in tree hollows and typically emerge shortly after dusk to forage. This is the most effective method available for estimating the abundance of each species of arboreal marsupial at a site. We recorded nine species of arboreal marsupial in our surveys [33] but focussed on the four most commonly recorded species for these analyses. These were (1) the mountain brushtail possum (Trichosurus cunninghami), a large (2.5-4 kg) nocturnal arboreal marsupial that shelters in large tree hollows; (2) Leadbeater's possum (Gymnobelideus leadbeateri), an endangered small (~140 g) marsupial with a colonial social system that dens in hollow trees and typically favours small 'keyhole' entrances to large hollows inside dead standing mountain ash trees; (3) the greater glider (Petauroides volans), a large (1.35 kg) gliding marsupial that feeds exclusively on eucalypt leaves and prefers to den in hollows high in live trees; and (4) the agile antechinus (Antechinus agilis), a small (20-40 g) marsupial carnivore that predominantly forages at ground level but dens communally in a range of types of tree hollows. We provide a basic background to the biology of these species in Appendix S1 and a diagrammatic representation of the tree form preferences of each species in Figure 1. Data Analysis We analysed data for each species to answer three questions: (1) Does the number of animals per site vary in proportion to the number of hollow-bearing trees at the site?
(2) Does the probability of occupancy of each hollow tree vary in proportion to the number of hollow-bearing trees at the site? (3) Does the number of individuals per occupied tree vary with the number of hollow-bearing trees at the site? We analysed our data using generalised linear mixed models (GLMMs) in Genstat 11 [34]. Our model selection approach was to drop non-significant terms from the 'full' model of a small set of candidate explanatory variables. We analysed the data separately for each species because there are no trophic relationships between them, nor are they likely to compete for food or shelter resources (they use different types of hollows [21]), so we had no reason to expect any major effects of one species on another. Indeed, multiple species are commonly detected in the same tree if suitable hollows are available for each. We have no records of multiple species in the same hollow. (1) Does the number of animals per site vary in proportion to the number of hollow-bearing trees at the site? We used Poisson GLMMs with a logarithmic link function to relate the number of animals of each species per site to candidate explanatory variables. Because each site was surveyed on multiple occasions, year of survey was represented as a random term. Our candidate explanatory variables included the number of hollow-bearing trees at that site and the age category of the forest (young regrowth, post-1939 wildfire regrowth, old growth). We included forest age because several important floristic and structural attributes of forest stands vary with age, such as the predominant decay class of hollow trees (Figure 1) and the abundance of Acacia, an important food source for species like Leadbeater's possum. The number of trees per site was analysed both as an untransformed and a log-transformed variable. We used these models to answer two questions: (a) Is there an effect of hollow tree abundance on site-level abundance of arboreal marsupials, accounting for potential effects of forest age? (b) If so, is the relationship between arboreal marsupial abundance and hollow tree abundance significantly different from 1:1? We estimated whether the coefficient differed significantly from 1 by re-fitting the models using log-transformed hollow tree abundance as an offset variable. (2) Does the probability of occupancy of each hollow tree vary in proportion to the number of hollow-bearing trees at the site? We used binomial GLMMs with a logit link function to analyse the probability of occupancy of each tree by each arboreal marsupial species. Site and year were included in the models as random terms. The candidate explanatory variables (fixed terms) included the number of trees per site (untransformed and log-transformed), forest age category and tree form (Figure 1). Tree form was included because past work indicates that each species has a preference for particular kinds of tree forms [21], and the decay stage of the hollow trees that predominate at a site is not independent of the number of trees at that site (Figure 2). For instance, old growth forest stands contain many hollow-bearing trees that are usually alive (Tree forms 1 and 2 in Figure 1). Younger regrowth forests typically contain few hollow trees, and those that are present are often highly decayed 'legacies' of an older cohort of trees from before the previous fire (Tree forms 6-8 in Figure 1).
Figure 1. A subset of the decay stages of mountain ash trees used by arboreal marsupials (based on [33,40]). The dark arrows show the range of tree forms (TF1-8) preferred by each species, including the mountain brushtail possum (MBP), the greater glider (GG), the agile antechinus (AA) and the Leadbeater's possum (LP). The thinner grey arrows are tree forms used less frequently by each species. Although there is overlap between species in the preferred tree decay stages, the species differ in their specific requirements for hollow size. Mountain ash trees may take up to 150 years from germination to reach the TF1 stage, when suitable hollows for arboreal marsupials first begin to form. Tree form 9 is not shown and represents trees that have completely collapsed. Generally, younger trees (within the range shown) may have hollows in the main stem and broken branches, while older trees have hollows in a highly decayed main stem. doi:10.1371/journal.pone.0053672.g001
Further, the number and type of hollows found in the different tree forms can vary, with the earlier decay classes (Figure 1) often having a number of hollows in broken branches and the later decay classes having fewer, but larger, hollows in a highly decayed main stem [32]. We commenced our analyses with tree form represented as a categorical variable with all nine decay classes (Figure 1). However, after initial exploratory analyses, the tree forms were often condensed to two or three subsets based on the habitat use of each species. For example, for greater gliders we reclassified the tree forms (Figure 1) into a binomial variable distinguishing live trees (Tree forms 1-2) from dead trees (Tree forms 3-8). We included interactions between the number of hollow trees per site and tree form to test for shifts in the kinds of hollow trees selected as dens under variation in den availability (i.e. is there a 'relaxation' of tree form preference as hollow trees become more scarce?). (3) Does the number of individuals per occupied tree vary with the number of hollow-bearing trees at the site? We used Poisson GLMMs with a logarithmic link function to relate the number of animals of each species observed in occupied trees to the number of trees per site (untransformed and log-transformed), tree form (Figure 1), forest age category and the interactions of these variables. We included tree form to account for potential variation in the type and number of hollows in trees of different decay stages, and forest age class as a broad explanator of variation in the structural and floristic attributes of forest stands. Results (1) Does the Number of Animals Per Site Vary in Proportion to the Number of Hollow-bearing Trees at the Site? We observed a mean of 2.26 (range 0-21) animals per site (over all species). The most commonly recorded species were the mountain brushtail possum (329 individual records) and the greater glider (328), followed by Leadbeater's possum (175) and the agile antechinus (160), from 440 site surveys from 1997 to 2007. For three species, the number of individuals recorded per site showed a significant positive relationship with the number of hollow trees (Table 1, Figures 3, 4, and 5). For Leadbeater's possum, but no other species, we found a significant effect of forest age on site-level abundance (this species was most abundant in young regrowth forest that germinated after a 1983 wildfire), but no effect of hollow tree availability (P = 0.082; Table 1, Figure 6).
We were interested in determining whether the relationship between tree hollow abundance and animal abundance differed significantly from 1:1 and tested this by re-fitting the models using log-transformed hollow tree abundance as an offset variable. The coefficients were significantly less than 1 for all species (P < 0.001). The probability of occupancy of individual hollow trees was higher where hollow trees were less abundant (Table 2, Figures 3, 4, 5, and 6). This suggests that a greater proportion of the hollow trees are occupied when there are fewer hollow trees at a site. These relationships were not significantly affected by forest age for any species. There were significant preferences in the kinds of trees selected for shelter by each species, indicating selection for specific decay classes. Following exploratory analyses, the tree form categories were grouped according to the preference of each species. This included dead trees (Tree forms 3-8 in Figure 1) for Leadbeater's possum, which were 1.85 times more likely to be occupied than live trees. For the greater glider, live trees (Tree forms 1-2 in Figure 1) were 2.2 times more likely to be occupied than dead trees. Agile antechinus were significantly more likely to be found in trees of medium decay stage (3.2% probability of detection in Tree forms 3-7 in Figure 1) compared to live trees (1% detection rate) or later-stage dead trees (0.6% detection rate). The mountain brushtail possum was less specific in its tree form preference, but it was most likely to be found in hollow-bearing trees of form 2 (9.8% detection rate), as illustrated in Figure 1. We did not identify significant interactions between the number of hollow trees per site and tree form on detected tree occupancy for any species (P > 0.05 for the interaction in all cases). This suggests no shifts in the kinds of trees selected for shelter in response to variation in the availability of hollow trees. We recorded a mean of 1.349 (range 0-3) greater gliders, 1.418 (0-3) mountain brushtail possums, 1.682 (0-7) agile antechinus and 2.160 (0-7) Leadbeater's possums from each tree found to be occupied by that species. Two species, the mountain brushtail possum and the agile antechinus, showed significant and contrasting social responses to the number of hollow trees per site (Table 3). We found evidence for greater sharing of hollow trees by mountain brushtail possums as hollow trees became scarcer, and a significant effect of tree form, with live trees of tree form 2 (see Figure 1) typically supporting the greatest number of individuals (predicted mean 1.88). Such trees can contain numerous hollows and are most common in old growth forest stands with many hollow trees (Figure 2) [22]. Thus, the kinds of trees predominating at sites with high den availability effectively increase the number of individuals per occupied tree at such sites, yet there also appears to be a behavioural response in the opposite direction, in that den-sharing increases where hollow trees are scarce. In contrast to the results for the mountain brushtail possum, we found a greater number of agile antechinus per occupied tree in sites where hollow trees were more abundant (Figure 5). The number of Leadbeater's possums or greater gliders per occupied tree did not vary significantly with the number of hollow trees per site (Table 3).
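The offset-based test described above (asking whether abundance scales 1:1 with hollow tree availability) can be illustrated with a simplified model. The sketch below fits a Poisson regression with log(hollow trees) first as a predictor and then as an offset, using simulated counts; it uses statsmodels' ordinary GLM and drops the random year term of the original GLMMs (which were fitted in GenStat), so it is a stand-in for the analysis, not a reproduction of it.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_sites = 160
trees = rng.integers(1, 32, size=n_sites)        # hollow-bearing trees per site

# Simulate counts that rise slower than 1:1 with tree availability (slope 0.5).
mu = np.exp(-0.3 + 0.5 * np.log(trees))
animals = rng.poisson(mu)

X = sm.add_constant(np.log(trees))

# (a) Slope on log(trees): is there an effect of hollow tree abundance?
fit = sm.GLM(animals, X, family=sm.families.Poisson()).fit()
print("estimated slope on log(trees):", round(fit.params[1], 2))

# (b) Offset model: with log(trees) as an offset, the intercept-only part
# assumes a 1:1 relationship; a coefficient on log(trees) significantly
# below zero indicates a relationship shallower than 1:1.
fit_offset = sm.GLM(animals, X, family=sm.families.Poisson(),
                    offset=np.log(trees)).fit()
print("departure from 1:1 (should be near -0.5):",
      round(fit_offset.params[1], 2), " p =", round(fit_offset.pvalues[1], 4))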
Shelter Resources and Arboreal Marsupial Abundance Our primary aims were to understand how arboreal marsupials respond to the decline of a critical shelter resource, hollow bearing trees, with a specific focus on the dynamics of occupancy patterns and resource sharing under variation in resource availability. Answering these questions contributes to our understanding of the mechanisms by which animals respond to environmental change, thus improving our ability to predict the demographic effects of resource decline [9]. We found the abundance of three hollowdependent marsupials to be significantly and positively related to the abundance of hollow-bearing trees. However, site-level abundance all species studied decreased at approximately half the rate expected based on a 1:1 relationship between hollow trees and animal abundance. The two resource use responses that we documented that contributed to this pattern included variation in the probability of occupancy of each hollow tree and in the number of individuals per occupied tree. Variation in Occupancy Rates in Response to Hollow Tree Abundance Of the two responses that we observed, an increase in the probability of occupancy of each hollow tree was the primary demographic compensatory mechanism against shelter resource decline for all species. Indeed, this response was remarkably consistent across the four species studied (Table 1, Figures 3, 4, 5, and 6). Occupancy rates were typically low when hollow trees were abundant (Figures 3, 4, 5, and 6), although this needs to be interpreted in light of: (1) the fact that each species has distinct hollow requirements, so our total hollow tree count is likely to overestimate hollow availability for any individual species; and (2) behavioural aspects of den use, whereby individuals of these species use multiple den trees (over 20 in the case of the mountain brushtail possum [35]). Nevertheless, occupancy rates increased significantly with declining hollow tree availability. Such negative relationships between proportional use of a high quality (or critical) resource and its availability have been observed in other species. For instance, the frequency of use of pastures (containing abundant forage) by red deer (Cervus elaphus) increased with decreasing pasture availability in a mosaic landscape comprised of forest and pasture in southern Norway [36]. This change in probability of use of a critical resource with variation in its availability was one of the key responses that we predicted at the outset of this study. Table 1. Poisson generalised linear mixed models of the effects of hollow tree availability and forest age (non-significant terms were dropped from models) on the abundance of four species of arboreal marsupials. Another response that we predicted was a shift in the relative preference for different resources in response to variation in per capita resource availability [37]. Such responses have been observed by other species. For instance, roe deer (Capreolus capreolus) showed no change in selection for woodlands in response to their availability. Woodlands provide cover and forage for roe deer, but in agricultural landscapes where woodland cover was low, roe deer increasingly used hedgerows for these purposes [38]. Caerulean warblers (Dendroica cerulea) showed a significant shift in the preferred locations of nest sites after major structural habitat changes due to disturbance [39]. This plasticity appeared to confer a degree of demographic resilience to ecological disturbance. 
In montane ash forests, the marsupials that we studied show preferences for denning in particular decay-classes of tree [4,21,40] and we predicted that, in addition to an overall change in the proportion of hollow trees used, the species would show a 'relaxation' of tree form selection where hollow trees were less abundant. This was not observed (there were no significant interactions between hollow tree abundance and tree form in tree occupancy models). It is possible that the structural attributes of the hollows in the different tree decay classes (hollow size, entrance size, elevation, thermal properties) limit flexibility in the different kinds of trees that can be used by each species [20]. However, the lack of a 'relaxation' of tree form preference where hollow trees were less abundant was surprising, given that such a relaxation is exactly what was found in a study of one of these species after a recent major fire resulted in the loss of approximately 80% of the hollow trees at one of our sites [23]. Most of the hollow trees that collapsed after that fire were highly decayed dead trees, such that the relative abundance of each tree form was significantly different before and after the fire [23]. However, the ecological context for the variation in hollow tree availability in the dataset analysed here, in which the variation is predominantly spatial (between sites) with a relatively slow temporal change in tree abundance [25], is quite different from the short-term temporal variation caused by fire, where surviving individuals with established home ranges are faced with a dramatically altered resource landscape [23]. Thus, behavioural and demographic variation in response to temporally stable spatial heterogeneity in resource availability may be quite different to behavioural and demographic responses to the rapid loss of a critical resource. Resource Sharing A key aim of this study was to investigate plasticity in resource sharing in response to variation in resource abundance. The availability of, and competition for, resources plays a key role in evolutionary theories relating to social behaviour [28,29,41]. There is increasing evidence for social behaviour mediating functional responses to environmental change [42,43] and for adaptive changes in social behaviour in response to environmental change [44]. This has led to calls for the consideration of social behavioural processes in conservation research and management [45]. Here, we observed variation in resource sharing by two species in response to variation in abundance of hollow trees. However, these responses were smaller than the variation in the resource selection function (i.e. the probability of occupancy of hollow trees) and were highly variable between species. The mountain brushtail possum showed a significant but relatively minor increase in the number of individuals sharing each occupied tree as hollow tree availability declined. The agile antechinus showed a stronger pattern in the opposite direction, while the greater glider and Leadbeater's possum showed no significant responses. In the case of the mountain brushtail possum, the demographic consequence of increased den resource sharing in sites with fewer hollow trees was a minor buffering of animal abundance against resource decline. The development of cooperative behaviour of various types in response to a per capita decline in resource availability has been observed in other natural systems, including in several bird species [28,29]. 
In line with those studies, our results also suggest increased resource cooperation with decreased per capita resource availability. Such social responses to resource decline could potentially be an important demographic buffer to otherwise negative environmental changes. However, this is an area that has not been studied extensively in a conservation context. Even within the same study region (and species), different studies have revealed contradictory patterns in different populations. Mountain brushtail possums at Cambarville (within the broader region studied here) shared dens less often and used fewer trees where hollow trees were scarce [15,23,46]. These results contradict the present findings and are consistent with predictions from theoretical and empirical work suggesting that resource defence behaviour and intolerance of other individuals develop under resource competition [30]. Potentially, research into kin selection, the scale of individual resource use and resource cooperation, and the scale of heterogeneity in resource availability, may shed light on the discrepancies between these findings [41]. More agile antechinus were observed in each occupied tree on sites with more hollow trees. Most likely, this is a simple consequence of local population size. There are likely to be more animals on sites with more trees because the agile antechinus commonly forages for invertebrates under shed or shedding bark, which is more abundant in old forests than younger forests [47]. The species dens communally in groups of up to 20+ individuals for thermoregulation and pre-mating social interactions [48]. Essentially, there are more individuals available for communal denning in sites with a greater number of hollow trees, and they are likely to actively seek out large communal groups. Since communal denning for enhanced thermoregulation is important for this species in cold climates [48], the declining availability of individuals for communal denning with decreasing hollow tree availability may exacerbate the negative effects of hollow tree decline on this species. Conclusions and Caveats We investigated variation in the proportional occupancy and sharing of shelter resources by arboreal marsupials with regard to variation in the abundance of hollow-bearing trees, a critical shelter resource. We found consistent patterns of an increased probability of use of the hollow trees at a given site where there were fewer such trees per site. However, this was not facilitated by a relaxation of preferential selection for certain decay classes of trees by each species. This functional response was the major 'numerical' buffer to demographic decline associated with shelter resource loss. An important area for future research in this system will relate to the role of other resources limiting a proportional (1:1) increase in abundance with hollow tree availability. The probable influence of food resource limitation was apparent in our data for Leadbeater's possum. For this species, abundance was associated with forest type but not the number of hollow-bearing trees. Leadbeater's possum was most abundant in young regrowth forest. Key habitat requirements for this species include hollow-bearing trees and an understorey of Acacia trees for foraging [4,49]. Several Acacia species regenerate rapidly after disturbances such as fires in these forests.
Because these regenerating forest stands often contain a number of highly decayed dead trees (the preferred tree form for Leadbeater's possum: see Figure 1 and Table 2) and high Acacia availability, they are likely to be ideal habitat for this species. In contrast, old growth forests contain abundant hollow trees but little Acacia understorey [47], such that food limitation is likely to play a larger role than hollow tree availability in the distribution and abundance of this species in such forest stands. The different social responses that we observed under variation in hollow tree abundance suggest that many aspects of a species' biology influence the potential for social plasticity in response to variation in resource availability. In this system, variation in other resources such as food, social aspects of the use of the focal resource (hollow bearing trees) and simple physical considerations (e.g. How many animals can fit in a hollow?) are likely to have played important roles. Sociobiology is a relatively new area of research in conservation biology [44,45,50]. However, it has a strong foundation in evolutionary ecological research [41,51] and has the potential to better inform our understanding of the responses of animals to environmental change. Supporting Information Appendix S1 Basic biology of the four study species. (DOCX)
2017-06-16T16:26:58.672Z
2013-01-08T00:00:00.000
{ "year": 2013, "sha1": "98d57a1c60c3b247aeaa8703ff92c4cb19a974f4", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0053672&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "98d57a1c60c3b247aeaa8703ff92c4cb19a974f4", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
222280045
pes2o/s2orc
v3-fos-license
Implementing social accountability for contraceptive services: lessons from Uganda

Background: Growing evidence shows that social accountability contributes to improving health care services, with much promise for addressing women's barriers in contraceptive care. Yet little is known about how social accountability works in the often-complex context of sexual and reproductive health, particularly as sex and reproduction can be sensitive topics in the open and public formats typical of social accountability. This paper explores how social accountability operates in the highly gendered and complex context of contraceptive care. Methods: This exploratory research uses a case study approach to provide a more grounded understanding of how social accountability processes operate in the context of contraceptive information and services. We observed two social accountability projects that predominantly focused on contraceptive care in Uganda over a year. Five instruments were used to capture information from different source materials and multiple respondents. In total, one hundred and twenty-eight interviews were conducted and over 1000 pages of project documents were collected. Data were analyzed and compiled into four case studies that provide a thick description of how these two projects operated. Results: The case studies show the critical role of information, dialogue and negotiation in social accountability in the context of contraceptive care. Improved community and health system relationships, community empowerment, provider and health system responsiveness and enhanced availability and access to services were reported in both projects. There were also changes in how different actors related to themselves and to each other, and contraceptive care, a previously taboo topic, became a legitimate area for public dialogue. Conclusion: The study found that while social accountability in the context of contraceptive services is indeed sensitive, it can be a powerful tool for dissolving resistance to family planning and facilitating a more productive discourse on the topic.

care, particularly those related to inequitable access and quality of care [28]. Service-level barriers to quality contraceptive services, such as inconvenient hours and long waiting times, inaccurate and incomprehensible information, disrespectful and discriminatory treatment by service providers, along with untrained health care providers, lack of supplies and informal fees, negatively affect women's confidence in and use of contraceptive services, thereby limiting their ability to exercise their reproductive rights [11,17,24,47,50,62]. These conditions exist in some parts of Uganda, where women's and men's ability to freely decide on the number and spacing of their children, and to have the information and means to do so, remains constrained after years of high-level political opposition [3, 27, 34-39, 49, 52, 53, 56]. Uganda's modern contraceptive prevalence rate increased from 14% in 2000 to 35% in 2016 [54]. Yet, many married women (42%) still have an unmet need for contraceptives, particularly those living in rural settings, with lower incomes and less education [54]. Many women face a 40-min walk to public health facilities (3 to 5 km) and, upon arrival, their desired contraceptive services may be unavailable due to stock-outs or because staff are not trained in providing different methods [27,35,39,55].
Congestion in public facilities translates into overstretched staff and long waiting times that exacerbate opportunity costs and social risks [35,56]. Women seeking contraceptive care in Uganda, particularly younger women, report poor treatment by health care providers, and norms about age, marital status, parity and education influence how providers counsel clients [27, 36-38]. Though changing, social and gender norms continue to influence fertility preferences and shape social opposition to women's access to and use of contraception [2, 3, 27, 34, 35, 37-39, 49, 52, 53]. Partner opposition, with the potential threat of verbal and/or physical abuse or abandonment, continues to underlie contraceptive non-use [35,37]. Several studies have shown that social accountability can improve access to and use of contraceptive services [5,22,60]. For example, Gullo et al. [22] found an estimated increase of 57% in family planning use in Malawi following a community scorecard process, and Björkman and Svensson [5] found an increase of 22% in family planning use in Uganda after a report card process. These effects are surprising given that contraceptive care is often a political and personal topic that can be sensitive in the open discussions typical of social accountability. Little attention has been paid to understanding the mechanisms behind this relationship. This paper addresses the lack of understanding of how social accountability processes operate in the highly gendered and complex context of contraceptive care [9,10]. It presents exploratory research that uses descriptive case studies to provide a more grounded understanding of how social accountability processes work to improve access to family planning. We observed two social accountability projects focused on improving contraceptive care in Uganda.

Methods

To address this research gap, we conducted exploratory research using a descriptive case study design in four districts of Uganda, two in Western Region and two in Central Region, over a one-year period. We used qualitative research methods to facilitate the collection of respondents' experiences, opinions, and thoughts that can help better understand the relationship between social accountability and contraceptive care. A case study design allowed for rich and detailed accounts of the social accountability processes over time in each site, outlining the different perspectives and events and incorporating the relevant contextual and implementation information [51,64]. The districts were selected based on the presence of ongoing social accountability projects focused on improving access to quality contraceptive services implemented by Reproductive Health Uganda (see Table 1). Reproductive Health Uganda (RHU) is a Ugandan nongovernmental organization (NGO) with decades of experience in sexual and reproductive health service provision and advocacy throughout the country. Two projects were selected in Western and Central Regions; both aimed to strengthen the capacity of service users and civil society to hold the duty-bearers responsible for providing contraceptive care to account for promised services. The projects operationalized social accountability in different ways: one used what they called a 'community dialogue' approach and the other used a 'community scorecard' approach. The 'community dialogue' brought together groups of village members to identify the issues they faced when accessing family planning.
Drawing on these insights, specially trained representatives (i.e. champions) from these groups interacted with local health authorities to advocate for change and reported back to the village groups. This cycle was repeated on a quarterly basis. In addition, the village participants engaged in health promotion work for family planning and set up self-help groups. The community dialogues were distinct from the community scorecard approach, which directly brought together service users and service providers (both at the facility and in local health authorities) to jointly assess the issues underlying service delivery problems and find a common way of addressing them. The scorecard activities were implemented alongside activities to strengthen wider health sector accountability, such as improving human resources for health and district-wide strengthening of health facility committees. The activities focused on holding a range of health systems actors accountable, from the providers at the health facilities through to local health authorities. Table 2 provides a detailed summary of the projects' objectives, program theories, key actors and approaches. Both projects worked within the decentralized public health service system in Uganda, which is structured into national and regional hospitals, general hospitals, and four levels within districts based on catchment areas [31].

Table 2 (excerpt). Goal: contribute to the strengthening of women's reproductive health and rights in Uganda. Outcomes: by 2017, the demand for quality family planning services has increased by at least 4% in two districts; by 2017, Reproductive Health Uganda has increased participation in and influence on strategically selected local and national policy processes for the promotion of family planning. Theories of change: if citizens are empowered to act on their choices and take the lead in advocating for change, then they would believe and have confidence that they can hold their leaders accountable and influence them to change policies in their favor; this would motivate citizens to demand better services from their duty-bearers, and the persistent collective voice and actions from citizens and community structures would compel duty-bearers to respond by changing the necessary policies and taking other actions that lead to improvements in the accessibility, availability and quality of health and social services; empowered beneficiaries will take responsibility in advocating for sustainable change, which will be attained through rights-holders knowing their rights and how to address these to achieve influence, creation of demand for family planning in the community, advocacy involving women to hold duty-bearers accountable and, in time, improved access to reproductive health services. Key project actors: at sub-district and village levels. Social accountability approaches: combined social audits and citizen report cards, compiling service information from both service users and providers, who are then supported to jointly identify priority issues and develop action plans to address them, implemented alongside activities to strengthen wider health sector accountability; and interactive participatory communication, a participatory process of sharing information between people to help reach a mutual understanding and a workable solution.
Each higher level of the health system provides more specialized functions and supervises the lower-level health facilities. Uganda's 2006 National Policy Guidelines and Service Standards set out the contraceptive services that should be provided by each level of provider, down to the contraceptive pills provided by community health workers [30].

Study instruments

Five instruments were used to extract information from different source materials, to capture multiple points of view in all four sites (two per district) and to provide a grounded description of project implementation. Table 3 describes each instrument used to collect data, the participants and the sample sizes for each component of data collection.

Table 3 (excerpt). Document review: to understand program theories of change, intended outcomes and activity timelines, as well as reported implementation of activities and outcomes; project staff prepared the reports, representing narratives of the project from their perspective; sample size not applicable; over 1000 pages of documents were reviewed, including project activity reports and project planning documents such as log frames/results frameworks, baseline reports, annual reports to funders, and any evaluations undertaken. Context mapping: to understand the prior experience of the community during the previous 3 years, as well as ongoing interventions related to social accountability and family planning; conducted during the first 2 months of the project; 21 participants (10 and 11), purposively sampled, approached through a telephone call and interviewed in person at their workplace; they included district health officials and local nongovernmental organization (NGO) staff. In-depth interviews: to explore experiences and perceptions of activities in the social accountability process with community members, project staff and intervention participants, particularly in regard to family planning, over the year of observation; 73 participants (36 and 37), conveniently sampled at one of 12 activities, including district officials, sub-county and local leadership, health providers and officials, project champions, Community Based Organization (CBO) members, farmers, teachers, business people, male role models, church leaders and project staff; participants were approached face-to-face to be interviewed after the activities in a private location nearby. Non-implementation interviews: as social accountability is a process over time, it was important to consider stoppages and delays; these interviews probed the reasons for and perceived impacts of such interruptions to further understand the realities of implementing social accountability; participants were purposively sampled based on project staff recommendations and included project staff, district and local executive staff, health workers and project champions; they were approached through a telephone call and interviewed privately in their workplace. Remedy and redress interviews: the in-depth interviews and observations indicated instances where a change was reported and attributed to the projects; to examine these changes and unpack the mechanisms of change, interviews were conducted to understand how people thought a particular change came about; 25 participants (16 and 9), purposively sampled through a snowball technique based on their role in the reported change as suggested by the project staff; they included a project champion, a community mobilizer, a male role model, project staff, health committee members, a local executive official, a health official, and a CBO member; they were approached through a telephone call and interviewed in their workplace or a private location, and asked to describe their perceptions of why the change took place and what they believed to be the impacts.

Data collection

Over 1000 pages of project documents were obtained from the project staff, including project plans, log frames/results frameworks, and reporting and evaluation reports. The documents detailed project design and planning and were analyzed to examine what each project intended to achieve and how it intended to go about it. Data were collected through in-depth interviews and observations over a year, starting when both projects had been implemented for at least a year. The in-depth interviews captured information on people's experiences of the events, the core components, and the barriers and facilitators. The observations captured the interactions, attitudes and behaviors of project participants during project activities and events. Eight data collectors (three men and five women, including two of the authors) conducted the interviews and the observations. Data collectors were post-graduate students with at least a university education and previous experience in conducting interviews in sexual and reproductive health, and they spoke the local language. They were recruited and hosted at RHU and received 3 days of training prior to data collection and a one-day refresher training halfway through the study. Interviews were conducted in the local language and, on the whole, there were no existing relationships between the interviewers and respondents, with the exception of project staff who had arranged the data collectors' access to project activities. All respondents were interviewed in private at the project events or at their workplace; they were informed about the study goals in their local language and asked to provide written consent. Interview guides were developed with semi-structured questions and prompts (see Additional file 1). The interviews lasted about 1 h and were audio-recorded with the consent of the respondents. For the observations, field notes were completed by the researchers after attending the events.

Data analysis

One hundred and ninety-three interviews were conducted with a range of respondents (see Table 3 for details on interview type, gender and sampling). Interviews were transcribed into English and each transcript was checked by a member of the research team. The transcripts were coded in Atlas.ti by four coders. Initial deductive codes were developed based on the definition of social accountability and the research question, and emergent codes were identified through jointly coding four transcripts by respondent type. To ensure consistent application of the coding frame, the four coders jointly coded a further four transcripts, and the remaining transcripts were divided between them. Weekly calls were held between coders to review codes, and the lead author checked the consistency between coders. Codes were organized under themes and by respondent type, and written into thematic reports, which were triangulated for each site. Project documents and field notes were analyzed together. Project documents were reviewed to develop a detailed chronology of how the project was implemented. The intended activities were compared against the reported ones, and against the data collectors' observation field notes.
The authors (VB and JG) reviewed and extracted information about the projects from the collected documents. To construct the case studies, the two sets of analysis were compiled into a case study format for each site, with two case studies for each social accountability project. Contrasting the case studies within and between the two projects highlighted the similarities and differences in how social accountability operated in each site, and helped to identify common findings across all the case studies. Case studies and findings for Western Region are designated A1 and A2, and those for Central Region are designated B1 and B2. Within 6 months of completion of data collection, workshops were held with respondents in each site to discuss and validate the research findings.

Results

Case studies of each project are presented as narratives of how social accountability operated in the context of contraceptive care.

Case study A: the community scorecard approach

Prior to the implementation of the project in both districts, there was limited awareness about family planning. There was open hostility to family planning, with rumors abounding about its detrimental side effects such as cancer, fatigue, and weight gain. Given the project's focus on wider health system strengthening, RHU only began to focus their activities on family planning in the second year of the project. RHU introduced the topic of family planning to the project's existing health authority partners, champions and community organizations (CO), who had already been trained in community mobilization, monitoring and advocacy. RHU introduced the new focus on family planning by training their community-based organization (CBO) partners on contraception and supporting them to identify gaps in contraceptive care locally and to integrate these issues into their existing work plans. After the training, the CBO partners undertook extensive community mobilization on family planning alongside sensitizations about health rights. The health rights training was important because community members learnt about their entitlements and what standards of care they should expect. A local teacher said, "You cannot solve the problem you don't know" (A2). Community members shared their new knowledge of entitlements with other community members. The project staff also worked with local religious leaders from different denominations to actively make positive statements about family planning to their congregations. Community members recalled the family planning activities and stressed that family planning was something they had to 'learn', as a local champion explained: Family planning did not emerge from a community dialogue because we learnt about family planning … even on Sundays [when I would] go to church and tell the Reverend that today I would like to teach family planning or maternal health. Now the Reverend also accepts it and gives me a few hours and I teach men, women and all the children when they have all come to church (A1). In tandem with these community-focused activities, the project staff sensitized health providers and local health officials about health rights, accountability and family planning. Local health officials found this training helpful in their work. A local official explained, "Actually it was useful because what they were telling us was more or less teaching us how to uplift our area, especially the communities, because there are very many questions or problems or challenges in our communities" (A2).
After separate sensitization with both the community and the health systems actors, the two groups came together in interface meetings to jointly identify issues and develop priorities and strategic actions to address them. Together they identified shared concerns about misconceptions surrounding family planning, commodity stock-outs and untrained service providers, and how to address them. The desired changes were then regularly monitored by the community. The interactions generated mutual understanding and empathy between community members and health system actors. Community members felt empowered that they could raise their voice, as a local church leader commented, "It was important because what community people said, they got time to say things they don't have [the ability to say] anywhere else" (A1). A health care provider explained how these exchanges made her appreciate the communities' voice: If there is no voice from the community, it may take forever to have a better policy or, if a policy is in place and the community doesn't understand what that policy says or what they will benefit when that policy is implemented, then it may also be another hindrance in policy implementation. So that is where we say we are heading to, we may not be there, but we are somewhere, we have achieved a few milestones along the way (A1). Health system actors began to view community inputs as valuable and forwarded the joint concerns to sub-county and district authorities for further action. There was an emergent sense of collaboration between the community and the health system. A community member who was involved in one such initiative explained, "I have been able to learn many things like working together as a group for the better of our community" (A1). Some facility-related issues, such as poor road access, limited water and electricity supplies, poor provider behavior and lack of security, were attended to locally without relying on assistance from local officials or NGOs. A community member explained this established practice: We as community members…decided to come together, make an effort to solve some of these problems on our own without having to wait for outside help, which should come in later, at least we need to do half of these things ourselves (A1). The project did not report on changes in contraceptive care because this was not part of its performance monitoring requirements. However, there were several changes attributed to the project that indirectly benefitted contraceptive care, such as the recruitment of new health care providers and changes in provider behavior (e.g. wearing of uniforms and posting of duty rosters). The project participants themselves reported increased awareness of health issues, including family planning, more confidence in the health system, and a feeling that their local health care providers and local leaders listened and acted on the issues they raised. Though the project tended to focus on broader health systems strengthening rather than family planning, it was apparent that, over time, access to family planning became a legitimate concern for discussion in community forums.

Case study B: the community dialogue approach

There were frequent reports of social resistance and opposition to contraception in both districts where the project was implemented. A project champion explained, "Women used to fear to come out in the open to demand a family planning method of their choice.
They feared that if people knew they were using family planning, they would think they are a prostitute, that, along with their husband, is there another one (126:54)?" There were misconceptions about how contraception worked and what kind of effects it had, such as causing foetal abnormalities, cancer and fibroids. Another champion explained how contraceptive use was influenced by gender norms: "When they [women] get married, their bodies, including the sex part, traditionally belongs to the man. The man decides when to have sex, when to have children and how many children" (B1). In contrast with case study A, family planning was integral from the outset of the project. There was ongoing sensitization of both communities and local health systems actors on family planning to dispel myths and tackle social resistance. The sensitization was conducted through women's pressure groups, male role model groups, radio programs, couple counseling seminars and workshops among sub-county leaders. These activities were key, as a champion said, to "clearing the image that people were painting of family planning" (B1). RHU trained community champions (local women with social standing who had experience and skills in negotiating with leaders) and supported the formation of women-only "pressure groups" (PG) in the project villages. The champions and PG members met every 2 months to learn about family planning, prioritize which access barriers to address and report on the number of people they had sensitized about family planning. Often RHU sent in their own service providers to conduct the training, as a high degree of technical knowledge was required. The barriers they discussed ranged from transport costs, male resistance, myths and misconceptions, religious opposition and disrespectful providers to a lack of information in local languages. The newly formed groups organized the meetings, developed their activities and led sensitization about family planning in person or at social gatherings. To attract members, RHU launched income-generating activities in catering, animal rearing and savings. The activities proved so successful that these groups registered with the local authorities as independent organizations so they could access additional funds. As male opposition was considered a barrier to accessing family planning services, the project supported couple counseling and the formation of male role models (MRM). Male role models were local men with social standing trained to promote family planning with other men, both individually and in groups. Participants in these community meetings thought they were catalytic and helped them develop personally and gain self-worth. One MRM said, "Every time you are with a person, you bring a good idea, and someone claps for it, it means it is of value" (B1). A youth leader felt that they could make a difference, "What did I learn? I learned that even though you are small, you can make an impact" (B1). A project champion observed these changes: The people who attended were so free, why am saying that, when these things started there were people who had low self-esteem most especially pressure groups they used to fear, when they would see champions they would fear. But slowly we started visiting them in their activities, we would be called, and we go to explain to them where they do not understand. So you look and see a champion sitting here, a pressure group sitting there, but it was never there (B1).
Members appreciated learning from each other and working together, as one community member said: Everyone was getting a chance. You know in that group we have health workers, we have village women, teachers … so to say there is also a class of people that do government work and also a class of women who do their own work in the villages but we were in the meeting and everyone was being given a chance to talk (B2). Once the barriers to contraceptive care were identified in the community meetings, the project champions and RHU met with local authorities to advocate for change. For example, one barrier identified was that women could not seek contraceptive services because of the prohibitively high transport costs or fees charged during private outreach programs. The champions and RHU therefore advocated with the district health authorities to integrate family planning into ongoing immunization outreach. These advocacy efforts were relatively successful, with sub-county and district funds committed to family planning in the first year of the project. In the subsequent years, the advocacy focused on ensuring the committed funds were released and used as intended. In the meetings with local authorities, people with different backgrounds shared their experiences and ideas, and learned from and about each other. Local officials came to value inputs from the surrounding community, "We lack such information and yet people run to us to provide a solution to them, so we need such information more than ever" (B2). Officials also appreciated learning about family planning and shared this with their constituencies, "My role was to participate, to see what family planning is, to understand what they have taught so that I can go back and spread the gospel to my people who have not got this chance" (B1). Over time, however, participants reported divisions among the community actors. The champions were treated differently; they received additional training and engaged directly with officials in ways that the other community groups did not. This difference was palpable, as a pressure group member stated: "We have the champions, those that are higher than us, because for us, we are lower as PG members" (B1). These social differences among community members played out in what was included in the dialogues with health system actors; quality-of-care issues identified by the pressure groups received less attention than policy-related barriers, particularly those with budgetary implications. Over the course of the study, the MRMs began mobilizing for other projects they were involved in, and the PGs set up autonomous groups with funding from other sources. The project successfully secured several budget lines for family planning, many self-sustaining women's groups were formed, and a district-wide platform to coordinate family planning was established. There was a more positive attitude towards family planning, and it had become a legitimate public concern. One champion said, There has been some change, because back then we had our people who never wanted to hear anything about family planning because they were taking it in a very different way, they would say stop-stop don't even tell us, so that is what has been on ground. But now they even visit us and also us we visit them after, so we see a great change because people are now involved in family planning, something that had never happened in the past (B2).
Discussion

With the increased attention to social accountability and to making health systems more accountable, the two projects aimed at improving access to quality contraceptive services implemented by RHU in Uganda offer important lessons on how social accountability operates in relation to contraceptive services. Both projects, like other studies of social accountability, reported improvements in community and health system relationships, community empowerment, and provider and health system responsiveness to community concerns [22,43]. These changes were generated by actors viewing themselves and each other differently, whether by considering themselves as agents of change or by taking the views of others as valuable. In addition, over the course of both projects, family planning went from being taboo to an appropriate topic for public dialogue. Much like other accounts of social accountability in the context of reproductive health and maternal health, across the four case studies, information, dialogue and negotiation were central to the change process [20,23]. The case studies provided grounded examples of information, dialogue and negotiation that can inform policy and practice.

Information

Social accountability in the context of contraceptive care is about more than improving access and quality; it entails bringing stigmatized or unacknowledged concerns to the surface and making silenced issues legitimate areas of public concern. In both project sites, contraception was considered a 'woman's issue' and/or a source of fear and social anxiety. This required a special programmatic emphasis on changing the associations surrounding contraception. RHU, an external actor that introduced the topic, was described as bringing 'new knowledge' into the community. Wide-ranging efforts (such as small-scale sensitization, working with religious leaders to make positive statements, and enrolling male role models to work throughout the villages) were used to challenge popular perceptions regarding contraception, including emphasizing its social and health benefits, and dislodging misconceptions and social opposition held by community members, health care providers, and duty-bearers. Efforts geared towards making an issue a legitimate area of public concern are a distinctive feature of social accountability in the context of contraceptive care. Raising awareness and sharing information was itself insufficient to prompt change in either project site; as others have argued, it is critical that health systems actors are willing and able to respond to community demands [16,18,26]. Viewing people's claims as legitimate and showing "receptivity to the ideas and concerns raised by citizens by implementing changes to the decision-making or management structure, culture, policies or practices" ([32]: 130) are central to bringing about change [29,32]. In the case studies highlighted in this study, health system actors were actively supported to overcome their own misconceptions both about contraception and about community engagement. Only then were the key stakeholders in contraceptive care open to listening to community needs and preferences [21,23,41]. Yet the capacity and willingness of duty-bearers often go unrecognized [20,29,41,43]. The need to support health care actors in social accountability was not reflected in the projects' documents, which treated responsiveness as an outcome rather than as part of the process.
In both sites, however, project participants worked with local officials and health sector actors to raise their awareness about contraception, to actively value the concerns coming from the community, and to improve their knowledge about the health system and their own roles in it.

Dialogue

In the social accountability canon, dialogues are a central mechanism to counteract the hierarchical nature of the health services [14,20,22,23,41,43-45]. Dialogues transform both how people perceive themselves and their ability to effect change, and how they perceive and interact with others, particularly those in socially advantaged and/or privileged positions. The community-level groups, including those initiated by the project and those that grew out of the projects, provided spaces for both women and men to reflect on issues that affected their reproductive health and to recognize them as shared concerns; this created a sense of agency and solidarity that translated into the dialogues and negotiations with health care providers and health officials. In these groups, people felt they could speak, that they were heard and valued, and that their personal capabilities were expanded by working together. They increasingly viewed themselves as active and able to bring about change themselves; change was not something done for them. The projects supported dialogues between people from different backgrounds and positions (from community members, community representatives and project staff to health care providers and local health officials) and offered new ways of interacting. Scott et al. [44] suggested that these kinds of dialogues are novel spaces where people who do not ordinarily come together meet and the normal rules of interaction are momentarily suspended. In these 'safe spaces', people are encouraged to reflect on their biases and assumptions, realize their own limitations and express their own constraints and frustrations [41,44]. As other studies of social accountability have found, these are opportunities for bi-directional information sharing, for expressing grievances and priorities, for official explanations of policies and responses to concerns, and for co-producing priorities and ways to address them [6,21,22,41,43]. In a context where there is social resistance to contraception, with reports of social and physical opposition from partners, parents, health care providers and religious leaders, and with limited knowledge and information about contraceptives, the dialogues do more than challenge hierarchies. In both case studies highlighted in this paper, the dialogues played a critical role in sharing more positive ideas and information about contraception. The importance of small-scale social networks in increasing the local acceptability of family planning is well documented [4,8,33]. Demographers have noted that through social networks, local influencers facilitate discussions about 'inappropriate topics', increase exposure to positive experiences and positive outcome expectations of family planning, and facilitate opportunities to assess the relative benefits, all of which contribute to a more supportive environment [19,58]. For women who may experience social isolation or internalize harmful social norms, forming local groups provides critical social support, confidence-building and self-efficacy in seeking family planning services [12,13,15].
Dialogues play several critical functions in how social accountability operates in contraceptive care, such as supporting local solidarities to challenge hierarchies and expanding the local acceptability of family planning, particularly where there is social opposition.

Negotiation

Dialogues can lead to new alliances in which communities and health system actors work together to negotiate with higher-level authorities to bring about change. In both projects, by the end of the study, community inputs were increasingly valued by health system actors and were actively sought for inclusion in district and sub-county planning and budget meetings, district management meetings and health sector committee meetings. In addition, local health actors worked with community partners to develop strategies to ensure the implementation of policies locally, as well as upstream, to press senior officials to act. In the two projects studied, the role of community representatives (the champions in the community dialogue approach or sub-contracted local community groups in the community scorecard model) was notable. People with local social stature and strong networks were strategically leveraged to enhance the credibility of efforts and to help legitimize the demands of those in less socially advantageous positions. Scholars working in the field of social accountability have identified the legitimacy of groups and their demands as a critical factor in confronting the unequal relationships behind inequitable access to care [14,20,41]. Marginalized groups often draw on the social capital generated through connections with local community groups, non-governmental organizations and local personalities to help legitimize their demands [41]. Yet findings from this study show that the participatory mechanisms were managed by more privileged participants, who promoted what they thought was most important, while those more vulnerable were sidestepped. In both cases, the social standing of the community representatives played a role in legitimizing women's demands and acted as a safe outlet for women who felt social constraints in what they could say and do for fear of social censure. Given the resistance to family planning in the study setting, navigating these gendered norms and hierarchies may be an unavoidable necessity and essential for women in the community. Intimate partner violence as a result of contraceptive use is well documented globally and in Uganda, and many women resort to covert use of contraception [1,7,48,61]. Given the social and physical risks associated with contraceptive use, working with intermediaries may be a safer way to raise and discuss sensitive issues and to deflect the risks posed to individual women who demand quality contraceptive information and services. The findings from this study show that the social complexities of communities were, at a minimum, recognized in the program design. The findings of this study show that contraceptive care has some unique features. First, early in the process, special programmatic efforts were required to make contraception a legitimate topic of concern among community and health system actors. This study shows that sexual and reproductive health decisions, including access to family planning, are regulated by local norms and expectations around gender, marriage and kinship, and are often considered the private domain of the family.
Its private connotations may prevent it from being raised in public forums, and special endeavors are required to make it an acceptable topic for public discussion. This study found that positive public conversations about contraception set the ground for more open conversations about barriers and solutions to information and services. The findings further suggest that another unique dimension is the need to strike a balance between personal safety and participation in dialogues and negotiation. The social and physical risks around publicly discussing sexual and reproductive decisions are unmistakable and require careful attention. The strategic use of community representatives who were socially advantaged and potentially less exposed to the social and physical risks presented an innovative solution in both project sites. Yet, in both cases, the alliance between the community groups and their community representatives was short-lived, as they split towards the end of the project to pursue their own interests. Social accountability approaches are likely to be sustainable, particularly those that involve changing local perceptions and acceptability of contraception and supporting the formation of trained local solidarity groups, which could extend beyond the lifespan of the project.

Limitations

This study has certain limitations. First, the study districts were purposively selected because they were conducting social accountability projects to improve access to family planning, and both projects were implemented by RHU. RHU has been working on sexual and reproductive health and rights in Uganda for decades and is therefore not representative of all NGOs doing similar work. Second, there was a potential bias towards positive responses. The interviewers were recruited by RHU and could have been perceived as RHU staff, and respondents were recruited during the activities or based on recommendations from project staff. This potential bias was addressed by including respondents beyond project implementers and participants. Furthermore, bias was minimized through triangulating the sources when interpreting data and through validation of the findings with the respondents at the end of the study. Where the responses differed, the differences were described in the analysis and interpreted by the coders in the context of other findings. Third, the paper reports the changes as perceived by the respondents based on one year of project implementation, rather than an impact evaluation. The study design had originally included measuring contraceptive uptake, but there was no suitable program data or service delivery data available to assess this outcome. In crafting a concise narrative it was necessary to leave out some data. One such dimension was the deep frustration about the delays in funding that resulted in stoppages in the implementation of activities, which affected the trust, morale, credibility, and opportunities that the projects had been building upon.

Conclusion

The case studies included in this paper provide important insights into how social accountability works in the often sensitive context of sexual and reproductive health care that are cogent for other settings and for socially and politically complex issues. We found that while social accountability in the context of contraceptive services is indeed sensitive, it can be a powerful tool for dissolving resistance to family planning and facilitating a more productive discourse on the topic.
Findings from the case studies show that social accountability can generate common causes and, through information, dialogue and negotiation, lead to slow-burn transformative changes that can improve health services and wider social dynamics.
2020-10-12T13:43:08.875Z
2020-10-12T00:00:00.000
{ "year": 2020, "sha1": "4ee1c0e87f5ee281214d5cfefcb54607a3ca73a4", "oa_license": "CCBY", "oa_url": "https://bmcwomenshealth.biomedcentral.com/track/pdf/10.1186/s12905-020-01072-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4ee1c0e87f5ee281214d5cfefcb54607a3ca73a4", "s2fieldsofstudy": [ "Political Science", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
235243556
pes2o/s2orc
v3-fos-license
The inside of me: interoceptive constraints on the concept of self in neuroscience and clinical psychology

Humans are unique in their ability to think about themselves and carry a more or less clear notion of who they are in their mind. Here we review recent evidence suggesting that the birth, maintenance, and loss of the abstract concept of 'self' are deeply tied to interoception, the sense of internal physiological signals. Interoception influences multiple facets of the self-concept, cutting across its material, social, moral, and agentive components. Overall, we argue that interoception contributes to the stability of the self-concept over time, unifying its layers and constraining the degree to which it is susceptible to external influences. Hence, the core features of the self-concept are those that correlate more with inner bodily states. We discuss the implications that this may have for theories of embodied cognition as well as for the understanding of psychiatric disorders in which the concept of self appears fragmented or loose. Finally, we formulate some empirical predictions that could be tested in future studies to shed further light on this emerging field.

Introduction

The concept of self, i.e. who you think you are, encompasses the sum total of the features that you affirm of your own being and deny of all other beings. However, the number and type of these distinctive features can vary widely. Evidence from classical psychometric tests, such as the Twenty Statements Test (Kuhn & McPartland, 1954), suggests that people define themselves by describing their physical appearance (I am six feet tall; I am dark-haired), telling their biographical details (I am the son of a surgeon), stating their social roles (I am married; I am a pianist), specifying their political, ethnic, or religious group (I am a conservative; I am Jewish), listing their virtues and vices (I am honest; I am lazy), or locating their place in the grand scheme of things (I am a human; I am a rational animal). Thus, the self is a multifaceted construct, encompassing a broad set of information that ranges from the mundane and material to the social and spiritual. At the same time, the self is indeed a concept and not a motley array of disconnected details, since that set of information consistently points to a unique individual across a diverse range of circumstances. More specifically, the self-concept is an abstract concept, as it exhibits the two peculiar marks of abstractness: it does not have a single and perceptually bounded kind of objects as referent, and it can be applied to a variety of complex situations (Borghi et al., 2018). Abstract concepts have long been considered a challenge for theories of embodied cognition, as they seem to lack a clear, unambiguous bodily counterpart. However, recent studies on abstract concepts suggest that the birth, maintenance, and loss of these concepts in the human mind can in fact be correlated with bodily states (Borghi et al., 2018; Connell et al., 2018; Villani et al., 2021). As a case in point, in this review we aim to show how a highly symbolic notion like the sense of self is shaped by radically corporeal dimensions like those carried by interoception, i.e. the perception of physiological signals coming from the inner parts of the body, such as heartbeats, breaths, gastric contractions, and other cues that inform us about our homeostatic condition (Craig, 2002).
The upshot is that internal bodily signals exert a pervasive, stabilising influence on three of the four layers of James's (1890) classical partition of the self: not just the material self (the concept of myself as a material being), but also the social self (the concept of myself as a member of society) and even the spiritual self (the concept of myself as a moral person). Furthermore, we will review the preliminary evidence suggesting that internal bodily states are also linked to the fourth Jamesian layer, namely, the pure Ego (the concept of myself as a thinking and acting subject). Across all levels, we will also assess what happens to the concept of self when interoceptive signals are not processed in a proper way. Finally, we will discuss the implications of these interoceptive constraints for theoretical accounts of the self-concept and formulate some empirical predictions that could be tested by future studies in the field.

Interoceptive constraints on the material self

If one had to make an inventory of the features that uniquely define one's own being, in all likelihood one's own body would feature prominently on the list. We spontaneously include our body in our concept of self because we feel that the body is always with us, at least when we are sane and awake. Thus, the body is not just an object among many others; rather, it is the key component of our material self (James, 1890). This feeling of intimacy with one's own body is variously termed corporeal awareness (Berlucchi & Aglioti, 1997; Critchley, 1979), embodiment (Longo et al., 2008) or bodily self-consciousness (Blanke et al., 2015). It can be thought of as a three-pronged construct, as someone who is aware of their body is aware of having a body (sense of body ownership), of controlling its movements (sense of body agency) and of dwelling in it (sense of body location). Research on neurological disorders (Blanke & Arzy, 2005; Ronchi et al., 2018) and on bodily illusions like the rubber hand (Botvinick & Cohen, 1998), the full-body illusion (Lenggenhager et al., 2007), the body swap illusion (Petkova & Ehrsson, 2008) and the enfacement illusion (Paladino et al., 2010; Sforza et al., 2010; Tsakiris, 2008) implies that the body becomes part and parcel of one's material self when the appearance of the body, the position of its parts, and other cues occur at the same time and in the same place, so that the brain can integrate them in an efficient manner (Blanke et al., 2015). In principle, these cues can come both from outside and from inside the body. Classical studies on corporeal awareness chiefly focused on exteroceptive signals, from touch to vision: that is, researchers investigated how various features of exteroception determined specific changes in the awareness of one's own body, both in its constituent parts and as a whole. However, interoception, i.e. the sense of the physiological condition of the body (Craig, 2002), is increasingly thought to have an important role in shaping corporeal awareness too (Craig, 2009; Critchley & Harrison, 2013; Herbert & Pollatos, 2012; Park & Tallon-Baudry, 2014). To date, research trying to ascertain the impact of interoceptive signals on the material self has been hampered by the fact that these signals are extremely diverse and difficult to record, but some significant progress has been made. For example, a series of experiments applied an interoceptive twist to the standard bodily illusions. Tsakiris et al.
(2011) found that the degree of susceptibility to the rubber hand illusion correlates with individual levels of interoceptive accuracy, i.e. how well participants objectively detect inner bodily signals at the conscious level, as in the heartbeat counting task (Schandry, 1981): the worse their performance, the more likely they were to self-report that the rubber hand was theirs. In a similar vein, people with low interoceptive accuracy are more prone to include the face of another person in their self-face representation (Tajadura-Jiménez & Tsakiris, 2014). Although these studies tantalisingly suggest the existence of a closed loop between interoception and corporeal awareness, they did not manipulate either the physiological signals giving rise to interoception or the perceptual or symbolic representation of these signals in the minds of the participants. Other research groups went a step further by making a virtual rubber hand (Suzuki et al., 2013), a full virtual body (Aspell et al., 2013) or a face (Porciello et al., 2016; Sel et al., 2017; but see Porciello, Bufalari, et al., 2018) flash either in sync or out of sync with the participant's heartbeat. Their results suggested that visuo-cardiac synchrony generally boosts embodiment. However, one may wonder whether cardiac signals can be considered an embodiment factor also in ordinary ecological circumstances, in which heartbeats are perceived only faintly and transiently. An elegant study by Park et al. (2016) answered the question in the affirmative. Combining electrocardiography, electroencephalography, and virtual reality, they discovered that heartbeat-evoked potentials (HEPs) originating from the posterior cingulate cortex are linked to changes in corporeal awareness induced by the full-body illusion. This landmark result spurred the quest for other forms of coupling between visceral signals and bodily self-consciousness beyond the cardiac domain. Rebollo et al. (2018) found that the electrical oscillations coming from the interstitial cells of Cajal in the stomach can also explain a sizeable degree of fluctuation in the resting BOLD signal of a cluster of brain areas they termed the 'gastric network'. Although the authors propose that this gastro-cerebral coupling contributes to the representation of the bodily self, further research is needed, since this study relied on a no-task, resting-state paradigm and the gastric network may simply act as a homeostatic circuit sensitive to hunger cues (Porciello, Monti, et al., 2018). Other lines of research on corporeal awareness targeted physiological signals straddling the boundary between interoception, proprioception and exteroception. For example, pleasant touch, which is mediated by 'interoceptive' C tactile afferents (Björnsdotter et al., 2010), is more effective than neutral touch, which is coded by 'exteroceptive' Aβ afferents, in giving rise to the rubber hand illusion (Crucianelli et al., 2013; Lloyd et al., 2013; van Stralen et al., 2014). However, both kinds of touch are equally effective drivers of the full-body illusion (Carey et al., 2021). Recent studies have also elucidated the embodying power of another multisensory cue, respiration. In a variation of the cardiac full-body illusion (Aspell et al., 2013), participants report a higher degree of self-identification with virtual silhouettes flashing in sync with their breathing (Allard et al., 2017).
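Several of the findings discussed in this section hinge on individual differences in interoceptive accuracy, typically measured with heartbeat counting tasks of the kind introduced by Schandry (1981) and mentioned above. The short sketch below (in Python) illustrates how such a score is commonly derived; the exact formula, the trial durations and the numbers used here are illustrative assumptions rather than the procedure of any specific study cited in this review.

# Illustrative sketch of a heartbeat-counting accuracy score in the spirit of
# Schandry-style tasks; scoring formulas and trial structures vary across
# studies, and the values below are hypothetical.

def interoceptive_accuracy(recorded_beats, counted_beats):
    """Mean over trials of 1 - |recorded - counted| / recorded."""
    assert len(recorded_beats) == len(counted_beats) and recorded_beats
    scores = [
        1 - abs(recorded - counted) / recorded
        for recorded, counted in zip(recorded_beats, counted_beats)
    ]
    return sum(scores) / len(scores)

# Hypothetical participant: three counting intervals (e.g., 25 s, 35 s, 45 s)
recorded = [28, 39, 51]  # beats measured from ECG or pulse oximetry
counted = [22, 30, 47]   # beats silently counted by the participant
print(round(interoceptive_accuracy(recorded, counted), 3))  # -> 0.826

Higher scores indicate that the participant's silent counts track the objectively recorded heartbeats more closely, which is the sense in which 'interoceptive accuracy' is used when these individual differences are related to embodiment and to the self-concept below.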
In the new 'embreathment' illusion, experimental subjects show higher ratings of body ownership and agency towards a full-fledged virtual body that inspires and expires in time with the real body of each participant, compared to an avatar that displays an inverted, anti-phase breathing pattern (Monti et al., 2020). Furthermore, the strength of the embreathment illusion is moderated by individual levels of interoceptive accuracy and sensibility: those who objectively detect bodily signals with more accuracy, or subjectively report paying more attention to them, are also those who are less swayed by the illusion itself (Monti et al., 2020). Symmetrically, patients with eating disorders are both worse than healthy controls at feeling their cardiac signals (Jenkinson et al., 2018; Pollatos et al., 2008) and more likely to embody a rubber hand (Eshkevari et al., 2012; Keizer et al., 2014). In a similar vein, schizophrenia patients are less interoceptively accurate (Ardizzi et al., 2016), are more prone to incorporate a rubber hand than controls (Peled et al., 2000; Thakkar et al., 2011), and display an altered sense of body agency (for an excellent review, see Hur et al., 2014). These convergent findings suggest that inner bodily signals, if properly felt, buttress the material self so that it becomes more stable. Later on, we will see how this could fit into a bigger picture of the relationship between interoception and the whole concept of self.

Interoceptive constraints on the social self

James (1890) perceptively noted that "if every person we met 'cut us dead,' and acted as if we were non-existing things", they would wound us more than if they physically tortured us. This form of social pain implies that we would deem ourselves incomplete if we did not get any recognition from our mates. Thus, in addition to the material self, we also have a social self: our concept of ourselves includes not just our body, but also our place in society, that is, our role, our rank, and our reputation. This is especially true if 'society' is taken in the more restrictive sense of the social groups we belong to, i.e. our family, friends, colleagues, peers, and so on. The specific interaction we have with each of those groups reveals a specific facet of our social self, which has as many referents as persons we get involved with as siblings, workers, or fellows. Although interoceptive signals are private and almost invisible to others, a growing body of evidence is revealing how interoception shapes the social self in a diverse range of circumstances, often to a surprising extent. For example, interoceptive signals can directly prevent the social self from being confused with the identity of someone else, a distinction which is key to a well-balanced social life (Decety & Sommerville, 2003). In particular, a recent study by Babo-Rebelo et al. (2019) revealed that cardiac signals contribute to encoding the difference between self and other, as imagining oneself in a first-person perspective induced different heartbeat-evoked responses (HERs) at the level of the precuneus and posterior cingulate cortex with respect to imagining a friend in a third-person perspective. Besides shaping the social self as a whole, interoceptive signals can also increase or reduce the chance that specific social properties are included in one's self-concept, i.e. the chance that one thinks of oneself as socially anxious, afraid of being judged by others, accommodating to injustice, and so forth.
People who are more attuned to interoceptive signals are also more likely to consider themselves as socially anxious (Oldroyd et al., 2019; Pollatos et al., 2007; Stevens et al., 2011; see also Terasawa et al., 2013; but cf. Krahé et al., 2018). On the contrary, individuals who have a physiologically higher resting blood pressure, which is linked to a reduced interoception of physical pain (see Makovac et al., 2020), also report feeling less social pain, since they are less likely to describe themselves as fearful of receiving negative evaluations or worried about social rejection (Inagaki et al., 2018; on the role of pain as a threat to the social self, see also Karos et al., 2018). Poor interoception is also linked to high levels of self-reported avoidant attachment (Oldroyd et al., 2019) as well as to several impairments affecting the social self, such as those characterising autism (Quattrocki & Friston, 2014; but see Shah et al., 2016), eating disorders (Ambrosecchia et al., 2017), and conscious vicarious pain/mirror-pain synaesthesia (Bowling et al., 2019). In turn, social interactions can influence the processing of internal physiological signals. In particular, unbalanced economic offers reduce the perception of pain (Mancini et al., 2014; Nicolardi et al., 2020). Furthermore, direct gaze increases interoceptive accuracy (Isomura & Watanabe, 2020), while social exclusion decreases it (Durlik & Tsakiris, 2015). Thus, the link between interoception and the social self is bi-directional, suggesting a feedback cycle in which inner physiology significantly influences who we think we are when we interact with others, but is also influenced by top-down social cues. As we transition to a world in which traditional social interactions involving the physical presence of the agents are increasingly supplemented or replaced with digital or virtual interactions, a key challenge will be to preserve as much bodily information as possible in the new virtual contexts in order not to disrupt the feedback cycle between interoception and the social self (Monti & Aglioti, 2018).

Interoceptive constraints on the spiritual self

The concept of self does not include only one's body and social role; our notion of who we are usually extends also to what James (1890) calls our spiritual self, that is, our "inner or subjective being", our "psychic faculties or dispositions". Among these faculties, James lists cognitive skills, morality, and agency, remarking that "[t]hese psychic dispositions are the most enduring and intimate part of the self, that which we most verily seem to be (James, 1890, p. 296)". Indeed, moral traits are so essential to the self that, if they change or disappear from an individual, people are more likely to deem that she is no longer herself than when she changes her appearance, perceptions, or personality (Strohminger & Nichols, 2014; Strohminger et al., 2017). Internal physiological signals also shape, or misshape, this part of the self-concept. In particular, personality types that belong to the Dark triad (e.g., Narcissism, Machiavellianism, and Psychopathy) feature not only a socially malevolent character and tendencies toward self-promotion (Paulhus & Williams, 2002), but also anomalous patterns of body awareness.
Specifically, Machiavellians have little trust in their own bodily sensations, psychopaths report being only feebly aware of their interoceptive feelings, and narcissists report an increased interoceptive awareness, more trust in bodily signals, and higher attention regulation abilities (Lyons & Hughes, 2015). More generally, interoception can impact how one perceives one's own moral temper and character. Indeed, people who are better at detecting their cardiac beats are also more likely to believe they are better at regulating their affective states, as they self-report being more inclined to reappraise negative situations (Füstös et al., 2013; Pollatos et al., 2015), suppress their emotions (Pollatos et al., 2015), swiftly and flexibly adapt their emotions to environmental changes (Shaw et al., 2018), differentiate among affective states, and temper their impatience (Weiss et al., 2014; cf. Jäger et al., 2012). At the same time, low interoception has been associated not only with low self-regulation, but also with low resilience (Haase et al., 2016). Surprisingly, the role of interoceptive signals in what contemporary readers would more naturally call the 'spiritual' self, i.e. one's religious identity and attitude towards transcendence, has never been subject to thorough empirical investigation. A partial exception is the neuroscientific study of meditation. Straddling the border between the religious and secular world, meditation is a set of diverse techniques that often rely on conscious monitoring of respiration and bodily feelings in general (Gibson, 2019). The prolonged exercise of such techniques promotes changes in the concept of self, reducing self-judgment, increasing self-centredness, self-directedness, cooperativeness and compassion, and inducing a heightened religious/spiritual self-representation (Campanella et al., 2014; Crescentini, Urgesi, et al., 2014b; Fiori et al., 2017; Neff, 2016; Neff & Germer, 2013; for a review, see Crescentini & Capurso, 2015). Although a consistent finding across the literature is that meditation alters the structural and functional properties of the insular cortex, a key hub of the interoceptive and self-related brain networks (Craig, 2009), the existence, strength, and direction of a causal link between meditation and interoception is far from clear (Gibson, 2019; Khalsa et al., 2020). However, many common explicitly religious practices, from fasting to iterative prayers, tap into the conscious regulation of interoceptive signals such as hunger and respiration to achieve a state of higher spiritual awareness. Based on previous data on the environmental, sensory and neural underpinnings of religion and spirituality (see Crescentini, Aglioti, et al., 2014a; Crescentini et al., 2014b; Urgesi et al., 2010), a recent theoretical model of the neurobiology of mysticism predicts that during self-transcendent experiences interoceptive inputs should be substantially attenuated or decoupled from exteroceptive signals (van Elk & Aleman, 2017). To test these and other related predictions, new empirical studies may systematically explore or manipulate the relation between active downregulation of physiological signals, interoceptive accuracy, and self-transcendence.

Interoception and the pure ego

So far, we have shown that changes in the intensity, frequency, and conscious perception of inner physiological signals covary with changes in the likelihood that certain properties are included in one's notion of oneself or not.
In so doing, we have focused on the concept of self as a bundle of self-properties, that is, properties that someone consciously ascribes to their own person. However, the act of ascribing these properties implies that there is an agent who ascribes them to the self. In principle, the self as agent is not identical with the self as bundle of properties. Building on this, James (1890) asserts that the self-agent (which he calls "pure Ego", or "I") is the thinking subject that recognises the self-bundle (which he calls "Me") as an object that keeps existing and being the same across time, regardless of whether this continuity is true or illusory. Does interoception shape only the notion of the "Me" or also the notion of the "I"? Sparse but tantalising evidence points to the fact that interoception can indeed inform also one's concept of oneself as a thinking and acting subject. Although the precise interplay between interoception and meditation is not firmly established (see above), meditators report that their training allows them to experience the self not as a static being that keeps existing in the same way across time but in an event-like manner, as if a temporary "Me" was created and dissolved with the ebb and flow of desires and impulses (Olendzki, 2006; cf. Farb et al., 2007 and Gibson, 2019). By definition, this conscious 'loss of selfhood' cannot take place in the "Me"; rather, it is the "I" that experiences it as a consequence of an interoception-based meditation. Moreover, the experience of 'losing oneself', so that the commonsense distinction between the self and the world does not make sense anymore, occurs also in several psychiatric disorders, notably depersonalisation-derealisation (DD), which is characterised by impairments in self-awareness, feelings of disembodiment, and emotional numbing. A single case study (Sedeño et al., 2014) showed that DD is indeed characterised by altered interoception at both the behavioural and neural level. In a similar vein, another experiment found that, relative to baseline, heartbeat-evoked potentials (HEPs) recorded during a heartbeat counting task become more prominent in controls but not in DD patients (Schulz et al., 2015). However, another study found that DD patients' cardioceptive accuracy does not significantly differ from controls' (Michal et al., 2014). This suggests that these patients might have difficulties in integrating visceral signals into their own representation, rather than difficulties in interoceptive awareness per se. At the same time, there is preliminary evidence indicating that interoceptive exposure exercises like hyperventilation may trigger depersonalisation/derealisation experiences in a non-clinically anxious sample (Lickel et al., 2008). More research is needed to sift through these conflicting pieces of evidence and also to assess whether there are impaired interoceptive patterns in other neuro-psychiatric conditions, such as the Cotard syndrome, which is characterised by nihilistic delusions of non-existence or immortality (Berrios & Luque, 1995). More straightforwardly, a series of clever experiments (Babo-Rebelo, Richter, et al., 2016a; Babo-Rebelo, Wolpert, et al., 2016b) sought to tease apart the visceral influences on the "I" and the "Me" dimensions of the self by combining magnetoencephalography (MEG), heartbeat signals, and mind-wandering. In the first experiment of the series, participants mind-wandered while their MEG responses to heartbeats were monitored.
At random intervals, they were interrupted by a visual signal and asked to rate how much the interrupted thought included the self as an agent ("I") or as an object ("Me"). Results showed that i) the more the thought was related to the "I", the stronger the neural responses to heartbeats in the ventral precuneus; ii) the more the thought had to do with the "Me", the stronger the neural responses to heartbeats in the ventromedial prefrontal cortex (Babo-Rebelo et al., 2016a). A subsequent study confirmed and refined these results through intracranial electroencephalography (iEEG) and suggested that the right anterior insula may also encode the "I" dimension of the self (Babo-Rebelo et al., 2016b).

Interoception as a firm foundation of self

Overall, the studies reviewed in the previous sections indicate that inner physiological signals exert a remarkable influence on the material, social, and spiritual facets of the abstract concept of self in its "Me" instance. There is also preliminary evidence linking interoception and the "I" (Fig. 1). Across all levels of analysis, a common thread is the fact that the most intimate, unique, unchanging features of our selves seem to be those which are, quite literally, closest to our heart, i.e. most influenced and shaped by interoceptive signals. On the contrary, extrinsic, negotiable, transient features have a looser link with interoceptive signals. Thus, we claim that interoception provides the self-concept with a firm foundation, contributing to its stability and sanity over time by making it less permeable to external influences. Insofar as the material self is concerned, a number of authors have already underscored that those who are better at detecting cardiac and respiratory signals are also those who are less prone to bodily illusions: the fact that they are more attuned to their bodily signals translates into a greater stability of their material "Me" (Monti et al., 2020; Tsakiris et al., 2011). Interestingly, a recent study revealed that individuals who have a fuzzier concept of self are also more susceptible to the rubber hand and body swap illusions (Krol et al., 2019). Hence, it would be useful to assess whether the relationship between self-concept clarity and bodily illusion proneness is mediated by individual levels of interoceptive accuracy, sensibility, and awareness. If so, this would further corroborate our hypothesis that interoception stabilises the concept of self. We argue that this stabilising role of interoception on the self-concept is not limited to the material self, but extends also to the social and spiritual self. As we have seen above, those who are better at detecting interoceptive signals describe themselves as more self-disciplined, more resilient, and more self-centred, as they cope better with social exclusion, tend to prioritise their own needs over those of others, and are more prone to act selfishly. While interoception-driven self-centredness may easily degenerate into egoism, we note that social interactions can in turn modulate interoceptive abilities (for example, through eye contact) and thus make people more or less centred on their own bodily signals. Further studies should also better clarify whether interoception stabilises not only the "Me" but also the "I" part of the self-concept, for example by measuring how much interoceptive accuracy predicts the degree to which one thinks of oneself as a subject endowed with freedom and continuity over time.
Importantly, these signals should not be looked for only in the cardiac domain. For example, a recent study found that spontaneous initiation of action in the classic Kornhuber and Libet tasks is modulated by the phase of the respiratory cycle (Park et al., 2020). Although in that study participants did not report being aware of any link between respiration and their choice to act, it would be interesting to see whether individuals more attuned to respiratory signals also feel more in control of their actions and are more inclined to portray themselves as free agents. If we are right in claiming that interoception serves as a firm foundation for the concept of self, preventing it from becoming too overstretched or too unstable, then two consequences follow. The first is that the classic distinction between 'core' and 'peripheral' self-conceptions (Gergen, 1968; see Markus & Wurf, 1987) is more than a metaphor: core features of the self-concept are indeed rooted in core physiological signals, whereas marginal or ephemeral features attached to the self are not viscerally coloured. Since James (1890), self-concept theorists have been adding ever new layers and partitions to the concept of self, while struggling to find principles of unity and organisation (Stryker, 1989). Interoception may be one of these principles. The second consequence is that there is now further support for the idea that the self-concept is embodied, rather than purely propositional (Schubert & Koole, 2009). Indeed, the evidence reviewed so far points to an even stronger conclusion, namely, that the self-concept is not just embodied, but deeply embodied, as it is based not only on sensorimotor states (Schubert & Koole, 2009), but also on visceral physiological states. While at the present stage we are not able to grasp the details of the process by which sensory, motor and visceral information translates into linguistic descriptions of the self, we can state that whenever we think of ourselves, we think with our body and not just with our mind.

Fig. 1 Interoceptive constraints on each dimension of the concept of self, conceived as "Me" (material, social and spiritual self) and as "I" (pure Ego). Coloured arrows show the known influences of specific interoceptive signals on specific facets of the self-concept. Truncated lines with no terminal arrow indicate hypothetical links between the physiological and conceptual realms that are yet to be investigated.

Conflict of interest The authors declare that they have no conflict of interest.

Ethical approval This article does not contain any studies with human participants performed by any of the authors.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Biologically Inspired Surgical Needle Steering: Technology and Application of the Programmable Bevel-Tip Needle

Percutaneous interventions via minimally invasive surgical systems can provide patients with better outcomes and faster recovery times than open surgeries. Accurate needle insertions are vital for successful procedures, and actively steered needles can increase system precision. Here, we describe how biology inspired the design of a novel Programmable Bevel-Tip Needle (PBN), mimicking the mechanics and control methods of certain insects' ovipositors. Following an overview of our unique research and development journey, this paper explores our latest, biomimetic control of PBNs and its application to neurosurgery, which we validate within a simulated environment. Three modalities are presented, namely a Direct Push Controller, a Cyclic Actuation Controller, and a newly developed Hybrid Controller, which have been integrated into a surgical visual interface. The results of an open loop study, an expert human-in-the-loop study and a non-expert user study show that the Hybrid Controller is the best choice when considering system performance and the ability to lessen strain on the surrounding tissue, which we hypothesise will result in less damage along the insertion tract. Over representative trajectories for neurosurgery using a Hybrid Controller, an expert user could reach a target along a 3D path with an accuracy of 0.70±0.69 mm, and non-expert users 0.97±0.72 mm, both clinically viable results and equivalent or better than the state-of-the-art actively steered needles over 3D paths. This paper showcases a successful example of a biologically inspired, actively steered needle, which has been integrated within a clinical interface and designed for seamless integration into the neurosurgical workflow.

Introduction

The optimal low level control of needles for the application of percutaneous interventions [1] for Minimally Invasive Surgery (MIS) is an open research topic. Whilst passive needles have simpler kinematics than active needles and so are easier to model and control, active needles are better able to adapt to varying environmental conditions intraoperatively, leading to higher accuracy of path following and final placement. They can also follow trajectories along curvilinear 3D paths, which is particularly important when navigating through complex geometry [2]. The first, and earliest, category of steerable needles consists of base-manipulated straight, flexible needles, whereby manipulation of the needle base by the robot, together with the needle-tissue reaction forces, is used to move the needle along a curved path through solid tissue. Often, these needles have bevelled and/or precurved needle tips [3][4][5] to increase the achievable curvature. This technique is commonly used in practice and is accurate for shallow insertions; however, for deeper insertions, steerability is limited, allowing little room for adjustment and area avoidance. If the tip is not accurately placed, a reinsertion may be required, increasing the tissue trauma to the patient [6,7]. A second category includes needles with straight, stiff shafts or flexible shafts equipped with an articulated tip-mounted tool. Articulation of the tip is provided either by pivoting the straight shaft about a fulcrum located at the insertion point into the patient, introducing further tissue deformation, or by transferring the applied force to the tip with a transmission mechanism [8].
An example is the tendon-driven actuated tip placed on a flexible shaft [9]. Hand-held and robotic devices, both in research [10] and available commercially [11], are often used in MIS to access body cavities such as the abdomen. Work has been carried out for MRI capability with a tendon-driven needle [8] to ensure accurate position sensing for intraoperative navigation control. Tip steering has the advantage of a follow-the-leader trajectory, where the shaft follows the path of the tip, and steerability is largely unaffected by the insertion depth. However, the needle-tissue interaction forces are difficult to model and predict, and nonholonomic constraints complicate the control of such designs. To minimise tissue trauma, and allow for more complicated path following, a third category of needles is developing, broadly focusing on steerable, elongated and active devices. An example of a needle design in each of the three categories is shown in Figure 1. Multiple design solutions exist for this category and have been extensively reviewed in [12][13][14][15], including telescopic mechanisms such as the concentric combination of precurved elastic tubes presented in [16,17], and snake-like solutions via both shape memory alloys [18][19][20] and tendon actuation methods [21,22]. These designs also have their limitations; for example, telescopic mechanisms require a priori designs, limiting the variability of the final trajectory in tissue. Shape memory alloy prototypes exhibit problems with heat dissipation, whilst tendon driven designs are problematic to miniaturise. Commonly, designs in this category employ bevel-tip needles. These can be actively steered by using duty-cycling control techniques, whereby the rotation of the needle along its longitudinal axis allows the needle to be steered in the desired direction [5,23]. Such needles are able to follow approximately straight paths as well as curved paths in 3D, although changing the plane of steering can introduce further tissue deformation. Continuous rotation of the needle is required to achieve a straight path, and it is possible that helical motions of the needle occur, leading to a small blending effect on the surrounding tissue. There also exist actively steered cannula and stylet designs, such as the one presented in [24]. The amount of curvature of the needle is dependent on the offset of the bevel tip from the cannula, which is controlled by extending and retracting the stylet. To change from one plane of steering to the other, the bevel tip can be fully retracted into the cannula before being rotated, limiting the amount of tissue damage during such a manoeuvre. Normal duty-cycling methods must be applied when following straight trajectories, as a zero offset configuration approximates a normal bevel-tipped needle. Our bio-inspired Programmable Bevel-tip Needle (PBN) is in the third category. PBNs are inspired by insects that are able to use their slender ovipositors or mouthparts to both drill and steer through harder substrates, such as wood and the bodies of other insects [25]. In [26], the authors used 2D and 3D motion analysis to observe the motion of the three-valve ovipositor of the fruit-fly parasitoid Diachasmimorpha longicaudata (Braconidae). They saw that the insect could steer its ovipositor in any direction relative to the body axis, with direct pushing motions in soft substrates and via reciprocal motions in stiffer substrates.
This reciprocal, or cyclic, motion can reduce the net force of the ovipositor on the surrounding substrate, lowering the risk of buckling. This control motion has inspired the control of PBNs with cyclic motion, in order to reduce the deformation and strain on the surrounding tissue, with a likely reduction in damage along the needle insertion track. PBN designs consist of at least three interlocking, bevel-tipped segments which axially slide relative to one another to achieve 3D steering. The relative offset of the segments formed at the tip creates a bending moment when inserted into soft tissue, and is related to the resulting curvature of the needle. The multi-segment design of this needle is advantageous over other single-segment needles as it lends itself to more advanced actuation methods, hence different motion profiles, and does not suffer from the constraint of following helical paths as standard bevel-tip flexible needles do. Motion profiles can be optimised towards specific parameters, e.g., to minimise tissue deformation. Such needles can be formed by collections of nitinol wires as in [27], or via interlinking plastic segments such as the PBN used in this work. This paper presents research on the underlying motion profiles for PBNs that can be controlled in order to reduce the strain on the surrounding tissue, which could reduce the subsequent tissue damage that occurs along the needle tract, our clinical goal for this specific research. This can be achieved by employing cyclic motion, where each segment moves forward and backwards during a profile, in a similar manner to how some wasps can drill and steer their ovipositors through a harder substrate, such as bark, without buckling. A novel Hybrid Controller is presented and compared against a Direct Push Controller and a fully Cyclic Controller. All three control methods have been integrated into the visual interface the surgeon would use to perform neurosurgery, where the navigation commands are given via a control joystick. This is the first time the Cyclic and Hybrid Controllers have been tested using the surgical interface. Section 2 tells the story of the background research and development undertaken over the last decade to produce the current design of the PBN. Section 3 describes the control methods, their design, and the system setup. Section 4 describes the experiments and results designed to test the controller performance and measure the integration success of the controllers within the surgical interface. Two experiments were undertaken by an expert user, and one experiment involved a user study trial. Section 5 discusses these results, highlighting the areas still to be improved for the controller design and interface integration. Section 6 concludes this work and presents the upcoming trials to evaluate the full clinical setup.

Related Work

PBNs have evolved over a 13-year journey that found its roots in a blue-sky collaboration with Bath University and Prof Julian Vincent, a world-leading biomimeticist and the source of inspiration of this unique design. Thanks to national and international funding, the concept matured from an early proof-of-concept to a medical-grade, pre-production prototype suitable for live use, which is currently at the centre of a large-scale European consortium effort on precision neurosurgery.
The Enhanced Delivery Ecosystem for Neurosurgery (EDEN2020) project is a European Union Horizon 2020 Research and Innovation Action (RIA) involving six universities and two companies, which aims to develop the gold standard for one-stop diagnosis and minimally invasive treatment in neurosurgery. Imperial College has predominately been in charge of developing the needle and robotic platform, with specialist manufacturing support provided by Xograph Technologies Ltd. (Xograph, Stroud, United Kingdom). The PBN design consists of four interlocking, bevel-tipped segments that axially slide relative to one another, as shown in Figure 2. An offset of δ i causes a deflection in the [x, y] frame of the needle, as it is inserted along the z axis. The resulting curvature can be determined by measuring the radius of curvature, R. In [28], Ko et al. first modelled a 12 mm outer diameter PBN in 2D and experimentally calibrated the curvature and offset relationship with open (feed-forward) control. Following this, simulated 2D needle insertions were controlled with closed (feed-backward) control using the chained form representation, originally developed to control nonholonomic non-linear robotic cars. This work found that the needle curvature was approximately proportional to the steering offset. Follow up in vitro experiments are presented in [29]. Using the same needle, kinematic model and controller, experimental results demonstrate 2D trajectory following with 0.68 mm tracking error and 1.45 mm standard deviation. This control method was limited in regards to the magnitude of the control gains and a large initial position perturbation, which caused instability due to a number of control input constraints. In the work presented in [30] a Model Predictive Controller (MPC) considered the input and output constraints arising from the mechanism of the needle, specifically considering the maximum achievable curvature and rate of change of curvature of the needle. The tracking error model was modified such that the non-linear kinematic model of the needle was linearised, and the model was used to convert the optimisation problem into a well known quadratic programming (QP) problem, with input and output constraints represented as inequalities. These works only dealt with the needle considering 2D or planar insertions. Further design work focused on minimising the outer diameter of the PBN, to make it closer to clinically viable and to see the effects of smaller diameter needles on the system performance. Open loop 3D steering of the needle with an outside diameter of 8 mm was presented by Burrows et al. [31]. Results from fourteen insertions into a phantom gelatin with constant offsets in eight principle axes found a linear relationship between the segment offset and needle curvature, in agreement with results previously presented for the larger size needle. In [32], a prototype with a 4 mm outer diameter was presented that was able to steer around bends with a radius of curvature of approximately 70 mm. A path planner was developed to comply with the mechanical constraints of the design to avoid needle buckling whilst avoiding obstacles. This used an iterative based gradient approach to produce smooth paths with bounded curvature gradients. Experimental results for 2D insertions into phantom gelatin using the same MPC controller as in [30] provided 0.1 mm average tracking error, with 0.64 mm standard deviation. However, the path planner in [32] was computationally expensive. 
Whilst appropriate for preoperative path planning, it was limited in its usefulness for online path planning during operations. In [33], a Deformation-as-control (DAC) path planner was developed, considering a bounded curve derivative, giving a non-linear formulation which was then linearised. Using the same MPC controller from [30], experimental results using the extended DAC planner demonstrated that the system could successfully guide the needle in a 2D plane to a moving target with a total mean end position error of 0.27 mm and total mean approach angle error of 0.8°. In [34], further experimentation with the same needle as in [33] was undertaken in a series of in vitro scenarios, for both single and double planar target locations. Feedback via a laser-based 3D visioning system allowed closed loop control, using the same MPC controller as in [30]. By experimental calibration, a linear relationship between offset and curvature was found, as was the case in all previous experiments. These experiments demonstrated that the needle was able to steer in situ to compensate for moving targets due to soft tissue deformation. Overall, the mean positional error to reach the target was 0.46 mm with a standard deviation of 0.17 mm, and the overall mean approach angle error for target orientation was 1.05°, with a standard deviation of 0.32°. These results are similar to those obtained in earlier experiments in phantom gelatin, demonstrating that the path planner and control can be successfully used to guide the needle to multiple moving targets in a plane, whilst taking into account the path constraints. To better understand the deformation caused by the needle on surrounding tissue, Oldfield et al. [35] demonstrated a laser-based digital image correlation (DIC) technique to observe the tissue displacement around a needle as it is inserted into a transparent phantom. Building on this observation technique, it was then shown that cyclic actuation of the individual needle segments could reduce localised target motion and surrounding tissue displacement caused by the needle-tissue interaction. This was shown via FEA simulations and then in open loop gelatin phantom tissue insertions, all in 2D [36]. Specifically, cyclic motion with a pullback of 30% and a 4 mm stroke length resulted in the least tissue deformation. After five years of research and development, the PBN design was well understood and tested in 2D. Via extrusion of a medical-grade polymer, a needle of clinically viable size with a 2.5 mm outer diameter, as shown in Figure 3, was achieved by Xograph. However, for accurate control, a model of the system in 3D was required. To expand the system to be able to handle 3D paths, an initial kinematic model based on a UAV with fixed wings was first developed and presented by Secoli and Rodriguez y Baena [37]. However, this model could not easily account for the material properties of the extruded needle, so a different avenue, namely 3D modelling via a multi-beam approach based on Euler-Bernoulli beam theory, was explored. Finite element simulations for known loads were used to validate the multi-beam deflection model in [38], and the maximum achievable curvature was found to be 0.0192 ± 0.0014 mm−1 from a series of phantom trials in gelatin. These trials helped to formulate the forward model of the needle, in order to calculate the expected curvature from the known offset configuration.
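To make this forward mapping concrete, the short sketch below evaluates a purely illustrative offset-to-curvature calibration and the corresponding tip pose along a constant-curvature arc. The linear gain (chosen so that a 20 mm offset gives roughly the 0.012 mm−1 maximum curvature used later in this paper) and the function names are assumptions for illustration only; they stand in for, and do not reproduce, the calibrated multi-beam model of [38], and for the 2.5 mm needle the real offset-curvature relationship is non-linear.

```cpp
// Illustrative sketch only: a linear offset->curvature gain and a
// constant-curvature arc are assumed; this is not the multi-beam model.
#include <cmath>
#include <cstdio>

// Hypothetical calibration: curvature (1/mm) produced per mm of segment offset,
// scaled so that a 20 mm offset gives ~0.012 1/mm.
constexpr double kCurvaturePerMmOffset = 0.012 / 20.0;

// Forward model: steering offset (mm) -> expected curvature (1/mm).
double curvatureFromOffset(double offset_mm) {
  return kCurvaturePerMmOffset * offset_mm;
}

// Tip deflection and insertion advance after travelling an arc length L (mm)
// at constant curvature kappa (1/mm).
void tipPoseOnArc(double kappa, double L, double* deflection_mm, double* advance_mm) {
  if (std::fabs(kappa) < 1e-9) {   // effectively straight insertion
    *deflection_mm = 0.0;
    *advance_mm = L;
    return;
  }
  const double R = 1.0 / kappa;    // radius of curvature (mm)
  *deflection_mm = R * (1.0 - std::cos(L / R));
  *advance_mm = R * std::sin(L / R);
}

int main() {
  const double offset = 10.0;      // mm of relative segment offset
  const double kappa = curvatureFromOffset(offset);
  double y = 0.0, z = 0.0;
  tipPoseOnArc(kappa, 60.0, &y, &z);  // ~60 mm paths are used in the experiments
  std::printf("kappa = %.4f 1/mm, deflection = %.1f mm, advance = %.1f mm\n",
              kappa, y, z);
  return 0;
}
```

In the real system, the mechanics-based model of [38] would replace the linear gain.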
However, for closed loop control, it was necessary to formulate the inverse model, so that an operator could provide steering commands and the system could respond with the appropriate offset configuration to achieve these. The inverse model, determining the required offset to generate a desired curvature, was the subject of the work presented in [39]. This used an optimisation algorithm to find a numerical solution based on the proposed steerability measures of the steerability index (analogous to the manipulability index of serial manipulators and geometrically proportional to the area of the steerability ellipse in curvature space) and the steering condition number (a dimensionless value representing how far from or close to a singularity condition the needle is). As PBN four-segment needles are over-actuated, an optimisation technique is required to find the configuration that results in the highest steering curvature with the smoothest configuration changes. Importantly, the relationship between offsets and curvature for the 2.5 mm outer diameter needle was found to be non-linear, in disagreement with previous results for the larger diameter needles, highlighting the need to use more complex models for the forward and inverse calculations. Recent work on a path planner has used these models for the 2.5 mm outer diameter needle in order to create 3D paths that are optimised for the needle kinematics and the start and end pose constraints of the neuroanatomy. This is important, as the effectiveness of drugs (for instance, those used in chemotherapy) has been linked to the precision of the infusions of the drug at the tumour site [40]. To date, the PBN is of a clinically viable size, soft, steerable, and MRI compatible. It has been fully modelled, and the control and path planning tools have been created in order to allow optimised 3D trajectories to be achieved through brain tissue. The remaining sections of this paper focus on the control modality of the PBN to move the individual segments of the PBN to the desired offset configuration.

Control Methods

There are two main methods to control and move the catheter segments; the first is via a cyclic motion, whereby each segment is extended forward and then backward for each period of motion control. This controller is bio-inspired by how wasps can drill into bark using their ovipositors. The second method is direct motion, whereby each segment is extended forward only. These modes are further described in Section 3.2. Previous work presented in [41] demonstrated the performance of a cyclic controller with 30% pullback for planar insertions of a flexible, medical-grade PBN with an outer diameter of 2.5 mm. Thirty per cent pullback was chosen as this value was found to cause the least amount of tissue deformation [42], which is hypothesised to produce less tissue damage. The main limitations of this work were that the control planning window (cyclic period) of the actuation profile was 8 s, that the model of the needle assumed a linear relationship between the offset of the needle segments and the resultant curvature of the needle [28], and that the trajectories were limited to planar paths. However, it successfully showed that the tissue deformation around the needle track was significantly reduced when compared to the direct controller and that there was no significant difference in target reaching performance.
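As an aside, the inverse problem described at the start of this section (turning a commanded curvature into a segment offset) can be illustrated with a minimal one-dimensional search. The saturating offset-curvature map below is a made-up stand-in for the needle model, and bisection over a single scalar offset is only a toy version of the four-segment, steerability-based optimisation in [39].

```cpp
// Illustrative sketch: invert an assumed, monotonic offset->curvature map by
// bisection. The map and its constants are hypothetical; the real inverse
// model optimises a full four-segment configuration.
#include <cmath>
#include <cstdio>

// Hypothetical saturating forward map: curvature (1/mm) vs offset (mm).
double curvatureFromOffset(double offset_mm) {
  const double kappa_max = 0.012;    // 1/mm, maximum curvature without buckling
  const double offset_scale = 15.0;  // mm, hypothetical saturation constant
  return kappa_max * std::tanh(offset_mm / offset_scale);
}

// Inverse map: offset (mm) whose predicted curvature matches the command.
double offsetForCurvature(double kappa_desired) {
  double lo = 0.0, hi = 20.0;        // mm, mechanical offset range
  for (int i = 0; i < 60; ++i) {     // bisection works because the map is monotonic
    const double mid = 0.5 * (lo + hi);
    if (curvatureFromOffset(mid) < kappa_desired) lo = mid; else hi = mid;
  }
  return 0.5 * (lo + hi);
}

int main() {
  const double kappa_cmd = 0.008;    // 1/mm, e.g. a joystick curvature command
  const double offset = offsetForCurvature(kappa_cmd);
  std::printf("offset = %.2f mm -> kappa = %.4f 1/mm\n",
              offset, curvatureFromOffset(offset));
  return 0;
}
```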
The cyclic controller was extended in [43] to handle 3D trajectories with a cyclic period of 2 s, increasing the control frequency four-fold, and presented the design for position and velocity profiles of simulated motors to drive the four needle segments in SIMULINK and MATLAB 2017b (MathWorks Inc., Massachusetts, USA). The results from this work showed that the cyclic controller under-steered when following curvilinear paths, the effects of which were more pronounced on 3D trajectories. Future work from this paper recommended two avenues of improvement, both of which are explored in this work:
1. the human operator can correct the performance in real-time, keeping the open loop cyclic controller design as is; and
2. a combination of direct and cyclic motion profiles can be used according to the magnitude of the offset configuration change, to benefit from the timely response of the direct controller and the cyclic profiles of the cyclic controller.
This work seeks to validate and optimise a controller via a full system simulation in C++ of a four-segment PBN with an outer diameter of 2.5 mm, which is modelled according to Watts et al. [38] and does not assume a linear relationship between offset and resulting segment curvature. To understand the optimal control profile, three control modalities are presented, namely a cyclic actuation controller (CAC), a direct push controller (DPC), and a hybrid controller (HC), which combines both the cyclic and direct control motions according to the desired offset from the higher level controller. Experiments were undertaken for open loop curvilinear trajectories in 3D to evaluate the path following and target reaching performance of each of the control modalities, followed by a closed, human-in-the-loop, single expert user study to evaluate the same metrics. Finally, a small human user trial was performed to understand whether the control modality affected the user when using the neurosurgical human machine interface. The following hypotheses are made:
Hypothesis 1. The CAC will achieve significantly lower performance than the DPC.
Hypothesis 2. There exists an HC that achieves the same performance metrics as the DPC while still using CAC profiles for more than 75% of the insertion time.
Hypothesis 3. Users will not notice the difference in the choice of underlying control modality when using the system's visual interface.

Controller Design

Controlling the catheter requires continuous adjustment of the relative offsets between the segments. A direct push controller (DPC), first described in [30], creates the desired offset by pushing the segments forward at different speeds, though the net speed of the catheter through the tissue is pre-defined and constant. When no offset is requested, the segments each move forward at the net velocity V_net; however, when an offset is requested, the relevant segments move forward at a higher velocity, V_fwd^DPC. The cyclic actuation controller (CAC), first described in [41], moves the catheter by simultaneously pushing and retracting the different segments, while keeping the pre-defined and constant net speed of the catheter. This motion profile reduces the strain, and hence the displacement, of the surrounding tissue [36]. In a cyclic period, each segment moves forward and backward for part of the time; the forward and retraction speeds are bounded and constant, but the period for which each is active in a cycle is variable, depending on the desired offset to be achieved.
This implementation is limited in that a new command can only be processed once a cyclic period has finished, placing an upper bound on the control frequency, and in that it takes longer to achieve the desired offset compared to the DPC [43]. However, as the net speed is constant, the insertion takes the same overall time as for the DPC on the same path. The main equations governing the motion of the CAC are as follows. The velocity profile of one segment, V_Seg, over a full actuation cycle of T_c seconds consists of a period in which the segment extends at V_fwd^CAC for T_fwd^CAC seconds, followed by a period in which it retracts at V_ret^CAC for T_ret^CAC seconds, where T_fwd^CAC = t_b − t_a. Here, t_a and t_b define when in the cyclic period the segment extends forward, which is related to the segment order of movement that is defined a priori. To create a relative offset, a stroke modification factor, S_f, is introduced. This percentage value, ranging 0 ≤ S_f ≤ 1, allows the maximum time a segment moves forward to be increased by up to the value of the factor; consequently, the retraction period of this segment is reduced. The full mathematical model of the CAC is presented in detail in [41,43]. The control parameters used for this work are summarised in Table 1. The difference between the DPC and the CAC segment motion profiles needed to achieve the same offset configuration is shown in Figure 4. A hybrid controller (HC) combines the two approaches presented previously. When the desired offset, δ, is less than or equal to a predefined threshold value, δ_t, the CAC is active. This means that the catheter will move through the tissue using the cyclic, low tissue displacement method. However, when the desired offset is higher than the threshold, the DPC is activated, achieving the desired configuration more quickly than the CAC.

System Simulation

The Enhanced Delivery Ecosystem for Neurosurgery in 2020 (EDEN2020) project has developed a mechatronics driver for a four-segment PBN [44], where each segment is actuated by a motor connected to a linear stage that is in turn fastened to a flexible transmission line which can push or pull each segment. For this work, a full simulation of the mechatronics system has been developed in C++ in order to have a test platform for different control modalities before implementing the most optimal controller. The simulated modules have been connected with the system architecture, and communication is handled by ROS [45]. Brief descriptions of each of the modules for the system setup in simulation, as depicted in Figure 5, are given below.

Actuation: The physical system uses Maxon high-precision DC brushed motors (DC16XS) with a planetary gearhead with a reduction of 35:1, each equipped with a rotary magnetic encoder (1024 pulses/revolution). Each motor has a mechanical time constant of 8.57 ms, meaning it reaches 63.2% of its maximum no-load speed of 10,000 rpm in 8.57 ms. The motors are connected to 1 mm pitch linear stages and, taking into account the gearing, this means it takes 8.57 ms to reach a linear speed of 3 mm/s. The DPC has a control frequency of 5 Hz, and the CAC has a control frequency of 20 Hz. For each, the net speed of the catheter is 1 mm/s, and the maximum speed of any segment when moving to create an offset is 5 mm/s for the DPC and 5.5 mm/s for the CAC; thus, the response time of the motor can be considered to be very high.
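Because the display equations for the CAC profile do not survive in the extracted text above, the sketch below gives one possible reading of the segment velocity profile and of the hybrid mode selection, for illustration only. The waveform (extend at V_fwd^CAC within a forward window that the stroke modification factor S_f stretches, retract at V_ret^CAC for the rest of the cycle), the retraction speed and the example numbers are assumptions based on the description above; they are not the published CAC equations of [41,43], and the values are not tuned to reproduce the 1 mm/s net speed constraint.

```cpp
// Illustrative sketch of a CAC-style segment velocity profile and the hybrid
// controller's mode selection. Waveform and parameter values are assumptions.
#include <cstdio>

struct CacParams {
  double T_c   = 2.0;   // s, cyclic period used for the 3D CAC
  double t_a   = 0.0;   // s, start of this segment's forward window
  double t_b   = 0.5;   // s, nominal end of the forward window
  double v_fwd = 5.5;   // mm/s, bounded forward speed
  double v_ret = -4.5;  // mm/s, bounded retraction speed (hypothetical value)
};

// Velocity of one segment at time t within the cycle, with a stroke
// modification factor 0 <= S_f <= 1 that stretches the forward window.
double segmentVelocity(double t, double S_f, const CacParams& p) {
  const double t_fwd_end = p.t_b + S_f * (p.T_c - p.t_b);  // extended window
  if (t >= p.t_a && t < t_fwd_end) return p.v_fwd;          // extend
  return p.v_ret;                                           // retract otherwise
}

// Hybrid controller: cyclic profiles for small offset requests, direct push
// when the requested offset exceeds the threshold delta_t (5 mm in this work).
enum class Mode { Cyclic, DirectPush };
Mode selectMode(double requested_offset_mm, double delta_t_mm = 5.0) {
  return (requested_offset_mm <= delta_t_mm) ? Mode::Cyclic : Mode::DirectPush;
}

int main() {
  const CacParams p;
  for (double t = 0.0; t < p.T_c; t += 0.25)
    std::printf("t = %.2f s  v = %+.1f mm/s\n", t, segmentVelocity(t, 0.3, p));
  std::printf("offset  3 mm -> %s\n",
              selectMode(3.0) == Mode::Cyclic ? "cyclic" : "direct push");
  std::printf("offset 12 mm -> %s\n",
              selectMode(12.0) == Mode::Cyclic ? "cyclic" : "direct push");
  return 0;
}
```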
In simulation, each motor velocity profile has been modelled with instantaneous acceleration, and the motor position is determined via trapezoidal numerical integration of the velocity values, with a time step of 0.05 s.

Sensing: Pose measurements of the catheter are calculated on the physical system via either embedded electromagnetic (EM) or fibre Bragg grating (FBG) optical sensors. In simulation, the pose of the catheter is calculated using the forward mechanics-based model of the catheter as detailed in [38]. This model estimates the resulting curvature of the needle given the mechanical properties of the catheter and the relative offset between the four segments. Gaussian noise is applied to the simulated sensor readings, with a known mean and standard deviation. The error values given by the manufacturer of a single EM Aurora sensor (NDI Medical) are 0.7 mm in position and 0.2 degrees in rotation. These are 5-DOF sensors; in the physical system, each segment is sensorised with one, and the 6-DOF pose is estimated from the fused measurements. For these experiments, a standard deviation of 0.1 mm about a mean of 0 mm has been applied for the position measurements, and a standard deviation of 0.7 degrees about a mean of 0 degrees has been applied for the rotational measurements.

User Input: The user steers the catheter using a foot pedal for on/off motion, and a 2-DOF joystick to control the curvature left/right and up/down, as shown in Figure 6, in a similar manner to the joystick of a plane.

Visual Interface: This interface has been integrated into the commercial neurosurgical planning and intraoperative software neuroinspire™ (Renishaw plc). The standard release of the software provides three orthogonal views of the brain; a new fourth window that visualises the 3D navigation view has now been added. In this view, the user sees a visual interface showing the preoperative MRI and CT scans of the patient, as well as a navigational "First Person View" of the catheter position in the surrounding anatomy, in order to steer the catheter along the desired path. In this window, the user can also choose to see a third-person "Overview View" of the environment. They are shown visual cues to help this process: the desired path; way-points as rings that they should steer through, the colour of which reflects the Euclidean error in position from the path; the expected curvature of the future catheter track as commanded by the joystick (green overlay); and the actual curvature of the future catheter track as predicted by the forward model (blue overlay). Further details of the interface are published in [46]; an image of the online mode "First Person View" is depicted in Figure 6, and the path and overlays in "Overview View" mode are shown in Figure 7.

Experimental Design

Three experiments were set up to test the different control modalities when moving the catheter along a desired path. Ethical approval for the experiments was given by the Imperial College London Joint Research Compliance Office (ICREC 18IC4564). The paths were created using the path planner presented in [47]. Based on the Adaptive Hermite Fractal Tree (AHFT) method, the path planner generates 3D obstacle-free trajectories that satisfy curvature constraints given a specified start and target pose. The obstacle map is generated from segmented anatomy from MRI data sets of the human brain.
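As a small illustration of the sensing model described above, the sketch below perturbs a simulated tip pose with zero-mean Gaussian noise using the 0.1 mm and 0.7 degree standard deviations quoted for the simulation. The simple single-angle pose structure and the function names are assumptions made for the example; the actual simulator fuses several 5-DOF measurements into a 6-DOF pose.

```cpp
// Illustrative sketch: corrupt a simulated tip pose with zero-mean Gaussian
// noise (0.1 mm position, 0.7 deg rotation). The pose struct is an assumption.
#include <cstdio>
#include <random>

struct TipPose {
  double x_mm, y_mm, z_mm;  // tip position in the needle frame
  double tilt_deg;          // simplified single-angle heading, for illustration
};

TipPose addSensorNoise(const TipPose& truth, std::mt19937& rng) {
  std::normal_distribution<double> pos_noise(0.0, 0.1);  // mm
  std::normal_distribution<double> rot_noise(0.0, 0.7);  // degrees
  TipPose measured = truth;
  measured.x_mm += pos_noise(rng);
  measured.y_mm += pos_noise(rng);
  measured.z_mm += pos_noise(rng);
  measured.tilt_deg += rot_noise(rng);
  return measured;
}

int main() {
  std::mt19937 rng(42);                        // fixed seed for repeatability
  const TipPose truth{1.0, -0.5, 30.0, 2.0};   // hypothetical ground-truth pose
  for (int i = 0; i < 3; ++i) {
    const TipPose m = addSensorNoise(truth, rng);
    std::printf("sample %d: (%.2f, %.2f, %.2f) mm, tilt %.2f deg\n",
                i, m.x_mm, m.y_mm, m.z_mm, m.tilt_deg);
  }
  return 0;
}
```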
The experimental hardware setup for all experiments consisted of a laptop showing the visual interface, the input joystick (both shown in Figure 6), and a foot pedal for on/off control.

Experiment 1: Open Loop Control

Two trajectories were generated: a single curve planar trajectory at maximum curvature, and a double curve 3D trajectory at maximum curvature. The maximum curvature was ρ = 1/r = 0.012 mm−1, which was experimentally achieved without buckling and corresponds to a 20 mm offset between the segments. The same open loop commands were given to each of the controller modalities to highlight the differences in performance, if any, between them. The curvatures k_1 and k_2 (mm−1) are defined in the two ortho-normal [x, y] planes, respectively, as in Figure 2, together with the corresponding radii of curvature r (mm) for the single curve trajectory. Results from [43] show that the CAC exhibited under-steering, particularly over 3D trajectories. This effect is replicated here, as can be seen from the trajectory tracks in Figures 8 and 9. Indeed, we see increasing path following and target reaching errors for both the single and double bend trajectories as the HC threshold parameter increases from 0 (DPC) to 20 mm (CAC) (see Figures 10 and 11). In the authors' experience, planned trajectories through real neuro-anatomy generally only require one or two shallow curves to reach the target from the entry point, and so the per cent of time the CAC is activated for these trajectories is representative of a realistic scenario. The large control inputs requiring offset configuration changes above the threshold value normally occur when the user needs to fully change direction, which would happen at the beginning of each path and at the junction of the double bend curve. At other times, only small adjustments are necessary, meaning an HC is attractive to use as the CAC can be active for the majority of the time, and the DPC can be activated only when a large offset is requested. In Figures 10 and 11, we can see there is a steady increase in path following and target reaching error for both the position and orientation for the single and double bend trajectories. Notably, the target positioning error even for the best case, the DPC, is still >5 mm, a clinically unacceptable result highlighting the need for closed loop control. The cyclic controller is active for more than 90% of the time when the HC is used, and the HC has similar performance to the DPC at low threshold values δ_t ≤ 5 mm. In the real system, there can be some misalignment of the catheter segments, ∼2 mm, during an insertion due to the elasticity of the plastic segments. For this reason, an HC threshold value of δ_t = 5 mm is chosen for Experiments 2 and 3, as this is larger than any expected misalignment with a safety factor of 2 and gives the next best results for the HC.

Experiment 2: Expert Single-User Closed Loop Control

Three paths were generated by the path planner in order to provide trajectories the catheter could reasonably be expected to follow during a neurosurgical procedure, each with a different target and initial insertion point into the skull, corresponding to paths approximately 60 mm in length; Path 2 in overview mode is shown in Figure 7. A single, expert user was asked to follow these curves to the best of their ability in order to reach the target. Three low level modalities were tested: the DPC, the CAC, and the HC with the threshold parameter δ_t = 5 mm, which was chosen based on the results of Experiment 1.
As the needle moves at a constant net speed of 1 mm/s, each insertion took the user approximately 1 min to complete once they had started the motion via the foot pedal. The user made five insertions for each of the controller modalities over each path. The underlying low level control modality was changed without the user's knowledge, following a Latin squares assignment. The user was asked to achieve the following objectives in decreasing order of priority:
1. Obj 1: Reach the target pose with the catheter tip.
2. Obj 2: Follow the desired path.
Results from [41] show that, for planar insertions of a single or double bend trajectory, there was no significant difference between the target reaching or path following errors for the DPC or the CAC when a high level MPC controller was employed. However, the MPC used the linear model of the needle to simplify the model calculations, and for the purpose of this paper we use human-in-the-loop high level control, as it is not currently clinically accepted to have a fully autonomous system performing neurosurgery. An expert user undertook five insertions for each path using the DPC, the HC with a threshold of δ_t = 5 mm, and the CAC; the trajectories followed are shown in Figure 12 and the performance results in Figure 13, with details in Table 2. After testing for normal distributions, a one-way between-modalities ANOVA was conducted for each of the performance metrics: path following position and orientation errors and target reaching position and orientation errors. There was a significant difference for the path position error (F(2, 42) = 65.63, p < 0.001). Post hoc comparisons using the Tukey HSD test indicated that the mean path position error for the CAC (M = 0.999, SD = 0.038) was significantly higher than those for the DPC (M = 0.433, SD = 0.038) and the HC (M = 0.498, SD = 0.038). Taken together, these results show the CAC displays higher path following position error compared to the DPC and the HC, but that all modalities result in similar performance in reaching the target pose. The average percentage of cyclic mode employed by the HC over all trials and all paths was 79.1 ± 6.6%.

Experiment 3: Multi-User Trial

To validate the real-time performance of the visual interface and teleoperated joystick for the PBN navigation system when using different low level controller modalities, a human study trial (n = 5) measured the performance of multiple users under controlled conditions. The full experiment protocol took each user approximately 1.5 h to complete. A path trajectory was generated for the training round, and each user completed five insertions using the DPC and five insertions using the CAC in order to become familiar with the system. After training, the same three trajectories as in Experiment 2 were used for the user trial. Each user completed three insertions for each of the three modalities (DPC, HC, and CAC) over the three paths based on a Latin square assignment, giving nine insertion performance results per modality per user, or a total of 45 trials per modality. The hybrid controller threshold parameter was set to δ_t = 5 mm as in Experiment 2, and users were asked to achieve the same objectives as in Experiment 2. The results from the user trial of non-experts (n = 5) are given in Figure 14 and Table 3. All users were right-handed and between the ages of 25 and 35.
After testing for normal distribution, a one-way between-modalities ANOVA was conducted for each of the performance metrics: path following position and orientation error and target reaching position and orientation error. There was a significant difference for the means of the target position error (F(2, 132) = 15.48, p < 0.001) and the path position error (F(2, 132) = 14.34, p < 0.001). Post hoc comparisons using the Tukey HSD test indicated that the mean target position error for the CAC (M = 2.055, SD = 0.154) was significantly higher than those for the DPC (M = 1.051, SD = 0.154) and the HC (M = 0.971, SD = 0.154), and that the mean path position error for the CAC (M = 1.110, SD = 0.061) was significantly higher than those for the DPC (M = 0.762, SD = 0.061) and the HC (M = 0.670, SD = 0.061). Taken together, these show that the users could not compensate for the extra errors introduced by the CAC, and that the CAC modality had higher target reaching, and path following, position errors than the DPC. There was no significant difference in the performance of any metric between the DPC and the HC. The average percentage of cyclic mode employed by the HC over all trials and all paths was 77.0 ± 10.3%. Users were asked to complete a short survey when they finished the experiment, asking which modality they found the easiest to use and for which they thought their performance was the best, as well as an open-ended question to leave other comments. Sixty per cent of users did not notice a difference in difficulty between the modalities, and 40 per cent thought the DPC was the easiest to use. Eighty per cent of users did not notice a difference in performance between the DPC and the HC, and 20 per cent of users thought their performance was best when using the DPC. Comments from the users noted that 'The CAC was particularly hard to control' and that 'They were frustrated the needle was turning too slowly when using the CAC'. These initial qualitative results show that there is little difference in user perception when using the DPC or the HC, but that the CAC has a notable performance loss (agreeing with the quantitative results).

Discussion

The results of Experiment 1 highlight the trade-off between under-steering, and hence performance, and the per cent of time the CAC is active. As the CAC can reduce tissue deformation, which is hypothesised to reduce tissue damage, it will be necessary to quantify the magnitude of this effect during in vivo trials via functional analysis of the tissue tract after the surgery. In this way, the optimal HC threshold value can be chosen. The results of Experiments 2 and 3 indicate that the CAC has significantly lower performance than the DPC (validating H1). Here, performance is measured by our metrics of the needle tip error with respect to the target position and orientation and to the path position and orientation. The CAC had significantly higher error in path position for the expert user, and in target and path position for the non-expert users. However, it should be noted that the results from Experiment 2 are only from one user, and it may be too early to draw this conclusion for expert users. In Experiment 2, the result of the CAC having significantly higher error only in the path following position when compared to the DPC or the HC was surprising.
It highlights that the expert user was able to mitigate the error introduced by the under-steering of the CAC to achieve the target pose as per the first objective, but possibly at the expense of the path following metric, which was the second objective. The user trial in Experiment 3 shows that less expert users are not able to compensate for the CAC error in target reaching performance, hence an HC could also be attractive, as less training may be required for users to achieve expert results. With δt = 5 mm, we found an HC with no significant performance differences compared to the DPC when used by both an expert and a group of non-experts, while employing CAC motion profiles more than 75% of the time (validating H2). From Experiment 3, H3 was not validated: while the users did not notice a difference between the DPC and the HC control modalities, 40% noticed the performance difference of the CAC and found it harder to use. This supports the paper's ultimate conclusion that the HC modality is the best choice of control for the PBN. Future work should further explore the HC parameters to see whether even better performance can be achieved with different threshold values, and evaluate a larger group of expert users.

All modes show orientation errors, with the CAC showing the worst target orientation/heading error of 12.34 ± 6.58, and the DPC 9.60 ± 6.24. The accuracy of the target heading can be linked to how users prioritise the task. As they reach the end of the path, if there is a position error, the user can turn directly towards the target, lowering the position error but increasing the heading error. This is supported by the observation that the path orientation error, i.e. how well the user keeps the path heading while tracking the desired path, is around half of the target heading error. To address this, further instructions should be given to the users so that they prioritise heading errors more highly at the target, and visual overlays or haptic indicators could be developed in the user interface to help them better understand the magnitude of the heading error, as this can be difficult to judge from the current setup.

Conclusions

Following a 10-year research and development journey, this paper summarises the main milestones associated with a novel, biologically inspired needle steering system, followed by our latest control approaches, validated in silico. These experiments highlight how cyclic motion control, which is also biologically inspired, can be delivered optimally through a blended approach, where the HC threshold value must be optimised under appropriate surgical conditions. This will be the focus of future research on the low-level control of the PBN, as well as further development of the Human-Robot Interface, both of which must be evaluated by clinicians. The EDEN2020 system is currently being used for in vivo trials with an animal (ovine) model, expected to be completed in Q1 of 2021. The trial will evaluate the safety of the system, as only a gadolinium contrast solution is being injected into healthy tissue. It will also evaluate the accuracy of the path following and target reaching performance based on intraoperative sensing (FBG fibres and real-time ultrasound measurements) and on pre- and postoperative CT and MRI imaging comparisons.
Upon completion, a pre-commercial prototype of the EDEN2020 platform will have been fully developed under a quality management system, and the safety data from the trial will help drive the commercialisation of the project and inform future efficacy trials.
2020-12-20T06:18:05.425Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "3db0e912f38b758fc3ccf8021e622b6e9a713d6d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2313-7673/5/4/68/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4dd9f8e44e3faf6d12499a851be3c357d3f91c92", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
16335861
pes2o/s2orc
v3-fos-license
Antiferromagnetic spin ladders: crossover between spin S = 1/2 and S = 1 chains We study a model of two weakly coupled isotropic spin-1/2 Heisenberg chains with an antiferromagnetic coupling along the chains. It is shown that the system always has a spectral gap. For the case of identical chains the model in the continuous limit is equivalent to 4 decoupled noncritical Ising models. For this case we obtain the exact expressions for the asymptotics of spin-spin correlation functions. When the chains have different exchange integrals the spectrum at low energies is well described by the O(3) nonlinear sigma model. We discuss the topological order parameter related to the gap formation and give a detailed description of the dynamical magnetic susceptibility. Introduction It has been widely recognized that one-dimensional antiferromagnets with halfinteger and integer spins have dramatically different excitation spectra. The original theoretical prediction by Haldane [1] that Heisenberg chains with halfinteger spin are gapless, whereas those with integer spin are gapped, has been confirmed experimentally [2]. To gain insight into the physics underlying this beautiful result, one may study systems intermediate between spin S = 1/2 and S = 1. The simplest of these is the "Heisenberg spin ladder", which has isotropic couplings J || along the chains and J ⊥ between them. Although such systems with both antiferromagnetic [3], [4], [5], [6], [7] and ferromagnetic [8], [9], [10], [11] interchain couplings have been the subject of considerable recent theoretical interest, certain problems remain unresolved leaving room for our contribution. In this paper we present our analysis of a weakly coupled spin ladder J || >> |J ⊥ |. We have found this limit more interesting from the theoretical point of view for it allows us to study the crossover between the gapless spin S = 1/2 regime and the strong coupling limit corresponding to the S = 1 chain. Unfortunately, our results have only qualitative validity for the known experimental realizations of double chain ladders ( Sr n−1 Cu n+1 O 2n [12] and (VO) 2 P 2 O 7 [13]) where both exchange integrals are of the same order. The paper is organized as follows. In Section 2 we derive the continuous version of the spin ladder Hamiltonian for the case of identical chains. To achieve this we employ the bosonization approach, but the resulting effective theory is most simply represented in terms of fermions. In this representation the effective Hamiltonian of the spin ladder contains four species of weakly interacting real fermions 1 . Three of these modes comprise a degenerate triplet and the remaining one lies above having a mass approximately three times as big. The magnitude of the mass gaps is of the order of the interchain exchange. For any sign of the interchain coupling, the leading asymptotics of the correlation functions are determined by the triplet of Majorana fermions as for the S = 1 chain [14]. The fact that the low energy sector of the model is essentially a free theory makes it possible to obtain non-perturbative expressions for asymptotics of all correlation functions. This is done in Section 3. In Section 4 we discuss a situation where the 1 The difference between ordinary (Dirac) and real (Majorana) fermions is that the latter ones have only positive energies ǫ(p) = p 2 + m 2 . Therefore one can always describe one Dirac fermion as a superposition of two Majorana fermions. ladder consists of inequivalent chains. 
It is shown that, in the limit when exchange integrals on the chains strongly differ, the low-lying excitations are described by the O(3)-nonlinear sigma model. The adequacy of this treatment is guaranteed by the fact that this sigma model has a small bare coupling constant. As we have pointed out, the excitation spectrum of the spin ladder always has a gap. The appearence of such a gap is usually related to some symmetry breaking process. In Section 5 we discuss this process and derive the corresponding order parameter which happens to be nonlocal (the so-called string order parameter). The paper has a Conclusion and two Appendices where we provide technical details about bosonization and string order parameters. 2 Coupling of identical chains; the Abelian bosonization. In this Section we apply the Abelian bosonization method to the spin-ladder model H = J j=1,2 n S j (n) · S j (n + 1) + J ⊥ n S 1 (n) · S 2 (n) (1) describing two antiferromagnetic (J > 0) spin-1/2 Heisenberg chains with a weak interchain coupling (|J ⊥ | ≪ J ) of arbitrary sign. Abelian bosonization is a wellknown procedure, but for the sake of completeness we briefly overview it in the Appendix A. In the continuum limit, the critical properties of isolated S = 1/2 Heisenberg chains are described in terms of massless Bose fields φ j (x) (j = 1, 2): where the velocity v s ∼ J a 0 and Π j are the momenta conjugate to φ j . The interchain coupling is expressed in terms of the operators J j (x) and n j (x) which represent, respectively, the slowly varying and staggered parts of local spin density operator and are defined in the Appendix A. According to (106), the current-current term in (3) is marginal, while interaction of the staggered parts of the spin densities is strongly relevant. So we start our analysis by dropping the former term (its role will be discussed later). Using then bosonization formulas (105) for n j (x), we get and introduce linear combinations of the fields φ 1 and φ 2 : The total (φ + ) and relative (φ − ) degrees of freedom decouple, and the Hamiltonian of two identical Heisenberg chains transforms to a sum of two independent contributions: In the above derivation, the J 1 · J 2 -term has been omitted as being only marginal, as opposed to the retained, relevant n 1 · n 2 -term. It is worth mentioning that there are modifications of the original two-chain lattice model for which the J 1 · J 2 -term does not appear at all in the continuum limit, and mapping onto the model (6) becomes exact. In two such modifications, the interchain coupling is changed to or The structure of these models explains why the low-energy physics of two weakly coupled Heisenberg chains must not be sensitive to the sign of the interchain coupling J ⊥ . This conclusion is in agreement with recent results of Ref.7. Let us turn back to Eqs. (7) and (8). One immediately realizes that the critical dimension of all the cosine-terms in Eqs. (7), (8) we get For future purposes, we introduce two real (Majorana) fermion fields to represent H + as a model of two degenerate massive Majorana fermions where Apart from the usual mass bilinear term ("CDW" pairing), the Hamiltonian H − also contains a "Cooper-pairing" term originating from the cosine of the dual field: We introduce two Majorana fields The Hamiltonian H − then describes two massive Majorana fermions, ζ R,L and ρ R,L , with masses −m and 3m, respectively: Now we observe that ξ 1 ≡ ξ, ξ 2 ≡ η and ξ 3 ≡ ζ form a triplet of Majorana fields with the same modulus of mass m. 
There is one more field ρ with a larger mass, 3m. So, with The The resulting model H m [ ξ ] was suggested as a description of the S = 1 Heisenberg chain by Tsvelik ([14]). This equivalence follows from the fact that, in the continuum limit, the integrable S = 1 chain with the Hamiltonian is described by the critical Wess-Zumino model on the SU(2) group at the level k = 2, and the latter is in turn equavalent to the model of three massless Majorana fermions, as follows from the comparison of conformal charges of the corresponding theories: The k = 2 level, SU(2) currents expressed in terms of the fieldsξ a are given by When small deviations from criticality are considered, no single-ion anisotropy (∼ D(S z ) 2 , S = 1) is allowed to appear due to the original SU(2) symmetry of the problem. So, the mass term in (21) turns out to be the only allowed relevant perturbation to the critical SU(2), k = 2 WZW model. Thus, the fields ξ a describe triplet excitations related to the effective spin-1 chain. Remarkably, completely decoupled from them are singlet excitations described in terms of the field ρ. Another feature is that this picture is valid for any sign of J ⊥ , in agreement with the effective lattice models (9) and (10) which we actually are dealing with. Since the spectrum of the system is massive, the role of the so far neglected (marginal) part of the interchain coupling (3) is exhausted by renormalization of the masses and velocity. Neglecting the latter effect, this interaction can be shown to have the following invariant form which, after transforming back fromξ a to ξ a , reads In a theory of N massive Majorana fermions, with masses m a (a = 1, 2, ..., N) and a weak four-fermion interaction renormalized massesm a estimated in the first order in g are given bỹ Using (25) and (26), we find renormalized values of the masses of the triplet and singlet excitations: 3 Correlation functions for the identical chains. Since the singlet excitation with m s = 3m does not carry spin, its operators do not contribute to the slow components of the total magnetization. The latter is expressed in terms of the k = 2 SU(2) currents (23): Therefore the two point correlation function of spin densities at small wave vectors (|q| << π/a 0 ) is given by the simple fermionic loop. A simple calculation gives the following expression for its imaginary part: Thus the dynamical magnetic susceptibility at small wave vectors has a threshold at 2m. It turns out that it is possible to calculate exactly the two-point correlation functions of the staggered magnetization. This is due to the fact that the corresponding operators of the Heisenberg chains are related (in the continuum limit) to the order and disorder parameter fields of 2d Ising models [15], [16]; the correlation functions of the latter operators are known exactly even out of criticality [17]. Using formulas (105) of the Appendix A, the components of the total (n (+) = n 1 +n 2 ) and relative (n (−) = n 1 −n 2 ) staggered magnetization can be represented as The fields φ + , θ + and φ − , θ − are governed by the Hamiltonians (7) and (8), respectively. Let us first consider exponentials exp(±i √ πφ + ), exp(±i √ πθ + ). Their correlation functions have been extensively studied in the context of the noncritical Ising model (see, for example, Ref. [17]). 
It has been shown that these bosonic exponents with scaling dimension 1/8 are expressed in terms of the order (σ) and disorder (µ) parameters of two Ising models as follows: Let us briefly comment on this correspondence. As is well known (see, e.g., Ref. [18]), a theory of massive Majorana fermion describes long-distance properties of 2d Ising model, the fermionic mass being Ising models. Let σ j and µ j (j = 1, 2) be the corresponding order and disorder parameters. At criticality (zero fermionic mass), four products σ 1 σ 2 , µ 1 µ 2 , σ 1 µ 2 and µ 1 σ 2 have the same critical dimension 1/8 as that of the bosonic exponentials exp(±i √ πφ + ), exp(±i √ πθ + ). Therefore there must be some correspondence between the two groups of four operators which should also hold at small deviations from criticality. To find this correspondence, notice that, as follows from (7), at This explains the first two formulas of Eq.(32). Clearly, the exponentials of the dual field θ + must be expressed in terms of σ 1 µ 2 and µ 1 σ 2 . To find the correct correspondence, one has to take into account the fact that a local product of the order and disorder operators of a single Ising model results in the Majorana fermion operator, i.e. This leads to the last two formulas of Eq.(32). To derive similar expressions for the exponents of φ − and θ − , the following facts should be taken into account: (i) the Hamiltonian (17) describing ′′ − ′′modes is diagonalized by the same transformation (18) as the Hamiltonian (12) responsible for the ′′ + ′′ -modes; (ii) the Majorana fermions now have different masses, and (iii) one fermionic branch has a negative mass. In order to take a proper account of these facts one should recall the following. (a) A negative mass means that we are below the transition. (b) It follows from (ii) that ′′ − ′′ bosonic exponents are also expressed in terms of order and disorder parameters of two Ising models, the latter, however, being characterized by different t's. We denote these operators as σ 3 , µ 3 (mass −m) and σ, µ (mass 3m). (c) Operators corresponding to a negative mass can be rewritten in terms of the ones with the positive mass using the Kramers-Wannier duality transformation Taking these facts into account we get the following expressions for the ′′ − ′′bosonic exponents: Combining Eqs. (32) and (34), from (31) we get the following, manifestly SU(2) invariant, expressions: It is instructive to compare them with two possible representations for the staggered magnetization operators for the S = 1 Heisenberg chain ( [14]): Agreement is achieved if the singlet excitation band is formally shifted to infinity. A more precise meaning of this approximation becomes apparent when one considers asymptotic behaviour of the corresponding two-point correlation functions in the two limits r → 0 and r → ∞( [19]). In the limit r → ∞ they are as follows; (33)). Therefore, as might be expected, at large distances, a difference between the ladder and the S = 1 chains appears only in exp(−3mr)terms due to the contribution of the excitation branch with M = 3m absent in the S = 1 chain. In the limitr → 0 the correlation functions are of power law form: plus non-singular terms. The ratio of the constants A 1 and A 2 is a universal quantity involving Glaisher's constant (A): We conclude this Section by writing down the exact expression for the staggered magnetization two-point correlation functions. 
The correlation function for spins on the same chain is given by The interesting asymptotics are where The complete expressions for the functions G σ,µ (r) are given in Ref. ([19]). For the interchain correlation function we get At mr << 1 it decays as (mr) −2 ; the leading asymptotics at mr >> 1 is the same as (46) (up to the -1 factor). The difference appears only in terms of order of exp(−5mr). The important point is that at mr >> 1 the contribution from the singlet excitation appears only in the fifth order in exp(−mr). Therefore it is unobservable by neutron scattering at energies below 5m. Using the above expressions we can calculate the imaginary part of the dynamical spin susceptibility in two different regimes. For |π − q| << 1 we have where the transverse "momentum" q ⊥ takes values 0 and π. The factor Z is assumed to be m-independent so that at m → 0 we reproduce the susceptibility of non-interacting chains. We have calculated the function F (ω, q) only near the 3m threshold where it is equal to For |q| << 1 we have where f (s, m) is given by Eq.(30). In this section we consider two interacting spin S = 1/2 chains with different intrachain exchange integrals J 1 || = J 2 || . It turns out that the most adequate approach in this case is non-Abelian bosonization. The reason for this is that non-Abelian bosonization explicitly preserves the SU(2) symmetry present in the Hamiltonian. The Abelian bosonization approach which does not respect this symmetry encounters difficulties. As shown by Affleck [20], by a mapping from a fermionic theory, the S = 1 2 Heisenberg antiferromagnet can be described by a k = 1, SU(2) Wess-Zumino-Witten (WZW) model with the following action; where matrix g ∈ SU (2). There can in general be marginally irrelevant perturbations to this theory, which generate logarithmic corrections to the correlation function exponents, but do not change their qualitative behaviour (i.e. power law). In general this model describes not just the spin S = 1/2 Heisenberg chain, but any (1+1)-dimensional system of fermions with the charge degree of freedom frozen out and no gap in the spin sector. The WZW model may look unfamiliar, but it is not so difficult to deal with since its operators and their exact correlation functions are already known from the application of conformal field theory [21](see also [22]). As we have mentioned above, the great advantage of the WZW model is that it explicitly possesses the SU(2) × SU(2) symmetry of the massless fermion spin sector, and is also critical The bosonized expression for the spin operator of the Heisenberg chain is given by [20]; where the currents are given by; (T a are the Pauli matrices -generators of the SU(2) group). These currents satisfy the SU(2) Kac-Moody algebra described in the Appendix A. Consider two Heisenberg chains coupled by an antiferromagnetic nearest neighbour interaction. It can be represented like this; where the dynamics of one chain is represented by the matrix g and the currents − → G R,L and the other by h and − → H R,L . The indices 1,2 distinguish between different spin wave velocities. Without a loss of generality we can put v 1 > v 2 . The currents have conformal dimensions (1, 0) and (0, 1); using the formula for the conformal dimensions of the matrices for the SU(n) group derived in Ref. [21]: we get that for n = 2, k = 1, g and h both have conformal dimensions ( 1 4 , 1 4 ). The λ 2 term is therefore the relevant interaction, whereas the current couplings are only marginal. 
For this reason, the current interaction will be neglected at this stage. Then the interaction can be written as; Making the substitution α = gh + , which leaves the measure invariant, and using the remarkable identity [21]: we arrive at the following expression for the action: ). The identity (58) is nothing very mysterious. It is simply a generalisation of an identity familiar from Abelian bosonization. To see this, consider substituting explicitly for the special case of Abelian bosonization, the U(1) fields e iβφ 1 and e iβφ 2 for the matrices α and h respectively. Then the WZW action, W (α) reduces to the action for free scalar bosons as we would expect; and the interaction term in (58) becomes; The field αh + is e iβ(φ 1 −φ 2 ) = e iβφ − , and so the identity (58) becomes; Therefore the identity (58) is just an analogue of the following simple statement: where the last term is the "interaction term". We shall consider the most relevant interaction, Tr(α + α + ) first. The effective action for α is in this approximation; From the first order RG equation we get Integrating up to a scale where the coupling becomes of order 1 and taking this to give some estimate of the dynamically generated mass, one gets M ∼ λ At this large scale the cross term containing derivatives of h and α gives the irrelevant contribution Therefore the asymptotic behaviour at large distances is governed by the following action: where c 2 ∼ λ 4/3 and which can be further modified by the coordinate rescaling: such that we finally have Excitations, which correspond to configurations where Trh = 0, acquire a gap. The estimate for this gap is The reason why the Wess-Zumino term effectively disappears from the action is the following. After substituting Eq.(73) into the expression for Γ(h) the Wess-Zumino term reduces to the topological term: where k is an integer number. The factor in front of the topological term is such that its contribution to the action is always a factor of 2πi and therefore does not affect the partition function. The mass gap of the model (75) is given by As long as this gap is much smaller than M 0 , the adopted approach is selfconsistent. The latter is achieved for any appreciable difference between the velocities. Note that the first term in the expansion coincides with the one for identical chains. Therefore a difference in dynamical magnetic susceptibilities for both cases will become manifest only at energies ω > 3m. The lowest feature in ℑmχ (R) (ω, q) is in both cases the sharp peak corresponding to the triplet excitation. Such a peak has been observed in 5 String order parameter in the spin-ladder model Den Nijs and Rommelse [25] (see also [26]) have argued that the gapful Haldane phase of the S = 1 spin chain is characterized by a topological order measured by the string order parameter The nonzero value of < O α > has been related to the breakdown of a hidden Z 2 × Z 2 symmetry [27]. In this section we use the Abelian bosonization method (section 2) to construct the string operator in the continuum limit of the S = 1/2 spin-ladder model and identify the corresponding discrete symmetry with that of the related Ising models. Since spin-rotational invariance remains unbroken, the string order parameter must respect this symmetry. However, Abelian bosonization is not an explicitly SU(2) invariant procedure. For this reason, it turns out that it is the z-component of the string operator that acquires a simple form in the continuum limit. 
On the other hand, due to the unbroken SU(2) symmetry, the very choice of the quantization (z-) axis is arbitrary; therefore the expectation values for all components of the string operator will coincide. To construct a string order parameter O z (n, m) for the spin-ladder model, we shall follow the same route as that previously used for the bond-alternating S = 1/2 chain [27] (technical details are given in the Appendix B.). We start from the lattice version of the model, construct a product of two spin-1/2 operators belonging to the j-th rung, S z 1 (j)S z 2 (j), and then take a product over all rungs between j = n and j = m: Assuming that |m − n| ≫ 1, we pass to the continuum limit in the exponential and retain only the smooth parts of the spin operators expressing them in terms of the spin currents J z a;R,L (x), (a = 1, 2): Using Eqs.(5),(97), we find that the exponential is expressed in terms of the field φ + only. Thus we find a very transparent representation for the string operator: Using Eq.(32), we find that the string operator is expressed in terms of the Ising order and disorder operators. For either sign of J ⊥ , we find that, in the limit |x − x ′ | → ∞, the vacuum expectation value of O z (x, y) is indeed nonzero: As in the case of the bond-alternating spin chain, the nonvanishing expectation value of the string order parameter in the limit of infinite string manifests breakdown of a discrete Z 2 ×Z 2 symmetry. This is the symmetry of two decoupled Ising models described by the Hamiltonian H + in the Majorana fermion representation (14): Conclusions As the reader might see the spin ladder presents an exciting opportunity to study a formation of massive spin S = 1 and S = 0 particles which appear as bound states of spin S = 1/2 excitations of individual Heisenberg chains. At small interchain coupling |J ⊥ | << J || the masses of these particles are of order of |J ⊥ |. The S = 1 branch is always lower independently of the sign of J ⊥ . At J ⊥ /J || → 0 the singlet spectral gap is three times as large as the triplet one. The comparative smallness of the interchain coupling creates room for the crossover from the single chain behaviour to the strong coupling regime which resembles the S = 1 chain. The imaginary part of the dynamical spin susceptibility χ ′′ (ω, q; q ⊥ ) calculated in Section 3 contains essential information about this crossover. At small energies the susceptibility exhibits a sharp peak around q = π correspondinding to the stable S = 1 massive particle; at energies ω > 3m χ ′′ (ω, q) exhibits an incoherent tail originating from the multi-particle processes. Below the 5m-threshould the singlet branch does not contribute to χ ′′ (ω, q) and the latter coincides with the susceptibility of a S = 1 chain. The contribution from the singlet mode becomes essential at energies much greater than the spectral gap and the susceptibility asymptotically approaches its value for a spin-1/2 chain. We emphasise that the described picture holds only in the ideal limit J ⊥ /J || → 0. We suppose that in real systems it will be difficult to make this ratio less than 0.1. Acknowledgements We acknowledge useful discussions with I.E. Dzyaloshinskii can be mapped onto fermionic theories. Using bosonization, these can be recast as generalised Sine-Gordon or WZW models. This is useful because a great deal is known about these theories, such as correlation functions, scaling dimensions of operators etc. A brief summary of this approach is given below. Following Refs. 
[20], we start from a symmetry preserving fermionization of the spin operators To eliminate the redundant zero-and double-occupancy states, the constraint α ψ † nα ψ nα = 1 for all lattice sites n should be imposed. Such a constraint will effectively work, if one considers a 1/2-filled, U > 0 Hubbard model for the field ψ nα . In this model, a Mott-Hubbard charge gap m c is known to exist for any positive U. Therefore, at low energies, |E| ≪ m c , only spin excitations remain; those describe universal dynamical properties of the spin-chain model (88) in the continuum limit. Assuming that U ≪ t, we linearize the free-particle spectrum near two Fermi points, ±k F (k F = π/2a 0 ), and decompose the Fermi field into right-moving and left-moving chiral components: satisfying anomalous (U(1) and SU(2)) Kac-Moody algebras: (with similar relations for the left components). These algebras lead to fermionboson duality which allows to represent the Hamiltonian of free fermions as a sum of two independent (commuting) contributions of gapless charge and spin collective modes (Sugawara form): The charge part is equivalently described in terms of a massless scalar field φ c . where Π c is the momentum conjugate to the field φ c , one obtains The spin part H 0 SU (2) represents the level k = 1 SU (2) which at g ∼ U/t > 0 occurs in its strong-coupling, massive phase (β 2 < 8π), with the single-soliton mass m c being just the the Mott-Hubbard commensurability gap. In the spin sector, interaction −2gJ R · J L is added to H 0 SU (2) . This interaction is marginally irrelevant (since g > 0). Therefore, the universal scaling properties of the Heisenberg S = 1/2 spin chain (88) are described by the level k = 1 WZW [20]. The possibility of an Abelian bosonization of the Heisenberg chain (88) stems from the fact that conformal charges of the k = 1 SU(2) WZW models and free massless Bose field coincide: C W ZW SU (2),k=1 = C boson = 1. Using relations can be expressed in terms of J z -currents only; introducing then a pair of canonical variables, φ s and Π s , via one finds The price we pay for this simplification is the loss of spin rotational invariance in the bosonized structure of the spin currents: the J x and J y cannot be represented as simply as J z , and require bosonization of the Fermi fields: Linear combinations where ∂ x θ c,s = Π c,s . To bosonize J ± R,L , use (99) to obtain: Note that, as expected, the charge field φ c does not contribute to the spin SU (2) currents. Moreover, despite the fact that the definitions (101) contain cutoff a 0 explicitly, the current-current correlation functions are cutoff independent and reveal the underlying SU(2) symmetry: The SU(2) currents J R (x), J L (x) determine the smooth parts of the spin operators in the continuum limit. Namely, at a 0 → 0 where is the staggered part of the local spin density. When bosonizing (104), the (redundant) charge excitations emerge, since offdiagonal bilinears like ψ † R ψ L and ψ † L ψ R describe particle-hole charge excitations with momentum transfer ±2k F . We find: Being interested in the energy range |E| ≪ m c , one can replace the charge operator sin( √ 2πφ c /2) by its nonzero vacuum expectation value; we denote this (nonuniversal) value by λ = − < sin( √ 2πφ c /2) > and arrive at bosonization formulas for n(x): This completes the bosonization of the spin operators for the isotropic Heisenberg chain. 
Notice that the critical dimensions of the smooth and staggered parts of the spin densities are different: dim J a = 1, dim n a = 1/2. Appendix B. Hidden Z 2 × Z 2 symmetry and string order parameter in the bond-alternating S = 1/2 Heisenberg chain In addition to the S = 1/2 spin-ladder model, there is another system which is related to the S = 1 spin chain -the spin-1/2 chain with alternating ferromagnetic and antiferromagnetic bonds: This model is instructive in the sense that the string order parameter, whose nonzero expectation value signals breakdown of a hidden discrete symmetry, can be easily constructed [27]. The analogous construction is then directly generalized for the spin-ladder model. A gap in the excitation spectrum of the model (107) persists in the whole range 0 < β < ∞. At β = 0 the ground state of model represents an array of disconnected singlets. At β ≫ 1, strong ferromagnetic coupling between the spins on the < 2j, 2j + 1 >-bonds leads to the formation of local triplets, and the model (107) reduces to a S = 1 Heisenberg chain. Using a nonlocal unitary transformation, Kohmoto and Tasaki[27] have demonstrated equivalence of the model (107) to a system of two coupled quantum Ising chains, i.e. two coupled 2d Ising models. This transformation provides an exact representation of the spin operators S α n as products of two Ising-like order (σ, τ ) and disorder (σ,τ ) operators, essentially a lattice version of relations (32), (34) (see, e.g., [28]). Nearest-neighbor bilinears of the original spin operators take the form where σ x j =σ z [ (βσ z j σ z j+1 + σ x j ) + (βτ z j τ z j+1 + τ x j ) + (βσ z j σ z j+1 τ z j τ z j+1 + σ x j τ x j ) ] (111) The model (111) is invariant under independent rotations of the σ and τ spins by angle π about the spin x-axis which comprise a Z 2 × Z 2 group. Since this group is discrete, it can be spontaneously broken, in which case the spectrum of the system would be massive. It is easily understood from (111) that, in the limit of large positive β when the model reduces to the S = 1 chain, the Z 2 ×Z 2 -symmetry is broken, with < σ z j > = < τ z j > = < σ z j τ z j > = 0, (It has been used in Eq.(112) that, under transformation µ z j = σ z j τ z j to a new pair of variables, µ z j and τ z j , the two-chain Hamiltonian (111) preserves its form.). Representation (108) hints to the way how an order parameter measuring breakdown of the Z 2 × Z 2 -symmetry should be constructed out of the spin operators S α n . Following Kohmoto and Tasaki, consider a product = (−1) n−k (σ z k σ z k+1 )(σ z k+1 σ z k+2 ) · · · (σ z n−1 σ z n ) = (−1) n−k σ z k σ z n (113) Using the relation iσ α j = exp(iπσ α j /2), we find that This is the x-component of the string order operator. According to (112), in the limit |k − n| → ∞, its vacuum expectation value is nonzero: It is important that the string always contains an even number of sites, starting at an even site and ending at an odd site. For a string starting at an odd site and ending at an even site, the corresponding string operator is expressed in terms of disorder operators and therefore has zero expectation value: follows from (112). Notice that in the limiting case β ≫ 1, the string order parameter (118) for the S = 1/2 bond-alternating chain automatically transforms to the exponential of the string order parameter (81) for the S = 1 chain.
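For reference, the string order parameter referred to in Section 5 and in this appendix is conventionally written in the standard den Nijs-Rommelse form below; it is quoted here only as a notational reminder, with the α = z component being the one given a simple bosonized form in the text.

```latex
% Standard den Nijs--Rommelse string order parameter (quoted for reference).
\begin{equation}
  \mathcal{O}^{\alpha}
  = -\lim_{|m-n|\to\infty}
    \left\langle
      S^{\alpha}_{n}\,
      \exp\!\Big( i\pi \sum_{k=n+1}^{m-1} S^{\alpha}_{k} \Big)\,
      S^{\alpha}_{m}
    \right\rangle ,
  \qquad \alpha = x, y, z .
\end{equation}
```

A nonzero value of this limit in the gapped phase is what signals the hidden Z2 x Z2 symmetry breaking discussed in Section 5.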
2014-10-01T00:00:00.000Z
1995-08-14T00:00:00.000
{ "year": 1995, "sha1": "0bcdbb2fe3bad3d6f6a7156077c8d597e5a250cc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9508047", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "33a73a4cef34f064232af67b348949769e7434c0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
267632540
pes2o/s2orc
v3-fos-license
Evaluation of exhaust emissions of agricultural tractors using portable emissions measurement system in Korean paddy field Recently, diesel engine emissions have been designated as a first-class carcinogen by the World Health Organization (WHO). As such, problems with diesel engine emissions continue to increase around the world. This study aimed to analyze the emissions (CO, NOx, PM) of agricultural tractors during farming operations in order to build a reliable national inventory of air pollutant emissions. Emission data were collected using a portable emission measurement system during actual agricultural operation. The load factor (LF) of the engine was calculated using the collected engine information, the emission factor was analyzed using the LF and the measured emission. The LF was significantly different from the current standard value of 0.48, which is used in Korea to calculate exhaust emissions. The deviation ratio of the emission factor was 0.039 ~ 56.59 compared to Tier-4 emission regulation standards. Under many conditions, the calculated emission factor was higher than the emission limit. Thus, this study provides useful information for emission inventory construction through emission calculation under actual conditions and suggests the need to realize the currently applied emission factor. issues.However, the emissions of agricultural machinery currently managed by CAPSS are calculated using the number of agricultural machinery, load factors (LFs), etc., based on emission factors developed by the US environmental protection agency (EPA) 3 .Because the agricultural work environment and conditions (especially soil) in Korea are very different, the accuracy of this method low, and the reliability of the data cannot be easily secured.It is also impossible to calculate the amount of emission that reflects actual working conditions 12 .In addition, because the currently applied emission factor was calculated for an engine unit using a dynamometer, applying it to reflect the actual agricultural work conditions is difficult.To overcome this problem, in the US and Europe, vehicle monitoring is performed using a portable emission measurement system (PEMS) [13][14][15] .In the field of construction machinery, classified as NRMM, along with agricultural machinery, some studies have reported the measurement of real working emission (RWE) data under actual vehicle conditions using a PEMS [17][18][19][20] .Kim and Lee measured exhaust emission data using a PEMS under actual working conditions (no load and load conditions) of an excavator and analyzed the correlation between engine load and major factors of exhaust emissions to estimate CO 2 emissions 16 . 
In the field of agricultural machinery, some studies on the measurement of exhaust gas emissions from tractors using PEMS equipment have been conducted by researchers from countries in the United States and Europe 2,21 .Lijewski and Merkisz, in which the emissions of passenger vehicles and agricultural tractors were compared based on actual driving under on-road conditions 11 .They reported that the emissions of air pollutants (CO 2 , CO, NO x , HC, and PM) for tractors were higher than those for passenger vehicles.In particular, the largest differences were recorded for road emissions of CO and NO x (90 and 97% lower, respectively, for passenger cars).Merkisz et al established a measurement system using a PEMS to measure the CO 2 emissions of tractors according to actual vehicle conditions and conducted experiments at three speeds (5, 10, and 15 km/h) 21 .It was reported that the CO 2 emissions per unit area at 10 km/h were the highest (18.8 kg/ha).Lindgren and Hansson simulated the effects of engine control strategies and transmission characteristics on the exhaust gas emissions of agricultural tractors according to on-road transport and soil cultivation 22 .They reported that different driving strategies and transmission characteristics can be used to significantly influence emissions without affecting work hours or fuel consumption.However, in Korea, there has been no case of measuring RWE using PEMS under actual vehicle conditions, and only a few studies have been reported in which exhaust emissions were estimated using fuel consumption 11,22 .Therefore, research is required to analyze the exhaust emissions and emission factors of each air pollutant in Korea by measuring the tractor RWE generated under actual working conditions 10,23 . The aim of this study is to secure basic data and evaluate standard rationalization for emission factors.To this end, in this study, the engine characteristics and exhaust gas emissions of agricultural tractors were measured and analyzed according to various tillage treatments (moldboard plow tillage and rotary tillage operations).The detailed research goals are as follows: (1) to develop a data measurement system for measuring tractor engine characteristics and exhaust emissions; (2) measure and analyze tractor engine and exhaust emission data through actual tillage operations; (3) map the measured engine characteristics using the actual work on the engine performance curve; and (4) evaluate emission factors by comparing the analysis results of emission factors with current emission regulations. Test engine In this study, a four-wheel drive tractor was used to measure engine characteristics and exhaust emissions during actual field operation.The dimensions and empty weight of the tractor were 4020(L) × 2270(W) × 2790(H) mm and 4000 kg, respectively.The maximum traction force of the tractor was 26.18 kN at a travel speed of 2.08 km/h, and the maximum running speed was 33 km/h.The tractor used is a 2019 model, and the engine mounted on the tractor under test was a diesel engine that satisfied Tier-4 emission regulations.The engine displacement was 3409 cc and the compression ratio was 17:1.The engine rated torque and power of the tractor were 290 Nm and 67 kW, respectively, at the rated engine rotational speed of 2200 rpm.The tractor was equipped with selective catalyst reduction (SCR). 
Measurement system A tractor measurement system was constructed to measure the engine characteristics and exhaust emissions according to the tillage operations, as shown in Fig. 1.Engine characteristics such as torque, rotational speed, and power and fuel consumption of the tractor were collected in real time through controller area network (CAN) communication according to the J1939 protocol.In this study, a PEMS was used to collect tractor exhaust emission data using the RWE during the major tillage operations.The PEMS (OBS-one, Horiba, Kyoto, Japan) used in this study is an on-board exhaust gas measurement system used in various industrial fields, such as automobiles, construction machinery, and agricultural machinery.It can measure exhaust volume flow rate (EVFR), CO, NOx, PM, etc 15,24 .This PEMS is divided into gas analyzer (CO, NOx) and particle analyzer (PM).In the emission gas calculation, non-dispersive infrared (NDIR); a heated chemiluminescence detector (HCLD); and the filter gravimetric method (FGM), diffusion filling method, and diffusion charging method (DCM) were used for CO; NOx; and PM, respectively.The PEMS used in this study applies a dilution sampling method, and the dilution ratio is 10-20:1.The information measured by a gas analyzer is measured in dry form by removing moisture from the sample before measurement, and then converted to wet form through post-processing.For particle analyzers, measurements are made in real time in μg/m 3 .Therefore, in the case of PM, the separate dry-wet concept is not applied.The temperature of the filter block is maintained at 40-50 °C while the equipment is operating.One hour before/after the test, the PM filter is conditioned for a certain period of time under constant temperature/humidity conditions and the weight is recorded.The exhaust gas temperature is measured in the PEMS module, not the engine, after passing through a pipe of 2.5 m, so the results are expected to be slightly lower than the engine temperature.The PM sensor has its own zero-point adjustment function, and the equipment was calibrated to zero before and after the test.In accordance with RDE (real driving emission) regulations, the PEMS equipment was calibrated (zeroing and spanning) using standard gases before and after the RDE test, which lasted approximately 4 h.The standard used for calibration is a product of Daewoo Gas Corporation, and the concentrations of the span calibration gas are 7690, 1540, and 259.6 μmol/mol for CO, NO, and NO 2 , respectively.Sensor drift was confirmed through zeroing and spanning calibration before and after testing.The system response time of PEMS components is less than 12 s.The time-alignment of data collected from various sensors (Exhaust gas, GPS, engine OBD) was matched by taking into account operation start and end times.The PEMS system used in this study includes data analysis software with a built-in time alignment function, which solves the problem of response time differences between various components of PEMS.The detailed specifications and calculation method for the emission gas of the PEMS are listed in Table 1. The PEMS equipment was covered using a casing jig to protect it from the dust generated during agricultural work.There was insufficient space to install the PEMS on the tractor; therefore, the existing ballast was removed, and the PEMS was installed in the ballast position in front of the tractor, as shown in Fig. 2. 
The weights of the PEMS and jig were approximately 100 and 200 kg, respectively, and the total added weight was 300 kg (Table 2). Field experiment The field experiment was conducted in October 2020 in a paddy field of 3132 m 2 (36 m × 87 m) located at 674-10, Dangsan-ri, Dangjin-si, Chungcheongnam-do, South Korea (36°56' 04.0" N 126°37' 58.1" E).The ambient temperature and humidity of the field experiment site were 17 to 20 ℃ and approximately 75%, respectively.The experiment was conducted for approximately 4 h per day over the entire field experiment site.The soil texture of the field experiment site was Loam by the soil classification triangle of United States Department of Agriculture (USDA), and the soil moisture content was measured at 20 random locations in the test sites using soil moisture sensor (TDR350; Spectrum Technology, Aurora, IL, USA), and the average value was 41.8%.Plow tillage and rotary tillage, which are the most widely used major tillage operations in Korea, were selected for field data collection.The implements used were a moldboard plow (WJSP-8, Woongin Machinery Co., Ltd., Gimje, Korea) and rotavator (E260, Celli SpA, Forli, Italy) 25 .The depth during tillage operations was set to be maintained at the 15-20 cm level according to the recommendations of farmers in consideration of the characteristics of agricultural operations in Korea.The number of working stages was selected as B3 (7.60 km/h) for plow tillage and A3 (2.67 km/h) for rotary tillage 26 .The data from the tillage operations used in this study were based on the minimum unit condition consisting of one set of straight forward (tillage) and steering operation.The agricultural operation of the tractor was carried out in a C-type pattern.The engine rotational speed was set at the rated speed (2300 rpm).Tractors are controlled by decreasing engine rotational speed (lowering the throttle) and increasing torque when higher torque is required based on real-time agricultural work load.Therefore, basically the tractor is operated at the 2300 rpm, but when there is a demand for a high load, the engine rotation speed may be lowered.In this study, only data for hot conditions after the tractor's engine was sufficiently preheated were used for analysis, and data on cold conditions were not considered.To collect data only after the engine was sufficiently hot, the experiment was performed 5 min after engine start.This is a result that also satisfies the values presented in previous studies 27 .The reference value of the cold condition (cold start) was based on the coolant temperature of less than 70°C as defined in EU Directive 2012/46/EU, and temperatures above that were considered hot condition 28 . 
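As a minimal illustration of the screening step just described, the sketch below keeps only hot-condition samples using the 70 °C coolant-temperature criterion and then separates straight-pass (tillage) samples from turning (steering) samples, a split described in more detail below. The column names and the steering flag are assumptions made for illustration and do not reflect the actual logger format.

```python
# Minimal sketch of the data screening described above; all values and column
# names are hypothetical and for illustration only.
import pandas as pd

# Hypothetical log: one row per sample with the coolant temperature (deg C)
# and a flag marking whether the tractor is in a turning (steering) section.
log = pd.DataFrame({
    "coolant_temp_c": [55, 68, 72, 85, 90, 91],
    "is_steering":    [False, False, False, False, True, True],
    "torque_nm":      [120, 180, 260, 300, 90, 70],
})

# Keep only hot-condition samples (coolant temperature of at least 70 deg C,
# following the EU Directive 2012/46/EU criterion mentioned above).
hot = log[log["coolant_temp_c"] >= 70]

# Split the hot-condition data into tillage and steering sections.
tillage = hot[~hot["is_steering"]]
steering = hot[hot["is_steering"]]

print(len(tillage), "tillage samples,", len(steering), "steering samples")
```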
Tractors perform tillage operations by traveling straight ahead, but they also turn at the end of each straight path to work on the next row. The load and exhaust emission characteristics of the tractor differ between the tillage operation on the straight path and the steering operation in the turning sections. Therefore, in this study, the entire work section of the tractor was divided into a tillage section and a steering section. Depending on the operating conditions, the dynamic characteristics of the engine vary significantly, which directly affects the exhaust emissions 29 . Therefore, the data collected during the two tillage operations were divided into tillage and steering sections, and the dataset for each section was analyzed separately 10,11 . The sampling rate for both tillage operations was 200 Hz. The data collection times for the plow and rotary tillage operations were 117.91 and 142.89 s, respectively. The number of data points used in the analysis was 13,588 and 9994, respectively, for the tillage and steering operations in plow tillage, and 20,758 and 7820, respectively, for the tillage and steering operations in rotary tillage.

Load factor

The LF refers to the average power ratio of the engine; it is an important indicator that shows how much power is actually used compared with the rated power of the engine and is essential for calculating the exhaust emission factors of air pollutants and emission sources 30 . In Korea, an LF of 0.48 is applied uniformly to agricultural and construction machinery, regardless of conditions such as the type, model, and year of the machinery 4 . Because this does not reflect the engine load characteristics that vary with operating conditions, a method that reflects the actual LF is necessary. In this study, real-time engine power was calculated using Eq. (1) from the engine rotational speed and torque measured during actual agricultural operations, and the LF was derived using Eq. (2) from the real-time engine power and the rated power:

EP = (2π × N × T) / 60,000 (1)

LF = EP_a / EP_r (2)

where T denotes the torque (Nm), N denotes the rotational speed (rpm), EP denotes the engine power (kW), EP_a denotes the measured engine power, and EP_r denotes the rated engine power.

Exhaust emission and emission factor

In Korea, the emissions from agricultural machinery, including tractors, are calculated and managed by NAIR through CAPSS 4 . In CAPSS, emissions are calculated from the number of units, the engine rated power, the LF, the annual operating time, and the emission factor, based on Eq. (3) 4 :

E = N × EP × LF × HRS × EF × 10⁻³ (3)

where E denotes the exhaust emission (kg/year), N denotes the number of units, LF denotes the load factor, EP denotes the engine rated power (kW), HRS denotes the annual operating time (h/year), and EF denotes the emission factor (g/kWh). The annual agricultural machinery yearbook published by the Korean Society of Agricultural Machinery (KSAM) was used to determine the number of tractors 6 , and the results of a survey on agricultural machine use, provided by the National Institute of Agricultural Sciences (NAS) of the Korea Rural Development Administration, were used to determine the annual operating time 31 . As mentioned in Section "Load factor", the LF of 0.48 is currently applied uniformly; in this study, however, the value calculated using Eq. (1) was applied. The emission factors for tractors from the Korea National Air Pollutant Emissions Guidebook (IV) published by NAIR were adopted 4 . In this guidebook, emission factors are classified into those before 2012 (~Tier-2), those for 2013-2014 (Tier-3), and those after 2015 (Tier-4) according to the emission regulations, as shown in Table 3. In this study, the emission factor under RWE conditions was calculated using Eq. (3),
and the calculated emission factors were compared and evaluated against the Tier-4 regulation values listed in Table 3. Because this study analyzed the emission factor using the real-time LF and the exhaust emissions measured for a single tractor, the number of tractors and the annual operating time were not taken into account.

Evaluation

To analyze the standard deviation with respect to the mean of the sample group, the relative standard deviation (RSD) was calculated using Eq. (4). SPSS Statistics (SPSS 25, SPSS Inc., New York, USA) was used for the statistical analysis. The emission factor of the tractor exhaust gas under RWE conditions can be analyzed as a ratio, that is, the deviation ratio (DR), obtained by dividing the calculated emission factor by the emission factor of the regulation standard, as shown in Eq. (5) 32 . This provides intuitive results by comparing the emission factors calculated under RWE conditions with the emission regulations 33 :

RSD (%) = (S / X̄) × 100 (4)

DR = EF / EF_s (5)

where RSD denotes the relative standard deviation, S denotes the standard deviation, X̄ denotes the mean of the sample group, DR denotes the deviation ratio, EF denotes the calculated emission factor (g/kWh), and EF_s denotes the emission factor from the regulation standard (g/kWh).

Engine characteristic profile

The profiles of the engine characteristics (rotational speed, torque, power, and fuel consumption) for the tillage and steering sections during plow tillage are shown in Fig. 3. In the tillage section, the engine rotational speed was in the range of approximately 850-2300 rpm, and the engine torque was in the range of approximately 30-350 Nm, with a trend opposite to that of the engine rotational speed. The engine power, calculated from the engine rotational speed and torque, showed a large variation in the range of approximately 3-67 kW. In particular, the engine rotational speed and torque exhibited opposite tendencies. This is related to the powertrain's ability to lower the engine rotational speed and increase the engine torque by throttling down when a high torque is required. This is consistent with the trend in engine characteristics under varying tractor load during tillage operation reported in a previous study. In addition, the fuel consumption was in the range of approximately 6-18 L/h, exhibiting a profile similar to that of the engine power 5 . In the steering section, the engine rotational speed and torque were 800-1500 rpm and 30-350 Nm, respectively, and the engine power was 3-50 kW with an irregularly fluctuating profile. Table 4 shows the statistical analysis of the engine characteristic data for each work section during plow tillage. Overall, higher rotational speed, torque, power, and fuel consumption were observed in the tillage section than in the steering section. In particular, the power in the tillage section was approximately 1.1 to 21 times (average 5.3 times) higher than that in the steering section. The RSDs of torque and power in the steering section were approximately 8.5 and 12.3 times higher, respectively, than those in the tillage section, indicating higher data variability in the steering section. Figure 4 shows the engine profile for rotary
tillage.In the tillage section, the engine rotational speed was in the range of approximately 2000-2200 rpm, with a maximum variability of 10%.The engine torque was in the range of approximately 280-315 Nm and showed fluctuations of up to 13%.The engine output showed a change of up to 5% in the range of approximately 64-67 kW.In addition, the fuel consumption was in the range of approximately 13-18 L/h.In the steering section, the engine rotation speed and torque were 800-2200 rpm and 30-330 Nm, respectively, and the engine power fluctuated irregularly, ranging from 3 to 66 kW.As shown in Fig. 3, engine torque and rotational speed showed very large fluctuations during plow tillage operation but on the other hand, engine performance showed relatively low fluctuations during rotary operation.This is believed to be due to differences in characteristics (particularly, presence or absence of PTO operation) between plow and rotary tillage. Table 5 shows the statistical analysis results of the engine characteristic data for each work section according to rotary tillage.According to the results, the rotational speed, torque, power, and fuel consumption in the tillage section are higher than those in the steering section, similar to plow tillage.In particular, the power was found to be approximately 177% in the tillage section compared to that in the steering section.The RSDs for the torque and power in the steering section were approximately 2700 and 10,700%, respectively, compared with those in the tillage section. Load factor analysis Figure 5 a shows the results of the mapping of plow tillage-section and steering-section data on an engine LF curve.Because this study was performed over a wide range of rotational speeds in the tillage and steering sections during plow tillage, the LFs in the tillage and steering sections are approximately 0.59-0.90and 0.04-0.8,respectively.Additionally, the average LFs are 0.80 (red circle) and 0.15 (blue star), respectively.This is significantly different from the standard value (0.48), which is collectively applied in current agricultural machinery in Korea regardless of the conditions such as the type of agricultural machinery and working conditions 12 .Figure 5 b shows the results of the mapping of rotary tillage-and steering-section data on an engine LF curve.For rotary tillage, only a relatively narrow rotational speed of 2000-2200 rpm is used in the tillage section, and it can be seen that the operation was performed under a high LF close to the maximum.However, a wide range of rotational speeds is observed in the steering section.The LF in the tillage-and steering-sections are approximately 0.96-0.99 and 0.04-0.99,respectively, and the averaged LFs are 0.98 and 0.55, respectively.This is significantly different from the currently applied LF of 0.48, which is similar to that in the plow tillage case. Analysis of the exhaust emission of the tractor due to tillage operations The tractor exhaust emissions collected during plow tillage were divided into tillage-and steering-section data, and the results are shown in Fig. 6 a.However, it is difficult to distinguish between the tillage-and steeringsections using only the exhaust emission characteristics.Therefore, in this study, the sections of the exhaust emission profile were divided by applying the standard value that divided the work section according to the tractor load characteristics (Figs. 
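To make the calculation chain described in the Methods concrete, the short sketch below works through Eqs. (1), (2), (4) and (5) for one hypothetical work pass. The torque, speed, emission mass and regulatory limit used here are placeholder values, not measurements or official figures from this study.

```python
# Illustrative walk-through of the LF, RSD, emission-factor and DR
# calculations; all numerical inputs are hypothetical placeholders.
import numpy as np

rated_power_kw = 67.0            # rated engine power EP_r of the test tractor

# Hypothetical samples of engine torque (Nm) and rotational speed (rpm).
torque_nm = np.array([280.0, 290.0, 275.0, 285.0])
speed_rpm = np.array([2150.0, 2200.0, 2180.0, 2120.0])

# Eq. (1): engine power (kW) from torque and rotational speed.
power_kw = 2.0 * np.pi * speed_rpm * torque_nm / 60_000.0

# Eq. (2): load factor as the ratio of measured to rated engine power.
lf = power_kw.mean() / rated_power_kw

# Eq. (4): relative standard deviation of the power samples, in per cent.
rsd_power = power_kw.std(ddof=1) / power_kw.mean() * 100.0

# Emission factor (g/kWh): hypothetical emitted CO mass divided by the work
# done over the pass (mean power times pass duration).
co_mass_g = 12.0                 # hypothetical CO mass emitted during the pass
duration_h = 120.0 / 3600.0      # a 120 s pass
ef_co = co_mass_g / (power_kw.mean() * duration_h)

# Eq. (5): deviation ratio against a placeholder regulatory emission factor.
ef_standard = 5.0                # placeholder standard value, g/kWh
dr_co = ef_co / ef_standard

print(f"LF = {lf:.2f}, RSD(power) = {rsd_power:.1f} %")
print(f"EF_CO = {ef_co:.2f} g/kWh, DR = {dr_co:.2f}")
```

A DR greater than 1 in this sketch would indicate, as in the analysis that follows, that the emission factor obtained under real working conditions exceeds the reference value used for comparison.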
The EVFR in the tillage and steering sections is in the ranges of approximately 100-120 g/s and 27-120 g/s, respectively. In all sections, the CO and PM emissions fluctuated irregularly, and the NOx emissions fluctuated for 30 s before rising and subsequently decreasing from 30 to 60 s, beyond which they tended to converge to zero. Figure 6b shows the tractor exhaust emissions collected during rotary tillage. Similar to plow tillage, the entire section was divided based on the engine characteristics. The EVFR in the tillage and steering sections is in the ranges of approximately 111-120 g/s and 28-118 g/s, respectively. In all sections, the CO and PM fluctuate irregularly, and the NOx emissions fluctuate for 3 s before rising and subsequently decreasing from 3 to 40 s, beyond which they tend to converge to zero. This trend is similar to that in plow tillage, but the NOx emissions decrease at a faster rate than in plow tillage. In general, exhaust gas emissions are reduced by various after-treatment devices, and the tractor used in this study is equipped with an SCR, which reduces NOx emissions. As shown in Fig. 6, in both operations NOx increases at the beginning of the work and then gradually decreases over time, which is believed to be an effect of the operation of the SCR. Overall, the exhaust temperatures during plow tillage and rotary tillage were in the ranges of 180-191 °C and 192-225 °C, respectively. As mentioned earlier, because the exhaust gas temperature was measured sufficiently far from the engine, these results are considered consistent with research showing that temperatures above 190 °C must be reached for the SCR to operate properly^34.

Based on the plow tillage data, the CO, NOx, and PM emissions were statistically analyzed for the tillage, steering, and entire section (tillage section + steering section), and the results are presented in Table 6. The average CO, NOx, and PM emissions in the tillage section were 8.8 × 10⁻³, 1.0 × 10⁻², and 2.2 × 10⁻³, respectively, and those in the steering section were 5.6 × 10⁻³, 3.6 × 10⁻⁵, and 1.4 × 10⁻³, respectively. The RSDs of CO, NOx, and PM were 18.3-52.2%, 67.6-151.9%, and 12.6-52.5%, respectively, depending on the section. Consequently, the NOx emissions exhibit the highest fluctuation over the entire section. This is considered to be due to the rapid reduction in NOx under the influence of the SCR after a certain operating time.

Analysis of emission factors for the tractor by air pollutants

The emission factors for each working condition, based on the obtained tractor emissions (CO, NOx and PM) and LFs, were calculated and compared with those outlined in the Tier-4 emission standards, as shown in Fig. 7. When compared with the Tier-4 standard, the emission factors of CO are higher under all conditions, as shown in Fig. 7a. The emission factors for NOx show similar or higher values under all conditions except the steering section for both tillage operations, as shown in Fig. 7b. The emission factor for PM under all conditions was higher than the Tier-4 emission standard, as shown in Fig. 7c.
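For reference, one plausible way to turn measured mass rates (g/s) and engine power into a section emission factor (g/kWh) is sketched below in Python; the time series values are placeholders, and the paper's exact post-processing based on the LF and Eqs. (3)-(5) may differ in detail.

```python
import numpy as np

def emission_factor_g_per_kwh(emission_g_per_s, power_kw, dt_s=1.0):
    """Section emission factor: total emitted mass divided by total engine work.
    emission_g_per_s and power_kw are time series sampled every dt_s seconds."""
    mass_g = np.sum(np.asarray(emission_g_per_s) * dt_s)
    work_kwh = np.sum(np.asarray(power_kw) * dt_s) / 3600.0
    return mass_g / work_kwh

# Placeholder 1 Hz samples for one tillage pass (not the paper's data).
nox_gps = np.array([0.010, 0.012, 0.008, 0.009])    # NOx mass rate [g/s]
power_kw = np.array([62.0, 65.0, 60.0, 64.0])       # engine power [kW]
print("EF_NOx = %.3f g/kWh" % emission_factor_g_per_kwh(nox_gps, power_kw))
```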
Table 8 presents the analysis results of the average emission factor for each working condition. The CO values during plow tillage are 0.754, 3.046, and 1.725 g/kWh in the tillage, steering, and total sections, respectively, and those during rotary tillage are 0.378, 1.682, and 0.735 g/kWh, respectively. The NOx emissions during plow tillage are 0.718, 0.021, and 0.423 g/kWh in the tillage, steering, and total sections, respectively, and those during rotary tillage are 0.232, 0.007, and 0.171 g/kWh, respectively. The PM emissions during plow tillage are 0.191, 0.906, and 0.494 g/kWh in the tillage, steering, and total sections, respectively, and those during rotary tillage are 0.132, 0.570, and 0.252 g/kWh, respectively.

Evaluation of emission factors for each working condition using emission standard

The DR was evaluated by comparing the emission factors for each analyzed working condition (Table 8) with the Tier-4 emission regulations, as shown in Fig. 8. The DR is a numerical value that indicates how much higher the measured emission factor under each condition is than the reference value, thereby enabling an intuitive comparison. The measured DR of CO was higher than 1 under all operating conditions, which indicates that the measured emission factor is higher than the Tier-4 emission factor. Overall, the measured emission factor of CO was 5.324 to 42.9 times the Tier-4 emission factor; the minimum of 5.324 times occurred in the tillage section (c) for rotary tillage, and the maximum of 42.9 times in the tillage section (b) for plow tillage. The measured DR of NOx was less than 1 for the steering section of plow tillage and for the steering and total sections of rotary tillage, showing that these conditions satisfy the Tier-4 emission standards. This result is due to the fact that the NOx emissions in the steering section are close to zero. In the three remaining working conditions, the DR of NOx ranged from 1.236 to 3.82, exceeding the Tier-4 emission standards. In all six conditions, the DR of PM exceeded 1, i.e., it was higher than the Tier-4 emission standard, with DR values of 8.25-56.59, which is very high compared to the Tier-4 emission standard.
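The DR of Eq. (5) is a simple ratio; the sketch below recomputes it for the plow-tillage total-section emission factors of Table 8. Because the reference emission factors of Table 3 are not reproduced in the text, the reference values used here are purely illustrative placeholders.

```python
def deviation_ratio(ef_measured, ef_standard):
    """Deviation ratio, Eq. (5): DR = EF / EF_s."""
    return ef_measured / ef_standard

# Measured emission factors from Table 8 (plow tillage, total section).
measured = {"CO": 1.725, "NOx": 0.423, "PM": 0.494}   # g/kWh
# Reference emission factors per Table 3 are not given in the text,
# so purely illustrative placeholders are used here.
reference = {"CO": 0.15, "NOx": 0.30, "PM": 0.02}     # g/kWh (placeholders)

for gas in measured:
    print(gas, "DR = %.2f" % deviation_ratio(measured[gas], reference[gas]))
```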
Discussions

The aim of this study was to measure the LF and emissions of tractors under actual working conditions and to evaluate the emission factor based on the LF and emissions. The proposed PEMS-based measurement system was considered suitable for collecting exhaust emissions in the field. Based on this measurement system, exhaust emissions were measured in the field, and the data were analyzed separately for the tillage and steering sections. The LF value according to engine rotational speed was mapped onto the engine performance map and compared with the current standard value of 0.48. In this study, the emission factor was analyzed based on the LF and emission data measured under actual working conditions. It was concluded that the emission factor differs significantly from the Tier-4 emission standard. This difference can be considered a reasonable result, since the Tier-4 emission standards are not derived from actual operating conditions in the field. Nevertheless, to verify the results of this study, they were compared with similar previous studies. Data related to agricultural machinery types, power, emission standards, and exhaust emissions (CO, NOx and PM) by operation derived from previous research are listed in Table 9. The subjects of investigation for the comparative analysis are 70-132 kW tractors and an 86 kW agricultural combine harvester. In previous studies, CO overall ranged from 0.2 to 5.8 g/kWh, and the values proposed in this study (plow tillage: 1.725 g/kWh and rotary tillage: 0.735 g/kWh) are within the range suggested in previous studies. In previous studies, NOx was in the overall range of 2.06 to 10.6 g/kWh, which is much higher than the 0.171 to 0.423 g/kWh analyzed in this study. This is presumed to be because the tractor used in this study was equipped with an SCR, which reduced NOx emissions. In the case of PM, previous studies report a range of 0.007-0.89 g/kWh, whereas this study found a range of 0.252-0.494 g/kWh. The wide range of the PM emission factor is believed to arise because the load differs depending on the various tasks performed by the tractor: it was as low as 0.007 g/kWh during low-load transport work under on-road conditions and as high as 0.89 g/kWh during high-load work such as cultivation under off-road conditions. Thus, this can be considered a reasonable difference considering the irregular variability of field work. As a result, the reasonableness of the actual-operation-based emission factor derived in this study was evaluated by comparison with previous studies.

Conclusions

In this study, a method is provided for measuring the LFs and tractor exhaust emissions during actual tillage operations using a PEMS and for calculating the emission factors based on various evaluation methods. A comparison of the measured emission factors with the Tier-4 emission standard is also included in the proposed method.
The tractor emission measurement system was built using a PEMS and GPS to measure the exhaust gas, and the ECU data were collected through CAN communication to record information on the engine operation. Data were collected from plow tillage and rotary tillage operations in a paddy field in Korea, wherein the tractor engine characteristics (torque and rotational speed) were significantly different under each working condition. This had a direct effect on the engine LF characteristics and caused the LF calculated in this study to differ significantly from the currently applied value of 0.48. Additionally, the engine LFs for the tillage and steering sections were mapped onto the engine curve for each operation to assist in determining the statistical descriptions of the engine characteristics and exhaust emissions. Based on the results, the exhaust emissions fluctuated significantly according to the characteristics of the working condition, but did not exhibit a linear, immediate response to changes in the engine characteristics. Moreover, the measured emission factor was compared with the emission limit and a numerical value was obtained. The measured value was higher than the emission limit under most working conditions.

Figure 1. Measurement tractor layout equipped with sensor system. Engine = engine properties (torque, speed, and power) and fuel consumption; GPS = travel speed; PEMS = CO, NOx, PM, and exhaust flow rate.
Figure 2. Portable emissions measurement system attached to the front part of the tractor.
Table 3. Emission factor of agricultural machinery according to air pollutant based on the regulation stages. (a) The emission factor depends on the engine power, and the above values are based on a 67-kW engine.
Figure 3. Engine profile of the tractor according to the plow tillage operation.
Figure 4. Engine profile of the tractor according to the rotary tillage operation.
Figure 6. Results of exhaust emission for the tractor engine (left: plow tillage, right: rotary tillage).
Figure 7. Results of analysis of emission factors for tractor engines and comparison with emission regulation stage (left: CO, middle: NOx, right: PM).
Figure 8. Comparison of deviation ratios between emission factors for each condition derived from this study and regulations.
Table 1. Specification of the PEMS for measuring exhaust emissions of the tractor.
Table 2. Specifications of the implements used for tillage operations.
Table 4. Statistical description of engine profile according to the plow tillage.
Table 5. Statistical description of engine profile according to the rotary tillage.
Table 6. Statistical description of exhaust emissions (CO, NOx, and PM) for tractors based on plow tillage.
Table 7. Statistical description of exhaust emissions (CO, NOx, and PM) of tractors based on rotary tillage.
Table 9. Comparison of measured emission factor for agricultural machinery in actual working condition.
2024-02-14T06:18:32.177Z
2024-02-12T00:00:00.000
{ "year": 2024, "sha1": "fd2566cee76e7fa216db1f0fd3845aaad1a6dd4b", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-024-53995-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2ae011f711c7689d7ba69df7b92c183244136fa3", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
224879092
pes2o/s2orc
v3-fos-license
The equivalent dynamic stiffness of a visco-elastic half-space in interaction with a periodically supported beam under a moving load A periodically supported beam on a visco-elastic half-space is considered to model the vibration of railway tracks. The viscosity of the half-space is assumed to be of the Kelvin-Voigt type. Making use of the concept of equivalent dynamic stiffness, the reaction of the half-space to the sleepers is replaced by a system of identical spring located under each sleeper. The frequency-dependent equivalent stiffness of the springs is a function of the phase shift of vibrations of neighbouring supports. The equivalent stiffness is derived analytically employing the contour integration technique, resulting in a comprehensive expression for different phase velocities of the waves in the beam with respect to the wave speeds of the half-space. Apart from the Rayleigh type surface wave (quasi-elastic wave), an extra visco-elastic surface wave may exist in a visco-elastic half-space depending on the parameters of the half-space and the frequency range. The existence of this second surface wave has not been addressed within the field of train-induced ground vibration. The importance of this wave for the equivalent stiffness is analysed. An effective method to determine the frequency range for the visco-elastic surface wave to exist is proposed. Introduction With increasing computational power, in recent years numerical methods such as FEM, BEM and hybrid methods are commonly used in modelling train-induced ground vibrations (Hall, 2003;Sheng et al., 2006;Degrande et al., 2006;Yang and Hung, 2009;Galvín et al., 2010;Triepaischajonsak and Thompson, 2015).Analytical methods however retain their significance since they are apt to reveal the underlying mechanisms of the generation of ground vibrations caused by moving trains.Various analytical/semi-analytical models for ground vibration induced by moving trains on open tracks (Sheng et al., 1999;Karlstr€ om and Bostr€ om, 2006) and in tunnels (Forrest and Hunt, 2006;Metrikine and Vrouwenvelder, 2000;Yuan et al., 2015;Di et al., 2018;Zhou et al., 2020) can be found in the literature.Generally, the analytical/semi-analytical models have high calculation efficiency with reasonable accuracy.A disadvantage however is their assumption of linearity.A comprehensive review can be found in (Lombaert et al., 2015) which covers various prediction methods and mitigation measures for train-induced ground vibration. 
The train-induced ground vibration is essentially a threedimensional problem.Dieterman and Metrikine (1996) introduced the concept of the "equivalent stiffness" to characterize the interaction between the track and an elastic half-space.The track was modelled as an infinitely long Euler-Bernoulli beam whereas the half-space represents the subsoil.The equivalent stiffness was derived analytically for different phase velocities of waves in the beam with respect to the wave speeds in the soil using a contour integration procedure.By replacing the ground reaction by an equivalent foundation with frequency dependent stiffness, the three-dimensional coupled soil-track model was transformed to an equivalent one-dimensional description.Kononov and Wolfert (2000) reconsidered the beam on half-space model in which the viscosity of the half-space is addressed.A different choice of branch cuts, namely the EJP branch cuts (naming after the authors of (Ewing et al., 1957)) was chosen such that a uniform expression for the equivalent stiffness was obtained regardless of the velocity range.In (Metrikine and Popp, 1999), the vibration of a periodically supported beam on an elastic half-space was investigated.This model is a more realistic description of ballasted railway track due to the inclusion of discretely located supports (railpad, sleeper).The expression of the equivalent stiffness of the elastic half-space under each support was derived.It was concluded that the Rayleigh wave velocity is a critical speed besides the ones caused by the periodical nature of the system.Vostroukhov and Metrikine (2003) extended this work to the case of a periodically supported beam on a visco-elastic layer.Utilizing the equivalent 1D model, the response of the track and the ground can be readily obtained for railway tracks on a half-space (Dieterman and Metrikine, 1997;Chen and Wang, 2006;Steenbergen and Metrikine, 2007) or a layer of soil (Metrikine and Popp, 2000;Vostroukhov and Metrikine, 2003).The equivalent stiffness of a saturated poro-elastic half-space interacting with an infinite beam to a moving load was studied numerically in (Xia et al., 2009).More recently, Sun et al. (2018) investigated the equivalent stiffness for the same case analytically.The steady-state displacements of an Euler-Bernoulli beam resting on a poroelastic half-space subjected to a moving constant load were investigated using the equivalent stiffness in (Shi and Selvadurai, 2016). 
In the above-mentioned references in which the concept of the equivalent stiffness is utilized, such stiffness has not been derived for a visco-elastic half-space coupled to a periodically supported beam. The first aim of this paper is to deduce analytically the expression of the equivalent stiffness for this case by means of contour integration. Similar to the approach in (Kononov and Wolfert, 2000), the EJP type of branch cuts (Ewing et al., 1957) is chosen to obtain a uniform expression for the entire velocity range. As an additional novelty, it is established that an extra visco-elastic surface wave may exist in a visco-elastic half-space besides the Rayleigh type surface wave, as described in the literature (Currie et al., 1977; Currie and O'Leary, 1978; Carcione, 1992; Romeo, 2001). For typical properties of the subsoil underlying railways, it is found that the second surface wave may exist in a certain frequency range. The contribution of this wave to the equivalent stiffness, and consequently to the dynamic response of the system, cannot be ignored. This issue has not been addressed in the literature dedicated to railway-induced ground vibration. This work also proposes an effective method to determine the frequency range in which the visco-elastic wave exists.

The paper is structured as follows. Section 2 presents the model and the derivation of the equivalent stiffness of a visco-elastic half-space in interaction with a periodically supported beam. In Section 3 the equivalent stiffness is evaluated analytically making use of the contour integration. The results are validated by comparison with direct numerical integration as well as with the corresponding elastic half-space case. The importance of the second surface wave is addressed explicitly in Section 4. In Section 5, the frequency range in which the second surface wave exists is analysed. An effective way to determine this frequency range is proposed regardless of the system parameters. The influences of both viscosity and Poisson's ratio on this frequency range are analysed. Section 6 summarises the conclusions of this paper.

Model and equivalent stiffness

Fig. 1 shows the model adopted to study the steady-state vibrations of a railway track. Two infinitely long Euler-Bernoulli beams (rails) are supported by equi-distantly distributed supports (sleepers) resting on a half-space (subsoil) consisting of a homogeneous, isotropic visco-elastic material. This work adopts the Kelvin-Voigt model to describe the soil behaviour. Although other models such as the hysteretic damping model (Verruijt and Córdova, 2001) may be more appropriate in this context, this description is chosen in line with previous work on the equivalent dynamic stiffness (Metrikine and Popp, 2000; Kononov and Wolfert, 2000; Vostroukhov and Metrikine, 2003; Steenbergen and Metrikine, 2007), allowing for a direct comparison. The method itself, however, allows for a similar study on the basis of other constitutive damping models. For the system parameters, ν is the Poisson's ratio and ρ is the density of the half-space; λ and μ are the Lamé constants. Each support consists of a rigid sleeper and a railpad, which is modelled as a spring-dashpot element. Each sleeper occupies a rectangular contact area 2a × 2b, as shown in Fig. 1.
The distance between the centerlines of two neighbouring sleepers is denoted as d. A harmonic load P(t) = P₀ exp(iΩt) (i = √−1) moves uniformly at a speed V on the track. Considering the symmetry of the loading with respect to the centerline y = 0 of the track, only one equation of motion for one beam is presented. The coordinate system is shown in Fig. 1 as well.

The governing equations of motion of the coupled system can be written as follows. The equation of motion for a visco-elastic half-space takes the form of Eq. (1), where u(x, y, z, t) = [u(x, y, z, t), v(x, y, z, t), w(x, y, z, t)]^T is the displacement vector. To account for the viscosity according to the Voigt phenomenological model, the Lamé constants λ and μ of the elastic case are replaced by λ̂ = λ + λ* ∂/∂t and μ̂ = μ + μ* ∂/∂t in the governing equation (1) of the soil, respectively. To solve Eq. (1), the Helmholtz decomposition can be used. However, based on the assumption of zero shear stress at the soil-sleeper interface that is adopted in this paper, two so-called stress functions ϕ(x, y, z, t) and ψ(x, y, z, t) can be employed to decouple the original three-dimensional wave equations, as shown in (Lamb, 1904; Dieterman and Metrikine, 1996; Vostroukhov, 2002). Hence, the governing equations for the visco-elastic half-space become Eq. (2), where c_L = √((λ + 2μ)/ρ) is the speed of the compressional wave (P-wave) and c_T = √(μ/ρ) the speed of the shear wave (S-wave). The viscous constants in Eq. (2) are defined analogously from λ* and μ*. The displacements of the half-space are expressed in terms of the stress functions as in Eq. (3) (Lamb, 1904; Dieterman and Metrikine, 1996; Vostroukhov, 2002).

The equation that governs the vertical motion of the beam is given in (Metrikine and Popp, 1999), where m is the mass of the beam per unit length, EI is the bending stiffness, K is the stiffness of the railpad, and W₀ and W_s^n are the vertical displacements of the beam and the nth sleeper, respectively. Displacement compatibility along the centre-line y = 0 is assumed between the sleepers and the half-space for the vertical motion at the interface z = 0 (Metrikine and Popp, 1999).

Eqs. (2)-(7) complete the mathematical description of the problem. The technique of integral transformation is used to transform the problem statement to the wave-number and frequency domain; the integral Fourier transforms of Eq. (8) are adopted. All the governing equations, boundary and interface conditions, Eqs. (2)-(7), are then transformed to the frequency and wave-number domain using Eq. (8). One is referred to (Metrikine and Popp, 1999) for the detailed derivation of the expression of the equivalent stiffness, since the same procedure is used here. It is worth mentioning that the general solutions of Eq. (2) after applying the Fourier transforms (Eq. (8)) are assumed to be of the form of Eq. (9), accounting for the proper behaviour for large positive values of z. In Eq. (9), R_{L,T} denote the radicals R_{L,T} = √(k₁² + k₂² − ω²/c̃²_{L,T}), whose branch is discussed below. The expression of the equivalent stiffness from the half-space to the support can be written as Eq. (12) (Metrikine and Popp, 1999), with the accompanying definitions given in Eq. (13). In Eq. (12), μ̄ = μ − iωμ*, and q(ω) is the phase shift of the vibrations of two neighbouring supports, as given in (Metrikine and Popp, 1999; Vostroukhov and Metrikine, 2003). Hence, the equivalent stiffness (Eq. (12)) of the springs under the supports is established.
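As a small numerical illustration of the Kelvin-Voigt description used above, the complex wave speeds that replace c_L and c_T in the visco-elastic case can be evaluated as in the following Python sketch; the soil parameters are placeholders and not the values of Eq. (33).

```python
import numpy as np

# Illustrative soil parameters (placeholders, not the paper's Eq. (33)).
rho = 1700.0           # density [kg/m^3]
c_T = 100.0            # elastic shear wave speed [m/s]
nu = 0.3               # Poisson's ratio
kappa = 1e-4           # Kelvin-Voigt ratio mu*/mu = lambda*/lambda [s]

mu = rho * c_T**2                        # shear modulus
lam = 2.0 * mu * nu / (1.0 - 2.0 * nu)   # first Lame constant
c_L = np.sqrt((lam + 2.0 * mu) / rho)    # elastic P-wave speed

def complex_speeds(omega):
    """Complex wave speeds of a Kelvin-Voigt half-space, using the sign
    convention of the text (mu_bar = mu - i*omega*mu*, idem for lambda)."""
    mu_bar = mu * (1.0 - 1j * omega * kappa)
    lam_bar = lam * (1.0 - 1j * omega * kappa)
    return np.sqrt((lam_bar + 2.0 * mu_bar) / rho), np.sqrt(mu_bar / rho)

for f in (10.0, 50.0, 200.0):            # frequencies in Hz
    w = 2.0 * np.pi * f
    cL_bar, cT_bar = complex_speeds(w)
    print(f"f = {f:6.1f} Hz  c_L~ = {cL_bar:.2f}  c_T~ = {cT_bar:.2f}")
```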
Note that Eq. (12) is the same as Eq. (20) in (Metrikine and Popp, 1999); however, the viscosity is included by replacing c_T² and μ of Eq. (20) in (Metrikine and Popp, 1999) with their complex counterparts c̃_T² and μ̄, respectively.

Evaluation of the equivalent stiffness

To evaluate the equivalent stiffness, the denominator of Eq. (12) must be computed. This denominator can be rewritten as Eq. (15). The auxiliary integral with respect to k₁ given in Eq. (16) is introduced (Metrikine and Popp, 1999), where r is a real value. The introduction of "1" in the numerator accounts for the contribution of the pole k₁⁰ = 0; it does not influence the result of the integration, since ∫₋∞^{+∞} {R_L/(k₁Δ)} dk₁ = 0 (Metrikine and Popp, 1999). Using the auxiliary integral, Eq. (15) can be written as Eq. (17). Denoting the summation in the integrand of Eq. (17) as S and evaluating it yields Eq. (18), where the terms Z(b), Z(nd + b) and Z(nd − b) can be evaluated after obtaining an analytical expression for the auxiliary integral, Eq. (16).

Branch points and branch cuts

Eq. (16) is evaluated using contour integration. The EJP branch cuts (Ewing et al., 1957) are used here. Firstly, the branch points are specified from the radicals R_{L,T}(k₁). It is assumed that k₁ > 0. The branch cuts can be chosen such that Re(R_{L,T}) > 0 everywhere on the path of integration, in accordance with the assumed solution in Eq. (9). To meet these conditions, the cuts should satisfy Eqs. (19) and (20). The branch cuts are governed by parametric equations; substituting k₁^{L,T} into Eq. (20), one obtains Eq. (22). Note that α_{L,T} and t_{L,T} in Eq. (19) are functions of k₂ and ω; the same holds for the corresponding quantities in Eq. (22).

Poles

For a homogeneous, isotropic elastic half-space, it is well known that there is one and only one root of the secular equation Δ = 0 (Achenbach, 1975). This means that the integrand of Eq. (15) has only one pole, which represents the Rayleigh surface wave. However, in a half-space made of a homogeneous, isotropic, linearly visco-elastic material, more than one surface wave may exist. It is found that two roots (representing two surface waves) of the secular equation (13), which satisfy the traction-free boundary condition and the radiation condition, may exist, depending on the material properties and the frequency (Currie et al., 1977; Currie and O'Leary, 1978). Fig. 2(a) shows an integration contour with only the Rayleigh type pole. For the parameters used in Fig. 2(b), two poles, related to two surface waves, exist. The surface wave whose characteristics are close to the classical Rayleigh wave of the corresponding elastic body is termed the "quasi-elastic wave" (pole k₁^qe in Fig. 2(b)), whereas the other surface wave is called a visco-elastic surface wave (pole k₁^ve in Fig. 2(b)) in (Currie et al., 1977; Currie and O'Leary, 1978). The complete contour integration in Fig. 2(b) must include the contributions of both poles k₁^qe and k₁^ve.
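The poles discussed above can also be located numerically. The following Python sketch searches for complex roots of the classical Rayleigh secular function with complex wave speeds substituted, as a simplified stand-in for Δ of Eq. (13) (the full expression additionally depends on k₂); candidate roots must still be checked against the Re(R_{L,T}) > 0 branch choice, and all numerical values are placeholders.

```python
import numpy as np
from scipy.optimize import root

def secular(k, omega, cL, cT):
    """Classical Rayleigh secular function of a half-space (principal square
    roots); cL, cT may be complex (visco-elastic case)."""
    RL = np.sqrt(k**2 - (omega / cL) ** 2)
    RT = np.sqrt(k**2 - (omega / cT) ** 2)
    return (2.0 * k**2 - (omega / cT) ** 2) ** 2 - 4.0 * k**2 * RL * RT

def find_pole(k_guess, omega, cL, cT):
    """Newton-type search for a complex root, with k packed as (Re, Im)."""
    def wrapped(x):
        val = secular(x[0] + 1j * x[1], omega, cL, cT)
        return [val.real, val.imag]
    sol = root(wrapped, [k_guess.real, k_guess.imag], tol=1e-12)
    return sol.x[0] + 1j * sol.x[1] if sol.success else None

omega = 2.0 * np.pi * 40.0                  # 40 Hz, placeholder
cL, cT = 187.1 - 2.3j, 100.0 - 1.2j         # placeholder complex wave speeds
# start near the elastic Rayleigh wavenumber (~ omega / (0.93 c_T)) and scan
for guess in (omega / (0.93 * cT), omega / cT + 0.5j, omega / cL + 1.0j):
    pole = find_pole(guess, omega, cL, cT)
    if pole is not None:
        print("candidate pole k1 =", pole)
```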
Analytical expression of the equivalent stiffness

The branch cuts, branch points, poles and integration contours are shown in Fig. 2. Since the contributions of the integration along the big semicircular contour C₀ and along the circular contours around the branch points are zero (Kononov and Wolfert, 2000), Eq. (16) can be solved according to the residue theorem, yielding Eq. (24), where k_{pN} are the poles derived from Eq. (13). Substituting Eq. (24) into Eq. (18), the summation in Eq. (18) can be elaborated in the same manner as shown in (Metrikine and Popp, 1999). The contribution of the poles can be written in closed form; one expression applies if only the quasi-elastic wave exists, as illustrated in Fig. 2(a), and another when the visco-elastic pole is present as well. The contribution of the branch cuts can be written analogously; taking C₁ as an example, the corresponding term involves k₁^T, given by Eq. (22), and t_T, given in Eq. (19). S_{C2} through S_{C4} can be obtained in a similar way. Combining these contributions gives the denominator of Eq. (30), and the equivalent stiffness is the reciprocal of Eq. (30), namely Eq. (31). Eq. (12) can be reduced to the case of a beam on a visco-elastic half-space, which was investigated in (Kononov and Wolfert, 2000); in this case the equivalent stiffness becomes Eq. (32), in which a_beam is the width of the beam on the half-space. The above equation can be evaluated using the same contour integration technique presented to evaluate Eq. (12) previously. Eq. (32) is the same as that obtained in (Kononov and Wolfert, 2000); however, the contribution of the possible extra pole (the visco-elastic surface wave) is not discussed in (Kononov and Wolfert, 2000).

Verification of the solution and the contribution of the visco-elastic surface wave

In this section, the derived expression is verified by comparison with both the analytical expression for the elastic half-space case and the results from direct numerical integration. The importance of the contribution of the visco-elastic wave is addressed. The soil parameters of Eq. (33), adopted from (Vostroukhov and Metrikine, 2003), are used for the numerical evaluations. For simplicity, μ*/μ = λ*/λ = κ is assumed hereafter; however, the above-obtained expressions for χ_{h-s,s} and χ_{h-s,b} are also valid for the case μ*/μ ≠ λ*/λ.

Half-space with relatively small viscosity

To show the validity of the expressions obtained in this paper for a visco-elastic half-space, the results are compared to those of the elastic half-space case. Firstly, the case of a half-space interacting directly (without the supports) with a beam is considered. The width of the beam is assumed to be 3.2 m, namely a_beam = 3.2 in Eq. (32). In Fig. 3 the result computed from Eq. (32) with a small viscosity (κ = 1 × 10⁻⁷ s) is compared to that from (Dieterman and Metrikine, 1996) for an elastic half-space. The parameter v_ph is the phase velocity in the x direction and is defined as v_ph = ω/k₁. It can be seen that the results agree with each other perfectly.

In Fig. 4 the case of a periodically supported beam (Fig. 1) is considered. The soil parameters are according to Eq. (33), whereas the geometry of the sleepers is defined according to (Vostroukhov and Metrikine, 2003). In Fig. 4(a) the equivalent stiffness of a visco-elastic half-space for a constant phase shift q = 1.0 is calculated considering a small viscosity (κ = 1 × 10⁻⁷ s) using Eq. (31). For frequencies smaller than ω = q c_R/d, the imaginary part of the equivalent stiffness is zero because no waves are generated (Metrikine and Popp, 1999). At the frequencies where the equivalent stiffness equals zero, the frequency satisfies ωd/c_R = |q(ω) + 2πn|, where n is an integer (Metrikine and Popp, 1999). The result is compared with Fig. 4(b), in which the equivalent stiffness of an elastic half-space is evaluated based on the expression obtained in (Metrikine and Popp, 1999) using the same parameters of the soil and sleepers. Once again there is a perfect match between the slightly viscous half-space and the elastic half-space cases. The comparisons in Figs. 3 and 4 confirm the validity of the expressions obtained in this work for the equivalent stiffness.
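The onset frequency ω = q c_R/d mentioned above requires the Rayleigh wave speed of the corresponding elastic half-space; a minimal Python sketch for obtaining c_R from the classical Rayleigh equation is given below, with placeholder parameter values.

```python
import numpy as np
from scipy.optimize import brentq

def rayleigh_speed(c_T, nu):
    """Rayleigh wave speed of an elastic half-space from the classical
    Rayleigh equation; the physical root lies between 0 and c_T."""
    c_L = c_T * np.sqrt(2.0 * (1.0 - nu) / (1.0 - 2.0 * nu))
    def f(c):
        return ((2.0 - (c / c_T) ** 2) ** 2
                - 4.0 * np.sqrt(1.0 - (c / c_L) ** 2) * np.sqrt(1.0 - (c / c_T) ** 2))
    # f < 0 just above c = 0 and f > 0 just below c = c_T, so brentq brackets c_R
    return brentq(f, 0.1 * c_T, c_T * (1.0 - 1e-9))

c_T, nu, d = 100.0, 0.3, 0.6      # placeholder shear speed, Poisson ratio, sleeper spacing
c_R = rayleigh_speed(c_T, nu)
q = 1.0                            # constant phase shift, as in Fig. 4(a)
print(f"c_R = {c_R:.2f} m/s, first radiation onset at omega = {q * c_R / d:.2f} rad/s")
```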
Half-space with relatively large viscosity When a relatively large viscosity is considered for the half-space, an extra root of the secular equation Δ ¼ 0 may appear for certain parameters, meaning an extra pole of Eq. ( 15).Physically this implies the existence of an extra visco-elastic surface wave in the half-space caused by the viscosity.Some discussions on the existence of an extra wave can Beam on a visco-elastic half-space In Fig. 5(a) the equivalent stiffness obtained from contour integration with only the quasi-elastic wave (the conventional Rayleigh type) considered is compared to that obtained from direct numerical integration for relatively large viscosity.It is found that at a certain value of β cr T , a discontinuity of the stiffness occurs.This discontinuity indicates that a visco-elastic wave appears after β cr T .Using Cauchy's argument principle (Ying and Katz, 1988), two roots (equivalently two poles of the integrand in Eq. ( 32)) can be obtained from the secular equation ( 13) for β T > β cr T .Results from contour integration with both poles included for β T > β cr T and from direct numerical integration of the expression are shown to match completely in Fig. 5(b). Periodically supported beam on a visco-elastic half-space In Fig. 6(a), the equivalent stiffness of a visco-elastic half-space to a periodically supported beam is computed with and without the extra pole which appears after a critical frequency ω 1 , for damping ratio κ ¼ 1 � 10 À 4 s.Similar to the case of a continuous beam on a half-space shown in Fig. 5, the omission of the extra wave leads to discontinuity of the equivalent stiffness and erroneous results after the critical frequency ω 1 .However, when damping is increased, there may be an upper limit-frequency for two poles as shown in Fig. 6(b).In Fig. 6(b), the absolute values of the equivalent stiffnesses are shown using a logarithmic scale for κ ¼ 1 � 10 À 3 s.Two waves can be observed to exist in the frequency range ω 1 < ω < ω 2 . Figs. 5 and 6 demonstrate the importance of taking into account the extra visco-elastic wave for the evaluation of the equivalent stiffness and eventually the prediction of the dynamic response to a stationary/ moving load.The contribution of this second surface wave can be relatively significant with respect to the first type as found in (O' Leary, 1988;O'Leary, 1989) where the forced vibration of a semi-infinite viscoelastic medium due to an oscillating load applied at the free surface is investigated. Advantages of the proposed analytical method The proposed analytical method provides an exact expression for evaluating the equivalent stiffness.It has been shown in Fig. ( 5) that the prediction on the basis of the analytical method matches that from numerical integration for a beam on half-space.The agreement of the results confirms the correctness of the contour integration procedure presented in this paper.For the case of a periodically supported beam on a visco-elastic half-space, direct numerical integration requires a truncation of the number of supports.A relatively large number of sleepers is required to obtain a convergent result.Furthermore, the number of supports needed is different for different frequencies.In summary, on one hand, the proposed analytical method gives an exact solution without any truncation.On the other hand, it is far more computationally efficient than the direct numerical integration. 
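The root counting via Cauchy's argument principle mentioned above can be implemented numerically as a winding-number computation along a closed contour; the following Python sketch illustrates the idea on a placeholder analytic function (for the actual Δ the contour must avoid the branch cuts).

```python
import numpy as np

def count_zeros(f, re_lim, im_lim, n=4000):
    """Count zeros of an analytic function inside a rectangle via the argument
    principle: the winding number of f along the counter-clockwise boundary."""
    r0, r1 = re_lim
    i0, i1 = im_lim
    edges = [
        np.linspace(r0 + 1j * i0, r1 + 1j * i0, n),
        np.linspace(r1 + 1j * i0, r1 + 1j * i1, n),
        np.linspace(r1 + 1j * i1, r0 + 1j * i1, n),
        np.linspace(r0 + 1j * i1, r0 + 1j * i0, n),
    ]
    w = f(np.concatenate(edges))
    dphi = np.diff(np.unwrap(np.angle(w)))
    return int(round(dphi.sum() / (2.0 * np.pi)))

# Placeholder analytic function with zeros at 2+1j and 3+2j (stand-in for Delta(k1)).
f = lambda z: (z - (2.0 + 1.0j)) * (z - (3.0 + 2.0j))
print(count_zeros(f, (0.0, 5.0), (0.0, 5.0)))   # expected: 2
```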
The frequency range for two surface waves to exist in a viscoelastic half-space In (Currie et al., 1977;Carcione, 1992) it is stated that the second surface wave exists for certain values of the material parameters as well as a given range of frequencies.In this section a way is presented to determine such frequency range for two waves to exist for specific parameters.Before proceeding, one important observation can be made from Fig. 6.To obtain the equivalent stiffness from Eq. ( 12), integration of k 2 should be carried out.The poles shown in Fig. 2 depend on both k 2 and the frequency ω.For different k 2 , the poles are different.One may expect that for different k 2 , a different critical frequency may exist which corresponds to appearance of the second surface wave.However, from Fig. 6 it is shown that there is one discontinuity, thus one critical frequency for all the k 2 .To examine the dependence of the critical frequency on the wavenumber k 2 , the critical frequency ω 1 is calculated for various damping values and k 2 shown in Table 1.Since the convergence over k 2 is relatively fast, the calculation is only performed till k 2 ¼ 30.It can be confirmed that the critical frequency is independent of k 2 .Interestingly, the change of the critical frequency is proportional to that of the damping ratio.For typical values of soil damping, the frequency beyond which the extra visco-elastic surface wave exists may range from approximately 10 to 1000 Hz, as shown in Table 1.The extra surface wave therefore may be also of practical relevance, since this frequency range overlaps the typical frequency interval in which ground-borne vibration (1-80 Hz) and ground-borne noise (16-250 Hz) are of importance. Determination of critical frequencies governing the number of surface wave It is always possible to determine the critical frequencies by examining the positions of the discontinuities of the equivalent stiffness calculated including one pole (the quasi-elastic wave).Hereafter a systematic way is presented of the determination of the critical frequencies and therefore the frequency range in which two waves exist.First, the roots of Eq. ( 13) are analysed.Since the critical frequency is independent of k 2 , hereafter k 2 ¼ 0:1 is chosen for the following calculations and the parameters of the half-space are given by Eq. (33).To solve for the roots of Eq. ( 13), normally Δ ¼ 0 is rewritten to and rationalized into a cubic equation with respect to k 2 1 by squaring both the left and right sides of Eq. ( 35).Using the Cardano's formula, three roots are obtained for k 2 1 of Eq. ( 35).Taking the square root of k 2 1 , at least two roots are found in the first quadrant of the complex k 1 plane.It needs to be examined which ones of those are the admissible roots which satisfy the traction-free boundary condition and the radiation condition altogether. In Fig. 
7 the branch cuts, the branch points and the roots of the secular equation Δ ¼ 0 derived using the Cardano's formula in the first quadrant of the complex k 1 plane are plotted for a frequency ω < ω 1 ¼ 12:07 (the ω < ω 1 surface) and another frequency ω 1 ¼ 12:07 < ω < ω 2 ¼ 517:80 (the ω 1 < ω < ω 2 surface).It can be seen that on the ω < ω 1 surface, there is only one pole representing the quasi-elastic wave.A spurious pole is located on the right-hand side of the branch cut for shear wave.However, on the surface on which ω 1 < ω < ω 2 , the pole of the quasi-elastic wave is still present, whereas the spurious pole crosses the branch cut for the shear wave and is now located in between the two branch cuts.The spurious pole becomes a pole for ω 1 < ω < ω 2 .To track this extra pole with increasing frequency, Fig. 8 shows the branch cuts, the branch points and the poles for a frequency on the ω 1 < ω < ω 2 surface and a frequency on the ω 2 < ω surface.Consistent with Fig. 6(a), a pole related to a visco-elastic surface wave exists in between the branch cuts for the P and S waves for ω 1 < ω < ω 2 .In contrast, when ω 2 < ω, the visco-elastic pole crosses the branch cut for the P wave and becomes a spurious pole again.Therefore, it can be assumed that the first critical frequency ω 1 is the frequency at which the pole for the visco-elastic wave is located on the branch cut for the S wave whereas the second critical frequency ω 2 is the one at which the pole for the visco-elastic wave is on the branch cut for the P wave.This observation is confirmed by investigating the branch line integrals S C2 and S C4 in Eq. ( 29).The branch integral S C2 at ω 1 has a discontinuity which indicates a pole on the path of integration C 2 .Therefore, this pole and ω 1 satisfies After substituting Eqs.22 and 23 into Eq.( 36), ω 1 and the pole itself can be obtained for a specific value of k 2 . On the other hand, the extra pole may disappear after another frequency ω 2 , and the critical point of disappearance is when the extra pole k p 1 is located on the branch C 4 .Therefore, ω 2 and the pole satisfy Eqs. ( 36) and (37) together determine the frequency range in which two surface waves exist for visco-elastic half-space of the Kelvin-Voigt type. Dependence of critical frequencies on Poisson's ratio and damping It is of interest to investigate the dependence of the critical frequencies on the Poisson's ratio.A threshold of the Poisson's ratio ν � ¼ 0:2631 is given in (O' Leary, 1981) and it is concluded that for all ν < ν � there is one and only one surface wave and for all ν > ν � there may be more than one surface wave for certain parameter combinations and frequency range.Therefore, critical frequencies, i.e. the boundaries which determine the number of poles (waves) are plotted versus the Poisson's ratio in Fig. 9 starting from ν ¼ 0:27.It can be concluded that the critical frequency is increasing with higher Poisson's ratio.In Fig. 9 (a) only ω 1 is plotted.The reason is that for relatively small viscosity, the second critical frequency ω 2 is large.For example, for ν ¼ 0:27, the second critical frequency ω 2 is about 5178 Hz.The second critical frequency ω 2 is even higher for ν > 0:27 which is of no interest for train-induced ground vibration and furthermore the Euler-Bernoulli description of the rail is no longer valid for such high frequencies.In Fig. 
9(b) both ω 1 and ω 2 are plotted for a higher damping ratio.It is found that both the two critical frequencies become larger for increasing Poisson's ratio.However, ω 2 increases much faster than ω 1 .By comparing Fig. 9(a) and (b), it can be seen that larger damping ratio lowers the critical frequencies. Conclusions In this paper the equivalent stiffness of a visco-elastic half-space to a periodically supported beam under a moving load, as a model for traintrack interaction, is investigated.A uniform expression is obtained for the entire velocity range of the moving load regardless of the ratio between the load speed and the wave speeds of the half-space.The equivalent stiffness is evaluated analytically by means of the contour integration method and residue theorem.It is found that, apart from the Rayleigh type surface wave, a second surface wave exists in a certain frequency range due to the viscosity of the half-space for typical parameters of the subsoil.The contribution of this second surface wave cannot be ignored.An effective method to determine the frequency range in which the visco-elastic wave exists is proposed.It is concluded that the critical frequencies for the occurrence of multiple surface waves are the ones at which a second pole of the integrand of the equivalent stiffness is located on one of the EJP branch cuts.The dependences of the related frequency range on the viscosity and Poisson's ratio are investigated.The critical frequencies increase with larger Possion's ratio whereas they decrease for highly viscous materials. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. T .Lu et al. Fig. 4 . Fig. 4. (a)Equivalent stiffness of a visco-elastic half-space to periodically supported beam with small viscosity using Eq.(31), (b) Equivalent stiffness of an elastic half-space to periodically supported beam according to(Metrikine and Popp, 1999). Fig. 3 . Fig. 3. Equivalent stiffness of a beam directly on a half-space. Fig. 5 .Fig. 6 . Fig. 5. Comparison of equivalent stiffness obtained using contour integration and numerical integration for a beam directly on a visco-elastic half-space for κ ¼ 1� 10 À 2 s: (a) Contour integration without the extra pole, (b) Contour integration with the extra pole after β T > β cr T .
2020-07-09T09:03:48.078Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "272efd865b7c41027b579b3ecaaa6badf4626de6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.euromechsol.2020.104065", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "2d08794022c949ce1deb09123e84fa1910f6c6aa", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259137654
pes2o/s2orc
v3-fos-license
Weakly supervised information extraction from inscrutable handwritten document images State-of-the-art information extraction methods are limited by OCR errors. They work well for printed text in form-like documents, but unstructured, handwritten documents still remain a challenge. Adapting existing models to domain-specific training data is quite expensive, because of two factors, 1) limited availability of the domain-specific documents (such as handwritten prescriptions, lab notes, etc.), and 2) annotations become even more challenging as one needs domain-specific knowledge to decode inscrutable handwritten document images. In this work, we focus on the complex problem of extracting medicine names from handwritten prescriptions using only weakly labeled data. The data consists of images along with the list of medicine names in it, but not their location in the image. We solve the problem by first identifying the regions of interest, i.e., medicine lines from just weak labels and then injecting a domain-specific medicine language model learned using only synthetically generated data. Compared to off-the-shelf state-of-the-art methods, our approach performs>2.5x better in medicine names extraction from prescriptions. Introduction Optical character recognition (OCR) enables the translation of any image containing text into analyzable, editable and searchable format. Over the last decade, many large scale models [10,18,26] and sophisticated techniques [4,5,29] have been developed with neural network based architectures for OCR. These systems are not only limited to printed text but also work quite well on handwritten text, as they are trained on large amount of labeled as well as synthetic handwritten data. In the past, there have also been works around developing domain specific OCR models [6,21,41]. Most of these works develop these models for generic text lines [20,31], and require meticulously labeled data for learning. In this work, we primarily focus on how we can improve the quality of existing OCR models on very hard to read, unstructured documents for specific entities of interest, with an application in handwritten medical prescriptions. In many countries, prescriptions are primarily delivered to patients in handwritten formats by doctors. A few billion prescriptions are generated every year Fig. 1: Samples representative images from the prescription dataset used in this work. As we can see the handwriting is often inscrutable and does not follow any specific structure or format. The task we focus in this paper is to extract medicine names from such images. world-wide [19]. Digitizing them would unlock numerous applications for many stakeholders and use cases in the healthcare ecosystem like e-pharmacies, insurance companies, creating electronic health records necessary for preventive healthcare, better diagnosis, analysis at a local and global level for policy making and so on. However, most of such documents, as shown in Figure 1 are often hard to read for non-pharmacists [33]. Even pharmacists go through months/years of training to decipher such prescriptions. Existing state-of-the-art OCR models though trained on large amount of data, do not perform well on such inscrutable documents. Procuring large domain-specific datasets is not a cost-effective or scalable solution, as it involves annotation that too from domain experts which can become quite expensive. 
Although there have been some works [1,15,34] in extracting information from handwritten prescriptions, the algorithms are not generalizable, heavily hand-tuned and lack rigorous evaluations. With these problems in mind, we propose an approach that can significantly enhance the performance of existing state-of-the-art OCR systems by selectively infusing domain knowledge using only weak supervision. Medical prescriptions consist of various information like data from lab reports, ordered tests, health vitals, observations along with medicine names. Our work focuses on the medicine section which is considered the most important from a consumer standpoint, but the techniques can be similarly applied to other sections or other types of documents beyond prescriptions, such as printed forms filled with handwriting. The medicine section of a prescription has a rough semantics consisting of medicine name, category, frequency of intakes and quantity (see Figure 1). As these are non-form type of documents and quite unstructured, it is a challenge to extract medicine name entities from such documents. Most OCR approaches [18,26] take a two step approach -first localize the text regions by detecting bounding boxes around them, and then recognizing the text using line recognition models. The recognition model often consists of an optical recognizer and a language model (LM) to correct the optical model errors. The LM gives us the flexibility to infuse domain-specific knowledge. But, injecting such knowledge to all lines in the document may not be optimal, as different parts of the document can correspond to different entities, or even domains. For example, the pattern in which a medicine name is written is very different from the pattern in which normal text such as observations are written in the same prescription. Thus, in order to enhance the recognition of medicine names and extract them from the prescription, we first detect lines where medicine names are written. Then in the recognition model, we inject a LM which is specific to medicine names. For the rest of the image, we inject the vanilla LM. Note that to learn the model which detects medicine lines, we do not use strong bounding polygon labels, but rather only weak labels, i.e., the medicine names present in the image. Such weak labels are much easier to obtain, as the annotators do not need to draw a bounding polygon and often labeling comes for free, for example, when a medicine bill is paired with a prescription. Apart from that, to learn the medicine LM, we do not use any annotated text lines, but rather generate synthetic text lines using a probabilistic programming approach. Our weakly supervised medicine line detector obtains 78% pixel mIoU with just weak labels, and helps to selectively infuse medicine LM, which in turn improves the overall performance from 19% to 48% jaccard index. The main contributions of this work are: -Develop a weakly supervised segmentation method to detect specific text entities, such as medicine names in handwritten prescriptions. -Learning a domain-specific medicine LM using synthetic medicine name lines generated by probabilistic programs and using it to enhance the performance of state-of-the-art OCR models. -A model dependent unique way of enhancing the performance of matching with words from the vocabulary. Related Works Optical Character Recognition OCR literature has seen tremendous improvements in the past decade. 
The successes [10,18,26] can be attributed to sophisticated models, synthetic data generation, various augmentation techniques, among others. An OCR system is made of multiple models, starting from text detection [29,30,43], script identification [12,17], and finally line recognition [3,10,26,27]. Even with all these advancements, recognition of handwritten lines still remains a challenging task as writing style can be a unique signature of the person, allowing room for huge variations. In our experiments, we found that off-the-shelf line recognition models, even though perform quite well for a lot of printed and handwritten datasets, they fail to perform equally well on handwritten images. In this work, we show how we can improve their accuracy by more than 2 times the baseline by first detecting specific entities of interest (rather than detecting all text) and then improving the line recognition model by injecting domain-specific LMs. We next discuss the existing literature around these topics. Weakly-supervised Detection Detecting specific entities of interest in an image can be posed as detection or segmentation task. However, to learn these tasks, traditional methods would need strong labels, i.e., either pixel-wise [9,30] or bounding box labels [30,37,38]. In the recent past, there has been a lot of work in developing methods which can learn from only weak labels, such as weaklysupervised object detection [25,47], segmentation [22,44], action detection [32,46], etc. These methods do not need access to strong labels such as bounding boxes, but can learn from just weak labels, i.e., image-level labels of the object categories present in the individual training images. Such a formulation reduces the manual labor needed to acquire strong labels, thus making it scalable to large datasets. Motivated by these, we aim to learn a segmentation model to detect entities of interest in an image, such as medicine names from just weak labels, i.e., list of medicine names given an image. In this use case, the individual entities do not correspond to any underlying category unlike segmentation or detection of objects in natural scenes. Recently, it has been shown [23] that using weakly labeled data along with strong labels improves the performance of scene text recognition. In our task, we only have weakly labeled data without any strong labels (synthetic or real) and the text is primarily handwritten which is often inscrutable even if text detection is perfectly done. Moreover in our use case, we need to detect specific entities among other cluttered text, and not any generic text. There are also works on defining rules to derive weak labels from the data [36]. While that is quite challenging and not generalizable in our use case, we use the intuition to convert the weak labels to strong labels via labeling functions. Domain-specific Language Models There has been a lot of work [24,35,45] which shows that injecting domain-specific knowledge in LMs helps to perform much better on those domains than models developed on generic text. Specifically for OCR, there have been some works [11,14] showing that having access to domain related text data helps to adapt existing LMs and thus improving final OCR performance. However, in our use case of decoding medicine names, it is non-trivial to acquire lines of medicine names written by doctors, as they are hardly available in normal text corpus. 
To solve that, we use domain knowledge to define a probabilistic program which can take in the medicine name and generates patterns of medicine lines as would be written by doctors in prescriptions. We show that using such a LM in the OCR decoder improves the performance significantly. Problem Statement In this work, we focus on the problem statement of extracting textual entities from non-form type handwritten document images, which are often hard to read. We specifically focus on the challenging problem of extracting medicine names from handwritten prescriptions as shown in Figure 1 The top-left block shows the weakly supervised medicine line segmentation pipeline. The top-right block shows the process of generating synthetic medicine lines using probabilistic programs and then using it to train a medicine LM. The bottom row shows the inference pipeline, that first localizes the medicine names using the segmentation network, and then injects the medicine LM while decoding the OCR outputs. the output of the framework should be the medicine names {m j } n j=1 that appear in the image, where m j ∈ V, the vocabulary of medicines. n denotes the number of medicines in the prescription that varies from prescription to prescription. The training data that we use to solve this problem is only weakly labeled, i.e. for every image, we have a list of medicine names that appear in the image, and not their bounding box locations. Thus, our training data contains tuples of image and unordered set of medicine names as follows, where n i denotes the number of medicines in that image, N denotes the number of images in the training data and G i is the ground truth list of medicines. OCR Line Recognition Model Most line recognition models have two parts -the encoder, often called the optical part of the model, which encodes the visual information, and the decoder, which is either trained end-to-end with the encoder, or CTC type decoder [13] where the encoder outputs are combined with LM scores to obtain the final text. We use the second option and train our network with CTC loss [13]. This allows us to decouple the optical and the LM, and replace it with domain specific LMs. Encoder: The encoder or the optical part of the line recognizer consists of first 7 layers of inverted bottleneck conv layers [39] with 64 filters and stride of 1, followed by 12 layers of transformer encoder [42] with hidden size of 256 and 4 attention heads, and finally a fully connected symbol classification head. We use this backbone from [10], as it achieves state-of-the-art performance on various datasets. Our pre-trained model is also the same as [10]. It is interesting to note that our method is agnostic to the encoder used as it can be used to boost the performance of any OCR backbone. Decoder: We use a CTC decoder [13] following [10], which combines scores from the encoder logits and a character n-gram LM. We set n = 9 unless otherwise mentioned. We will discuss how we train and use a medicine LM subsequently. Weakly Supervised Line Segmentation We next discuss our algorithm to detect medicine lines by just using weak labels while training, i.e., only the medicine names for every image, and not their bounding polygons. Note that while we use this method for medicine line detection, it can be also used for detecting other entities in other document types. Labeling Functions At the core of our algorithm is the idea of using labeling functions to automatically convert a weakly labeled dataset to strongly labeled. 
There have been some works [36] in the literature where rules are defined as labeling functions. The labeling functions may not be as perfect as a human oracle, and the strong labels they generate may have errors in them. There are often thresholds or rules used to reduce errors. Thus, while defining a labeling function we need to trade off coverage, i.e., the number of data points that can be labeled using such labeling functions, against their error rate. Although there can be some noise in such labeling, this significantly reduces the annotation cost. We sequentially apply two labeling functions, as discussed next, to convert a list of medicine names to bounding boxes. In our use case of assigning a bounding box to each medicine name, we can consider it as an assignment problem between the p bounding boxes detected by a generic text detector and the n medicines in the image. Considering p = 50 and n = 5, the number of possible assignments turns out to be C(p, n) · n! = p!/(p − n)! ≈ 2.5 × 10^8. We solve this problem via two techniques: using the content of the boxes (via the OCR Labeling Function), and using the visual features (via the Segmentation Labeling Function).
OCR Labeling Function: As for every image we have the list of medicines that appear on it, for each detected word in the image we could naively find the closest medicine name (by edit distance) from the ground truth list, albeit applying a threshold. However, directly using the edit distance may not respect the model's predictions. For example, according to the OCR line recognition model, modifying an i to an l may have lower cost than an i to a z, but both would have the same edit distance. Thus, in order to utilize the model's predictions, we decode up to the top-k predictions, and stop when we find an exact match with a medicine name from the list of ground truth medicines, i.e., the weak labels. The bounding boxes associated with these matched words can then be used as the ground-truth bounding boxes of medicine names. We can define the labeling function as F(x_i, G_i) = {(t_j, l_j, h_j, w_j, r_j)}_{j=1}^{q_i}, where the bounding boxes of the matched medicines are in the rotated box format and t_j, l_j, h_j, w_j, r_j represent the top, left, height, width, and rotation angle of each matched bounding box. Then, we can construct a training dataset as {(x_i, F(x_i, G_i))}_{i=1}^{N}. The number of matching bounding boxes q_i ≤ n_i, as in many cases the handwriting is so illegible that even a higher number of top-k decoded lines may not allow a match with the ground truth medicine names. This can happen for a sizable number of images, which in turn can introduce significant noise in the data, leading to problems in learning the segmentation network. Thus, we only use those images to train our network where we find that at least 90% of the ground truth medicines have been matched. The reason behind setting such a high threshold is that this set becomes the guiding signal for the rest of the algorithm. Thus, our modified strongly-labeled training dataset can be represented as D_tr = {(x_i, F(x_i, G_i)) : q_i / n_i ≥ 0.9}. While increasing the number of top-k paths helps more images to pass this threshold, we find that it saturates after a point, especially for documents which are hard to read, such as the prescriptions used in this work. While the 0.9 threshold allows us to reduce missing bounding boxes in the training set, it also reduces the number of images in the training set, as |D_tr| ≤ |D|. We next discuss a second labeling function to alleviate this problem.
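Before moving to the second labeling function, the OCR Labeling Function just described can be sketched roughly as below; the pre-computed top-k decodings, the text detector output, and all helper names are assumptions made for illustration rather than the authors' implementation.

```python
from typing import Dict, List, Optional, Tuple

RotatedBox = Tuple[float, float, float, float, float]  # (top, left, height, width, rotation)

def ocr_labeling_function(
    word_boxes: List[RotatedBox],            # boxes from a generic text detector
    topk_texts_per_box: List[List[str]],     # top-k decoded strings for each box (assumed given)
    ground_truth_medicines: List[str],       # weak labels for this image
    coverage_threshold: float = 0.9,
) -> Optional[Dict[str, RotatedBox]]:
    """Match detected boxes to medicine names via exact matches in the top-k decodings.

    Returns a name -> box mapping if at least `coverage_threshold` of the ground-truth
    medicines are matched, else None (the image is left for the segmentation labeling
    function described next).
    """
    remaining = {m.lower() for m in ground_truth_medicines}
    matches: Dict[str, RotatedBox] = {}

    for box, candidates in zip(word_boxes, topk_texts_per_box):
        # Walk the top-k decodings in order; stop at the first exact match.
        for text in candidates:
            name = text.strip().lower()
            if name in remaining:
                matches[name] = box
                remaining.discard(name)
                break

    coverage = len(matches) / max(len(ground_truth_medicines), 1)
    return matches if coverage >= coverage_threshold else None
```

In practice, the top-k decodings would come from the CTC decoder described earlier.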
Segmentation Labeling Function
It may happen that even after decoding a high number of paths (k), we are still not able to match all the ground truth medicine names. This can happen when the handwriting is quite challenging for the model to predict accurately. In such a scenario, we leverage the visual appearance features via the segmentation model itself, rather than just labeling via OCR. Motivated by the success of self-training in domain adaptation [2,28] and semi-supervised learning [7,40], we use the segmentation model to pseudo-label the images in the rest of the dataset, i.e., D − D_tr. First, we train a segmentation network M using the relatively small training data D_tr obtained from the OCR Labeling Function outlined above. Then, we use it to predict the medicine lines on the images in D − D_tr. We can consider the output of the model to be M(x) = {(t_j, l_j, h_j, w_j, r_j)}_{j=1}^{l}. Following our previous threshold, we add those images to the training dataset where the number of medicine lines in the union of the segmentation network's predictions and the OCR labeling function's matches is at least 90% of the total number of medicines in that image. The new training set can be represented analogously to D_tr, with the matched boxes now being this union. Ideally, we can repeat this process, i.e., repeat pseudo-labeling the training images using a trained segmentation model and training a new model with the pseudo-labeled training set. The training set would grow over iterations. The two labeling functions can be generalized as D_tr^t = {(x_i, M_t(x_i) ∪ F(x_i, G_i)) : |M_t(x_i) ∪ F(x_i, G_i)| ≥ 0.9 · n_i}_{i=1}^{N}, where M_t = F for t = 1, M_t is the t-th medicine line segmentation model for t > 1, and T represents the total number of iterations. Figure 3 shows how segmentation improves over iterations. Using only the OCR Labeling Function misses some of the medicine names, as it is dependent on the ability of the underlying OCR model to decipher the medicine names. However, applying the Segmentation Labeling Function on top of it helps to predict the medicine patches which were missed, as it does not depend on OCR or the content, but rather on visual features, such as strokes, indentation, etc., which we will discuss later in Section 4.
Segmentation Model
Given the bounding boxes obtained using the labeling functions, we can train a medicine line segmentation model. Our segmentation model is DeepLab [9] with a ResNet50 backbone [16]. Although we use this architecture, it can be replaced by any other state-of-the-art segmentation model. We convert the bounding boxes to label masks, and use them as supervision to train the segmentation network. The label mask has either 0 or 1 at each pixel location, denoting whether a pixel belongs to a medicine line. The segmentation model is trained with the above data using a semantic head with two output channels. The predicted medicine label masks obtained from this model may not always respect text boundaries, and hence we use a generic text detector in the OCR pipeline to detect text and refine the boundaries. Then, we crop out the detected bounding boxes from the original image x and send only those lines to the line recognizer. As these lines correspond to the special domain of medicine names, we can inject that knowledge into the OCR using a LM.
Domain-specific Language Model
In the OCR decoder, we can incorporate a LM to correct some of the OCR errors.
Specifically, the decoded string Y* can be obtained as Y* = argmax_Y [ log P(Y | X) + α log P(Y) ], where P(Y) is obtained from the LM, denoting the probability of occurrence of a certain string Y in the dataset, α is the weight applied on the LM, and X is the input. In a generic OCR model, P(Y) is trained on a large corpus of text such that it represents a diverse set of documents. Particular to our use case, once we have detected the medicine lines as discussed in the previous section, we need only medicine-line-specific knowledge while decoding the OCR output. However, medicine line patterns occurring in handwritten prescriptions often do not appear in normal text. It is also difficult and expensive to acquire and annotate large corpora of handwritten prescriptions from which we could learn medicine-line-specific LMs. We inject domain knowledge to solve this problem. In order to gather medicine-line-specific text data, we defined a probabilistic program from which we can sample data and learn a character-based LM. Medicine lines written by doctors often have a few elements: an enumeration token (-, ., numbers, etc.), followed by the type of medicine (injection, tablet, etc.), the root name of the medicine, and then the suffix. These altogether comprise a single medicine name line. Note that some of these entities other than the root word may not appear in all prescriptions. With this domain knowledge, we can define a probabilistic program as shown in the top-right portion of Figure 2. The program starts from the START node and ends at the END node, and concatenates the output of each node with spaces in between. To sample a medicine name line, the program takes as input the medicine name and the type of the medicine, both of which appear in the vocabulary of medicines. We can create an exhaustive set of all possible medicine name lines, and then train a character-based n-gram LM on that text corpus. Note that as we do not have the exact probabilities of the different transitions, we use equi-probable transitions between nodes, as well as for any choices within the nodes. In OCR, as decoding is done at the character level, we need character LMs, unlike recent advanced large LMs which operate on word or sub-word tokens. There are also character LMs using transformers, but those are generally useful for longer contexts, whereas in our case medicine names are on average only 7 characters long. Moreover, using such a large model takes a lot more inference time. Hence we stick to an n-gram model.
In-Vocabulary Prediction
In many entity extraction tasks, such as the medicine name prediction studied in this paper, the entities often either belong to a fixed vocabulary or are defined by a regular expression. However, the OCR predictions will not be constrained to our medicine vocabulary. To constrain that, we could perform a nearest-neighbor edit distance search between each medicine line text and the medicine vocabulary. However, as we discussed before, this would not respect the model's confidence. Thus, we use top-k path decoding as a more robust method. Specifically, for each line, we decode the top-k predictions, and then find all the predictions which have an exact match with one of the medicine names from the vocabulary. Then, we take a majority vote over all these matched names, and that becomes the prediction for that line. It is possible that for some of the detected medicine lines, we do not find any match among any of the top-k predictions. These detected medicine lines would not have any output prediction.
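A rough sketch of this top-k-with-majority-voting step is given below; the function and variable names are placeholders assumed for illustration, not the authors' implementation.

```python
from collections import Counter
from typing import List, Optional, Set

def predict_medicine_for_line(
    topk_texts: List[str],   # top-k decoded strings for one detected medicine line
    vocabulary: Set[str],    # the medicine name vocabulary (lower-cased)
) -> Optional[str]:
    """Keep only decodings that exactly match a vocabulary entry, then majority-vote."""
    matches = [t.strip().lower() for t in topk_texts if t.strip().lower() in vocabulary]
    if not matches:
        return None  # the line gets no prediction if nothing in the top-k is in-vocabulary
    # most_common(1) returns the most frequent matched name (ties broken by first occurrence).
    return Counter(matches).most_common(1)[0][0]
```

Replacing the exact-match filter with an edit-distance threshold, or stopping at the first match, would recover the alternative strategies compared in the ablations.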
We find this method to be more effective compared to edit distance based matching with the top-1 prediction, or predicting only the first match from the top-k predictions, as shown in Section 4.
Experiments
We first introduce the dataset and implementation details, before sharing the experimental results and rigorous ablations to understand the efficacy of the framework.
Prescription Image Dataset: We use a dataset of handwritten prescriptions to validate the methodology outlined and evaluate the performance of the models. A few example images from the dataset are shown in Figure 1. The dataset contains 9645 images written by 117 doctors. Table 1a outlines some of the details of the dataset, and Figure 4a shows the distribution of prescription images per doctor. We use 80% of the dataset to train our models, and 20% for evaluation. There is no overlap of doctors between the training and the test sets at each iteration, ensuring that our results capture generalization across different handwriting styles. Each image in the dataset has a list of medicine names appearing in it, which we call weak labels, without any positional information. However, just for evaluation, we strongly annotate 500 images from the evaluation set to evaluate the segmentation performance. Prescriptions generally have multiple other sections as well (although unstructured and freeform), and Table 1b shows the percentage of images which have other sections such as labs/scans reported, observations, and vitals. Also, note that any and all personally identifiable information was removed from the data prior to it being provided to the authors for this study.
Medicine Vocabulary: We also use a medicine name vocabulary consisting of more than 90,000 medicine names. We use this to generate synthetic medicine name lines and train the character-based medicine LM. This vocabulary is also used to make the in-vocabulary predictions.
Evaluation Protocol: We evaluate all models on the test set of the dataset mentioned above. To evaluate the performance of the segmentation model, we use mean intersection over union (mIoU) as used in the segmentation literature [8]. To evaluate the performance of the end-to-end medicine name prediction model, we use the mean Jaccard index over all the images. We also use two other metrics, namely mean precision and mean recall, and the mean Jaccard index can be considered as a combination of both these metrics. These are defined as mean precision = (1/M) Σ_i |P_i ∩ G_i| / |P_i|, mean recall = (1/M) Σ_i |P_i ∩ G_i| / |G_i|, and mean Jaccard index = (1/M) Σ_i |P_i ∩ G_i| / |P_i ∪ G_i|, where P_i, G_i are the predicted and ground truth lists of medicines for the i-th image, and M is the number of evaluation images. The comparison between the predictions and ground truths is not case-sensitive, as they are medicine names.
Results and Ablation Studies
Iterative Training Performance: As discussed in Section 3, our algorithm for converting weak labels (only medicine names) to strong labels (bounding box annotations for each medicine name) involves two labeling functions, the OCR and Segmentation Labeling Functions, where the latter can be applied iteratively. The number of images auto-labeled by the labeling functions increases with iterations, and hence the performance of both the medicine line segmentation model as well as the medicine name prediction model increases with subsequent iterations. We highlight this in Table 2. Iteration 1 shows the performance with only the OCR Labeling Function, and Iteration ≥ 2 shows the performance after multiple iterations of the Segmentation Labeling Function.
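For clarity, the per-image metrics defined above can be computed as in the following sketch; the function name and the example values are ours, chosen only for illustration.

```python
from typing import List, Set

def mean_metrics(predictions: List[Set[str]], ground_truths: List[Set[str]]) -> dict:
    """Mean precision, recall, and Jaccard index over M evaluation images.

    Medicine names are compared case-insensitively, matching the protocol in the text.
    """
    assert len(predictions) == len(ground_truths)
    m = len(predictions)
    if m == 0:
        return {"precision": 0.0, "recall": 0.0, "jaccard": 0.0}
    precision = recall = jaccard = 0.0
    for pred, gt in zip(predictions, ground_truths):
        p = {x.lower() for x in pred}
        g = {x.lower() for x in gt}
        inter = len(p & g)
        precision += inter / len(p) if p else 0.0
        recall += inter / len(g) if g else 0.0
        union = len(p | g)
        jaccard += inter / union if union else 0.0
    return {"precision": precision / m, "recall": recall / m, "jaccard": jaccard / m}

# Example with one image, one correct and one spurious prediction:
# mean_metrics([{"Folvite", "Baro"}], [{"folvite", "paro"}])
#   -> precision 0.5, recall 0.5, jaccard ~0.33
```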
For a significant number of prescriptions, it is difficult to decipher some of the medicine names, even when we use a high value of top-k (k = 20,000 in our experiments) decoded outputs per line. For Iteration 1, the number of auto-labeled prescriptions is < 25% of the training set. This shows the difficulty level of the problem at hand. Note that the training sets are used to train only the medicine line segmentation model and not the line recognizer of the OCR; thus the approach can be used with any off-the-shelf OCR model. The segmentation performance as well as the medicine name performance improve over iterations but saturate from Iteration 3. Note that mIoU computes the performance at every pixel, but normally a small change in the final bounding box does not have much impact on the medicine name prediction, as long as the box still encapsulates the text within it. We also show the upper-bound performance of medicine line recognition by using only ground-truth medicine bounding boxes while evaluating. As we can see, using just weak labels, our algorithm can reach within a few points of the strongest upper bound obtained with strong labels.
Cues for medicine name segmentation: Unlike generic text detection, specifically detecting medicine lines can be challenging, as handwritten prescriptions do not have any specific structure or location on the page. However, the segmentation model is still able to predict the location of the medicine lines with high performance, as shown in Table 2. In order to understand the cues the segmentation model uses to segment the medicine names, we do the following experiment. Given a test image x, using a sliding window, we remove square patches from the image to remove potential cues, one at a time. Consider x_{i,j} as the image with the patch at location (i, j) removed. We can run the segmentation model on this image, M(x_{i,j}), and obtain the mIoU. For every location (i, j) on the image, we can obtain the model's performance drop when a patch around it is removed, and then display that as a heatmap. A drop in performance in certain regions of this image depicts the regions necessary for the segmentation model to segment the medicine names correctly. As we can see, these regions align with what a pharmacist or even non-domain experts look at to determine medicine lines, as in most cases the handwriting is illegible. These key demarcations serve as strong signals to recognize medicine lines, after which we can condition on our knowledge of medicine names to enhance line recognition.
Contribution of medicine LM and segmentation model: Here we show how selectively injecting medicine LMs can offer a significant improvement in performance. The vanilla LM is trained on a generic corpus of text in the Latin script, whereas the medicine name LM is trained as discussed in Section 3.4. The performance improves with the number of decoded paths for both models, but for the medicine LM the top-1 path by itself performs much better than the top-1000 paths with the vanilla LM (Fig. 6). This also reduces the compute time of decoding the top-k paths from the logits, which is linear in the number of paths. Moreover, segmenting and selectively injecting the LM plays a critical role in the performance, and MedLM + Segmented Lines performs the best. Applying the MedLM on the full image actually reduces the precision significantly while improving the recall slightly, as expected, thereby reducing the overall metric, i.e., the Jaccard index.
This shows that selectively injecting the LM is important; otherwise it can corrupt the rest of the prescription and hallucinate medicine names from it.
Performance with varying weight on LM: The weight α on the LM scores in Eqn. 1 can have an impact on the final performance. A low weight may lead to no improvement beyond the optical model's prediction, and a high weight may not ground the output to the actual text on the image. Figure 7a shows an ablation of the medicine name prediction performance over the LM weight. Note that the change in performance is much smaller for the top-10k paths than for the top-1 path, as only the first of the top-10k paths is affected by the LM; for paths beyond the first, the predictions come from the top-k decoded paths, which are based only on the logits without any LM scoring. Nonetheless, we see that the performance of both models is very close after a certain value of α.
Varying the vocabulary of the LM: The medicine names used in generating the synthetic lines can have an impact on the quality of the medicine name LM. Here we also show how the performance varies as we increase the number of medicine names used to train the medicine LM. Figure 7b presents the results for top-1 and top-10k with different sizes of the medicine name dataset. The performance improves as we add more medicines, but starts saturating after a certain point.
Performance with different n-gram models: The n-gram LM involves a parameter n, which is the number of history characters the model looks at to score the next character. We created multiple n-gram models on the synthetically generated medicine line text data, and show the results in Table 3. More context definitely helps performance, but it saturates after n = 7. This is also intuitive, as the length of the medicine names is around 7.9 characters on average.
Predicting In-Vocabulary Words: In the final step of our algorithm to predict medicine names, we only predict those words for which we find a direct match with one of the elements of the medicine vocabulary. As discussed before, finding a match for only the top-1 prediction may not be the best. Thus, we decode up to the top-k predictions and find matches for all of them. As the top-k decoding is directly dependent on the output of the model, such a matching respects the model's predictions. We then take a majority vote over all the matches, and that becomes the final predicted medicine for a line. Note that some lines may not have any prediction at all. In this section, we compare multiple strategies for predicting in-vocabulary words in Table 4. Top-1 represents an exact match with the first path, top-1 edit distance finds the nearest prediction from the vocabulary by edit distance, top-k denotes that we decode the top-k outputs but stop at the first exact match, and finally top-k+majority is the algorithm we use, where we decode all the top-k lines and take a majority vote over all the exact matches. Note that top-1-edit has the same Jaccard index as top-1, but the former has lower precision with higher recall than top-1, as expected, because it predicts beyond exact matches. We tried multiple thresholds for the edit distance and found that an 85% normalized distance performs the best. Increasing the threshold, i.e., allowing more matches, significantly reduces the precision for a gain in recall, hurting the overall performance.
This is because of the intuition we discussed earlier that top-k decodings respect the model's confidence, whereas edit distance treats every replacement with the same cost.
Error-mode analysis: The two possible types of errors are medicine names predicted but not in the ground truth (type I) and medicine names in the ground truth but not predicted (type II). In our framework, there are two sources of errors: the segmentation network and the OCR. If a medicine name is not segmented, it leads to a type-II error. OCR errors contribute to the rest (type I and type II), a majority of which come from misinterpreting very similar-looking medicines such as emtel vs entel, eenosol vs eenasof, folvite vs folite, paro vs baro, zincovit vs zincort, aloliv vs alcoliv. We also observe that doctors can make spelling mistakes or write a medicine name vaguely, where only the first few characters are recognizable. To correct such errors, pharmacists generally use other context such as the observations. Learning such context would need a lot more data and the injection of higher-level domain knowledge.
Conclusion
In this paper, we looked into the problem of extracting medicine names from inscrutable handwritten prescriptions. Our algorithm can selectively infuse domain knowledge into specific portions of a document to significantly improve performance. We developed a framework that can learn to detect regions of interest from just weak labels, and also learn a medicine language model from text lines synthetically generated using probabilistic programs. The idea is generic enough to be applied to a variety of other types of documents, such as handwritten forms.
2023-06-13T01:16:01.293Z
2023-06-12T00:00:00.000
{ "year": 2023, "sha1": "1bfe35520140dbd1a21508ed3ff814e7ae218464", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1bfe35520140dbd1a21508ed3ff814e7ae218464", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
1795778
pes2o/s2orc
v3-fos-license
Chlortetracycline and Demeclocycline Inhibit Calpains and Protect Mouse Neurons against Glutamate Toxicity and Cerebral Ischemia* Minocycline is a potent neuroprotective tetracycline in animal models of cerebral ischemia. We examined the protective properties of chlortetracycline (CTC) and demeclocycline (DMC) and showed that these two tetracyclines were also potent neuroprotective against glutamate-induced neuronal death in vitro and cerebral ischemia in vivo. However, CTC and DMC appeared to confer neuroprotection through a unique mechanism compared with minocycline. Rather than inhibiting microglial activation and caspase, CTC and DMC suppressed calpain activities. In addition, CTC and DMC only weakly antagonized N-methyl-d-aspartate (NMDA) receptor activities causing 16 and 14%, respectively, inhibition of NMDA-induced whole cell currents and partially blocked NMDA-induced Ca2+ influx, commonly regarded as the major trigger of neuronal death. In vitro and in vivo experiments demonstrated that the two compounds selectively inhibited the activities of calpain I and II activated following glutamate treatment and cerebral ischemia. In contrast, minocycline did not significantly inhibit calpain activity. Taken together, these results suggested that CTC and DMC provide neuroprotection through suppression of a rise in intracellular Ca2+ and inhibition of calpains. Stroke is one of the most common life-threatening neurological diseases. Despite significant advances in the understanding of the molecular events following cerebral ischemia, there are still no potent neuroprotective therapeutics against stroke-induced brain damage (1)(2)(3). The ischemia-induced excessive release of neurotransmitter glutamate causes excitotoxicity, which is believed to be the major cause of toxicity to neurons (1,4,5). Glutamate overactivates NMDA 2 receptors, causing increased intracellular Ca 2ϩ influx leading to the accumulation of toxic levels of intracellular calcium ions (4,5). Elevation in intracellular Ca 2ϩ concentrations activates Ca 2ϩ -dependent proteases, such as calpains, which break down critical structural proteins causing neuronal death (3, 6 -9). Chemical compounds directly blocking glutamate toxicity to neurons may have the potential to be developed as therapeutics to stroke. But NMDA receptor blockers, such as MK-801, have failed in human stroke clinical trials due to the severe side effects possibly resulting from interference with the normal physiological functions of the NMDA receptor, despite the fact that compounds like MK-801 are very effective in preventing glutamate-mediated neuronal death in cell culture models (2). Tetracyclines are antibiotic agents with a broad spectrum of anti-microbial activities and anti-inflammation properties (21). Recent studies demonstrated that minocycline, a tetracycline derivative, has potent neuroprotective properties in animal models of various brain diseases, such as global and focal cerebral ischemia (21)(22)(23), spinal cord injury (24), retinal cell death (25), Parkinson disease (26), Huntington disease (27), multiple sclerosis (28), and amyotrophic lateral sclerosis (29). The potential mechanisms of minocycline-mediated neuroprotection are through suppression of microglial activation and inhibition of the release of apoptotic factors such as cytochrome c and attenuation of intracellular caspase activities (21,27,29). However, it is still not clear whether minocycline interferes with NMDA receptor function. 
Tetracyclines, in general, have been used safely as an antibiotic agent for many years in the clinic. The properties of clinical tolerance and easy penetration into the brain make some of the tetracycline derivatives potential therapeutic reagents for neuroprotection in stroke (30 -33). In the present study, we tested the protective effects of two tetracyclines against glutamate-mediated excitotoxicity and cerebral ischemiainduced brain damage. The potential molecular mechanisms of such neuroprotection were also investigated. CTC and DMC were found to be strongly neuroprotective, not through inhibition of the NMDA receptor but rather through suppression of a Ca 2ϩ rise and inhibition of calpain activities. Primary Cultures of Cerebellar Granule Neurons (CGNs) Primary cultures of mouse (C57/B6) CGNs were prepared from 6-to 9-day-old postnatal mice as described previously (34,35). Briefly, cerebella were explanted and cleaned free of meninges. Mechanical and enzymatic dissociation in a 0.025% w/v trypsin solution for 25 min followed. A trypsin inhibitor was then added to block the enzyme, and 0.05% w/v DNase was added to break DNAs from dead cells. A series of trituration and mild centrifugation steps were included to disperse the neurons prior to resuspension in medium and to remove undissociated debris prior to plating in Eagle's minimum essential medium containing 0.8 mM glutamine, 27 mM glucose, 0.01% gentamycin, 9% fetal bovine serum and supplemented with K ϩ to a final concentration of 23 mM. Cells were plated onto 24-well dishes containing poly-L-lysine-coated coverslips at a density of 6 ϫ 10 5 per well. After ϳ18 h, cytosine ␣-Darabinofuranoside (AraC) was added to a final concentration of 5 M, to prevent glial cell proliferation. 100-mm dish cultures were seeded with 21 ϫ 10 6 cells in 10 ml of culture medium. Neuronal Viability Assays Tetracyclines were added to 8-day in vitro cultured CGNs at 37°C for 15-20 min prior to treatment with 50 M glutamate or NMDA. The plates were then incubated for 6 h at 37°C. Untreated controls were also included. At the end of the treatment period, neuronal viability was measured using the CFDA assay as described previously (34,35). The CFDA stock solution was diluted using Earle's balanced salts (Sigma) to a final concentration of 5 g/ml. Cultures were incubated with 500 l of the CFDA solution at 37°C for 30 min. The intensities of fluorescence was quantified using a Cytofluor TM 2350 Fluorescence Measurement System (Millipore) at ex ϭ 480 nm and em ϭ 530 nm. Cellular viability was normalized against the fluorescent reading from the control cells. Neurons were also fixed in 4% formaldehyde and mounted in Dako® fluorescent mounting medium containing 5 g/l Hoechst 33258 to detect nuclei under a fluorescent microscope. Duplicate assessment of each treatment was made on each plate in at least three separate experiments per treatment. Whole-cell Recording Whole-cell recording using electrophysiological methods has been described previously (36). Briefly, CGNs, cultured on 35-mm culture dishes, were perfused continuously at 1 ml/min at 22°C using a solution containing 140 mM NaCl, 5 mM KCl, 2 mM CaCl 2 , 10 mM HEPES, 3 mM glucose, pH 7.4. The perfusion solution also contained 1 M tetrodotoxin, 30 M glycine, and 1 M strychnine. Patch pipettes (2-4 M⍀ resistance) were constructed from 1.5 mm outer diameter/1.0 mm inner diameter Pyrex 7740 glass (Corning, Big Flats, MN). 
A modified DAD-12 perfusion system (ALA Scientific Instruments, Westbury, NY) was used to rapidly apply NMDA (2-s duration) followed by co-application of NMDA and the test compound (5-s duration). The pipette solution contained 140 mM CsCl, 1.1 mM EGTA, 10 mM HEPES, 2 mM Mg-ATP at pH 7.2. Whole-cell currents were acquired using an Axopatch 1-D amplifier equipped with a CV-4 head stage with a 1 GΩ feedback resistor (Axon Instruments, Foster City, CA). Voltage command and current acquisition were accomplished using a Digidata 1200 interface and pClamp 6.0 software (Axon Instruments). Neurons were held at a membrane potential of −60 mV. The fractional block of NMDA-evoked currents was calculated according to the formula B = (I − I_B)/I, where I is the steady-state current evoked by NMDA, and I_B is the current evoked by NMDA in the presence of the test compound at the end of the co-application.

Intracellular Ca2+ Measurement
Fluo-4 Measurement: Intracellular calcium concentration was measured as described previously (37). Briefly, culture medium in the 24-well plate was replaced with the calcium-sensitive dye Fluo-4 (4.5 µM) in a balanced salt solution. After a 30-min incubation, the dye was removed and cells were incubated with the original medium with or without the compound at 37°C for 15 min. Fluorescent intensities were quantified using a Cytofluor 2350 Fluorescence Measurement System (Millipore) at λex = 485 nm and λem = 530 nm. NMDA (50 µM) was then added to the wells, and changes in fluorescence were recorded after 5, 10, 20, 30, and 40 min. The fold increase in Ca2+ was calculated by subtracting the initial reading from each reading and dividing by the initial reading.
Ratiometric Measurement of [Ca2+] Using Fura-2: To quantitatively determine the effect of CTC or DMC on glutamate-induced changes in the intracellular Ca2+ ([Ca2+]i) level, ratiometric measurement of [Ca2+]i was performed using fura-2 AM. Briefly, mouse CGNs at 7 days in vitro on glass coverslips were loaded with 5 µM fura-2-AM (Molecular Probes) plus 0.02% pluronic (Molecular Probes) for 30 min at 37°C. After rinsing with PSS Mg2+-free buffer containing 2 mM HEPES (pH 7.2), 140 mM NaCl, 5 mM KCl, 2.3 mM CaCl2, and 10 mM glucose, and stabilization in the same buffer for 5 min, fura-2 intensities were measured using a Northern Eclipse Digital Ratio Image System (Empix, Mississauga, Ontario, Canada) with an Axiovert 200 camera and light source (Zeiss, Thornwood, NY). Fura-2 fluorescence was measured at 510 nm emission with 340/380 nm dual excitation selected by a DG-5 system (Sutter Instrument Co., Novato, CA). Changes in [Ca2+]i concentration were measured by converting the 340/380 ratio of fura-2 fluorescence (after correction for background) to approximate [Ca2+]i using the method described by Grynkiewicz et al. (38) and Young et al. (39). The 340-to-380 nm fluorescence ratio (R340/380) for 20 cells in one field of each coverslip was averaged. The minimal and maximal fluorescence ratios (Rmin and Rmax) were obtained from a sample set of CGNs using 5 µM ionomycin plus 6 mM EGTA and 10 mM CaCl2, respectively. The Kd for fura-2 was assumed to be 224 nM as described by Young et al. (39). The basal level of [Ca2+]i was recorded for 10 s, followed by the application of the compound dissolved in PSS buffer, which also contained 10 µM glycine, for another 40 s; finally, glutamate (50 µM) was applied and the recording was continued for another 200 s. All measurements were repeated at least three times.
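For reference, the ratio-to-concentration conversion described above can be written as a short calculation. The Kd of 224 nM and the calibration approach follow the text; the β (Sf/Sb) scaling factor of the standard Grynkiewicz equation and all numerical readings in the example are illustrative assumptions, not values reported in this study.

```python
def fura2_calcium_nM(r: float, r_min: float, r_max: float, beta: float, kd_nM: float = 224.0) -> float:
    """Approximate [Ca2+]i from a background-corrected fura-2 340/380 ratio.

    Grynkiewicz equation: [Ca2+] = Kd * beta * (R - Rmin) / (Rmax - R),
    where beta is the 380-nm fluorescence ratio of Ca2+-free to Ca2+-saturated dye,
    taken from the same calibration that yields Rmin and Rmax.
    """
    if not (r_min < r < r_max):
        raise ValueError("ratio must lie strictly between Rmin and Rmax")
    return kd_nM * beta * (r - r_min) / (r_max - r)

# Hypothetical example: Rmin = 0.3 and Rmax = 6.0 from the ionomycin/EGTA and CaCl2 calibration,
# beta = 5.0, and a measured ratio of 1.2 give roughly 224 * 5 * 0.9 / 4.8 ≈ 210 nM.
```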
The data were analyzed using Microsoft Excel.

Animal Ischemia Surgery
All procedures using animals were approved by the Institute of Behavioral Science Animal Care Committee following the guidelines established by the Canadian Council on Animal Care. C57B/6 mice (20-23 g) were obtained from Charles River and bred locally. Under temporary isoflurane anesthesia, mice were subjected to MCAO using an intraluminal filament as previously described (40,41). After 1 h of MCAO, the filament was withdrawn, restoration of blood flow to normal was verified by laser Doppler flowmetry, and wounds were sutured. Mice were injected intraperitoneally with CTC or DMC at 90 mg/kg body weight 4 h before ischemia, followed by injection twice per day. The control groups included no treatment or vehicle, in which animals were injected with the same volume of saline. Brains were removed after 24-h reperfusion, and the brain infarction was measured as described below.

Infarct Size Measurement
Infarct size was measured by a colorimetric staining method using 2,3,5-triphenyltetrazolium chloride (TTC) as described previously (40,41). Briefly, brains were dissected out and cut into four 2-mm-thick coronal slices, which were stained with 5 ml of 2% TTC for 90 min at 37°C. Afterward, the tissue was rinsed with saline and subsequently exposed to a mixture of ethanol/dimethyl sulfoxide (1:1) to solubilize the formazan product. After 24-h incubation in the dark, the red solvent extracts were diluted 1:20 with fresh ethanol/Me2SO solvent in three tubes and placed in cuvettes. Absorbance was measured at 485 nm in a spectrophotometer and the values were averaged. Percentage loss in brain TTC staining in the ischemic side of the brain was compared with the contralateral side of the brain of the same animal using the following equation: % loss = (1 − (absorbance of ischemic hemisphere / absorbance of contralateral hemisphere)) × 100.

Neurological Scores
An expanded six-point scale was modified based on previous reports (40-42) and used for the present investigation. Behavioral assessments were made at 0 and 24 h after reperfusion by an individual blinded to the treatment of the mice. The neurological deficits were scored as follows: 0, normal; 1, mild turning behavior with or without inconsistent curling when picked up by the tail, <50% attempts to curl to the contralateral side; 2, mild consistent curling, >50% attempts to curl to the contralateral side; 3, strong and immediate consistent curling, mouse holds the curled position for more than 1-2 s, the nose of the mouse almost reaches the tail; 4, severe curling progressing into barreling, loss of walking or righting reflex; 5, comatose or moribund. At least eight mice per group were evaluated for each compound and scores were averaged for statistical analysis.

In Vitro Measurement of Calpain Activity
Calpain activity was measured using a calpain activity assay kit (Calbiochem, Mississauga, Ontario, Canada) following the manufacturer's instructions. The assay is based on fluorometric detection of cleavage of the calpain substrate Ac-Leu-Leu-Tyr-AFC using a Cytofluor 2350 Fluorescence Measurement System (Millipore). The cleavage results in the release of AFC that can be measured in a fluorometer. Briefly, constitutive calpain I or II (0.1 unit/ml) (purchased from Calbiochem) was activated by 500 µM Ca2+ and was mixed with CTC (150 µM), DMC (150 µM), minocycline (150 µM), or the calpain inhibitors ALLN or calpastatin (10 µM each) and 5 µl of calpain substrate to a final volume of 100 µl.
The mixture was incubated at 37°C for 1 h in the dark. The cleavage of the substrate resulted in the release of AFC that can be detected by the Cytofluor at ex ϭ 400 nm and em ϭ 505 nm. Western Blotting Protein at 10 g was electrophoresed in a 7% SDS mini gel and then electroblotted onto a nitrocellulose membrane in transfer buffer (39 mM glycine, 48 mM Tris base, and 20% methanol) as described previously (34). The membrane was then probed with a polyclonal antibody selective to calpain cleaved fragment of brain spectrin at 4°C overnight. After washing with 0.01 M phosphate-buffered saline, horseradish peroxidase-conjugated secondary antibody was applied to the membrane for 1 h at room temperature. Enhanced chemiluminescence detection of the target protein was performed using a LumiGlo substrate kit (KP Laboratories, Gaithersburg, MD) and x-ray film. Data Analysis Data were analyzed using Microsoft Excel and Prism. Statistical significance was determined by Student's t test, and the significant group was determined using further post hoc Tuckey's test. p Ͻ 0.05 was considered statistically significant. CTC and DMC Protect Cultured CGNs against Glutamate Toxicity-The neuroprotective effects of CTC, DMC, and minocycline against glutamate-mediated excitotoxicity in cultured mouse primary CGNs were examined and quantified using the CFDA assay. All three compounds showed potent neuroprotection against glutamate-mediated toxicity to CGNs in a dose-and time-dependent manner (Fig. 1, A-C). More than 85% of the CGNs were protected by these two compounds at doses ranging between 80 and 150 M. This protection lasted up to 8 h following glutamate treatment when more than 50% of the control CGNs was killed by glutamate. The appearance of dead neurons was visualized by Hoechst staining of the nuclei (Fig. 1C). CTC and DMC were not toxic to CGNs at the ranges of doses tested (data not shown). CTC and DMC Reduce MCAO-induced Brain Damage-Since minocycline has been shown to provide neuroprotection against cerebral ischemia, we examined the neuroprotective effect of CTC and DMC in a mouse model of focal ischemia with 1 h MCAO followed by 24-h reperfusion. Each compound was administered by intraperitoneal injection 4 h prior to MCAO at 90 mg/kg and followed by two more injections (8 and 16 h following reperfusion) at 45 mg/kg. Animals were then killed to remove the brain for analysis as described under "Experimental Procedures." Both compounds significantly reduced the infarct size in the cerebral cortex by almost 50% in comparison with the non-treated ischemic control and vehicle-treated brain (p Ͻ 0.05, Fig. 2, A and B). Coronal sections of the brain slices numbered as 1-4 were shown in Fig. 2, B-D. Most of the infarctions occurred in the first two brain slices in the cerebral cortex and striatum as indicated by the arrows in Fig. 2B. The infarction was significantly reduced in the same areas in brains treated with CTC (Fig. 2C) or DMC (Fig. 2D). The protective effects of these compounds were also confirmed by the improvement of the neurological behavior of the compound-treated ischemic mouse. Using the six point valuation system as described in the "Experimental Procedures" section, the scores of the neurological behavior of ischemic animals were compared with those of vehicletreated or ischemic animals 0.5 h after surgery and after 24-h reperfusion. As shown in Fig. 
2E, mice treated with the two compounds showed significant improvement after 24 h of reperfusion (p < 0.05) compared with the vehicle-treated or ischemic animals, demonstrating that CTC and DMC reduced MCAO-induced neurological deficits.

CTC and DMC Weakly Antagonize NMDA Receptor Activity and Suppress the Rise in Intracellular Ca2+
To understand the mechanisms of neuroprotection conferred by CTC and DMC, we examined whether these two compounds blocked calcium entry through the NMDA receptor, which has been implicated in mediating glutamate-induced excitotoxicity. Both compounds at 150 µM showed weak, but rapid, antagonism of 50 µM NMDA-induced currents (Fig. 3A). A 5-s co-application of NMDA plus 150 µM CTC resulted in a 14 ± 1% (n = 5) reduction in NMDA-induced current. A 5-s co-application of NMDA plus 150 µM DMC produced a 16 ± 2% (n = 5) reduction in NMDA-induced currents. Since NMDA activation induces intracellular Ca2+ influx, we next tested whether these two compounds affect glutamate- and NMDA-induced intracellular Ca2+ levels. As shown in Fig. 3B, NMDA receptor-mediated intracellular calcium influx increased immediately after the addition of glutamate. The addition of the two compounds partially blocked [Ca2+]i influx, but the [Ca2+]i level eventually increased to the same level as that of NMDA-treated CGNs after 40 min (Fig. 3C). Minocycline also exhibited a similar level of blockade of Ca2+ influx compared with CTC and DMC at the 10- and 20-min time points (Fig. 3C, p < 0.05 compared with the NMDA-mediated Ca2+ rise). On the other hand, MK-801, an antagonist of the NMDA receptor, completely blocked glutamate- and NMDA-induced Ca2+ influx (Fig. 3, B and C). Taken together, these data demonstrated that CTC and DMC are weak and transient blockers of NMDA receptor currents and only partially block [Ca2+]i influx during the early stages of glutamate/NMDA treatment. However, such a transient reduction in NMDA receptor current and [Ca2+]i influx may not be sufficient to account for the more than 85% neuroprotection conferred by these two compounds, suggesting that these compounds may inhibit intracellular targets.

CTC and DMC Protect CGNs through Inhibition of Intracellular Calpain Activities
Calcium-activated intracellular proteases such as calpain are an important mediator of neuronal death in response to glutamate toxicity and cerebral ischemia (1). Although caspase activity may also play a role in the apoptotic component of ischemia-induced neuronal death, our previous work showed that caspase is not active in glutamate-induced neuronal death (1). To understand how CTC and DMC protect neurons in vitro and in vivo, we hypothesized that these two compounds could modulate the activities of Ca2+-activated calpains. To do this, in vitro experiments were first performed using purified exogenous calpains. CTC and DMC significantly inhibited the activities of active calpain I (Fig. 4A, p < 0.001) and calpain II (Fig. 4B, p < 0.001).
[Figure 2 legend fragment: 1-4 in B-D indicate the first to the last slice of the MCAO brain, and arrows indicate ischemic infarction (white-colored region on the brain slice). E, the respective scores of neurological behavior of each mouse were plotted and presented. * indicates statistical significance (p < 0.05) by Student's t test. Figure 3 legend fragment: C is a graph indicating that CTC and DMC partially block [Ca2+]i influx as measured by the Fluo-4 assay. CGNs were treated with or without the compound indicated in the graph, followed by NMDA application.]
[Figure 3 legend, continued: Intracellular calcium concentrations were measured using Fluo-4 as described under "Experimental Procedures." The fold increase was calculated against non-treated CGNs. At least three independent repeats were performed, and data presented are mean ± S.E.; ** indicates p < 0.01 and * indicates p < 0.05 when compared with NMDA.]
Specific inhibitors of calpains (ALLN and calpastatin), which inhibited calpain activity and also prevented neuronal death, were used as positive controls for the assay. Interestingly, minocycline, a potent neuroprotectant, did not inhibit the activities of calpains (Fig. 4, A and B). Next, we examined whether these two compounds could inhibit glutamate-induced activation of calpains in CGNs. Calpain activity was monitored by the presence and level of the spectrin breakdown product (SBP) on Western blot. As shown in Fig. 4, C and D, after a 20-min treatment with 50 µM glutamate, the level of SBP increased significantly (p < 0.001, Fig. 4D), and the level of SBP reached a peak after 2.5 h (Fig. 4D). The calpain inhibitor ALLN, CTC, and DMC were applied to cultured CGNs 30 min prior to glutamate treatment. Both the calpain inhibitor and the two compounds significantly reduced the level of SBP caused by glutamate treatment in comparison with the glutamate-only treated sample at 2.5 h (Fig. 4D, p < 0.05). Furthermore, the two compounds CTC and DMC also inhibited calpain activities caused by MCAO in mouse brain, as shown by the reduced level of SBP on Western blot (Fig. 4, E and F). The SBP level increased sharply in the ischemic brain of vehicle-treated mice, but the level of SBP was significantly reduced in CTC- and DMC-treated brains (Fig. 4, E and F, p < 0.05). Taken together, CTC and DMC inhibit calpain activation in response to excitotoxicity and cerebral ischemia.

DISCUSSION
In the present study, we report the findings that CTC and DMC are neuroprotective against glutamate toxicity in cultured mouse CGNs in vitro and focal cerebral ischemia in vivo through inhibition of calpains, a mechanism different from that of minocycline. To the best of our knowledge, the present study is the first demonstration that CTC and DMC conferred neuroprotection through inhibition of calpain activities. The molecular targets of CTC and DMC appear to be downstream of the NMDA receptors. CTC and DMC only weakly inhibited NMDA-induced intracellular calcium influx. The 14-16% inhibition of NMDA receptors by the two compounds could contribute to the subsequent relatively slow increase in intracellular Ca2+ levels seen in the two compound-treated samples; however, it is highly unlikely that this Ca2+ entry could account for the potent neuroprotection conferred by CTC and DMC to glutamate-treated CGNs, and suggests that these two compounds
[Figure 4 legend fragment: Purified exogenous active calpains I and II at 0.1 unit/ml each were mixed with the calpain-specific inhibitors ALLN or calpastatin at 10 µM, or the indicated compound at 150 µM. After a 30-min incubation with the calpain substrate, the release of fluorescent AFC was recorded using a Cytofluor. Fluorescence units/µg of protein/h were calculated from at least three independent experiments and plotted in A (calpain I) and B (calpain II) (mean ± S.E.). ** indicates statistical significance (p < 0.01) by Student's t test. C-F show Western blotting and its quantification of SBP produced by the activation of calpain. Glutamate-treated CGNs in the presence or absence of calpain inhibitor or compounds were collected at the times indicated in C.]
[Figure 4 legend, continued: The protein extract was subjected to Western blotting with a primary antibody against SBP. GAPDH was used as the protein loading control. The production of SBP was normalized against GAPDH, and the mean ± S.E. is presented in D. Similarly, ischemic brains were collected for Western blotting to detect the production of SBP, and its level was normalized against GAPDH (E and F). The mean ± S.E. is presented in F. NB, normal brain; C, contralateral side; I, ischemic side of the brain. ** indicates statistical significance (p < 0.01) by Student's t test.]
target downstream intracellular death signal transduction pathways. Interestingly, the molecular targets of CTC and DMC appeared to be different from those of minocycline in that CTC and DMC inhibit calpain I and II, whereas minocycline does not. Previous studies have demonstrated that minocycline provides in vivo neuroprotection by suppressing microglial activation (21,22). Reports also showed that minocycline directly targets intracellular death pathways to protect neurons by blocking cytochrome c release and the subsequent activation of caspase (23,29). The present in vitro studies using cultured CGNs showed that the neuroprotection conferred by CTC and DMC came from direct inhibition of calpain activities and not from inhibition of microglial activation, since microglial activation played no role in this acute glutamate-induced neuronal death system. In addition, caspase did not become activated in this system (1). Indeed, in vitro and in vivo experiments, as shown in Fig. 4, clearly demonstrated that CTC and DMC were potent inhibitors of the calpains activated in response to both glutamate treatment and MCAO in mouse brains. Calpains are major upstream proteases that are activated following ischemic injury to the brain and are responsible for the rapid and sustained induction of spectrin breakdown in the infarct zone (9,20,43). The present study, using several techniques including an in vitro calpain activity assay and SBP quantification by Western blotting, demonstrated that calpain activity increased rapidly following glutamate treatment in cultured CGNs or MCAO, and that this induction can be ameliorated by calpain-specific inhibitors, CTC and DMC. This early induction of calpain following cerebral ischemia is consistent with previous reports that calpain-induced SBP could be detected as early as 1 min in the dendrites of pyramidal cells in the CA2/CA3 border zone and within 5-10 min in the CA1, cortical, and thalamic regions of ischemic rats (9). In addition to stroke, the activities of calpain are also implicated in a wide range of pathological conditions such as traumatic brain injury, Alzheimer disease, and type 2 diabetes mellitus (44). As a result, several calpain inhibitors have been developed to target these diseases, for example, natural product-based inhibitors and synthetic inhibitors (45). However, none of them have structural similarities to tetracyclines. Thus, tetracyclines may inhibit calpain activation/activity in a way distinct from those described in the literature, which requires further investigation. A compound with the potential to become a lead for development as a stroke therapeutic has to meet several requirements, such as 1) that the compound does not interfere with normal physiological functions of glutamate receptors, 2) that the compound targets intracellular death pathways to provide direct protection to neurons, and 3) that the compound can penetrate into the brain freely and has low toxicity.
CTC and DMC used in the present study appear to meet all of the above requirements. CTC and DMC have been used clinically as antibiotics for many years. In addition, tetracyclines also have other beneficial activities and properties such as anti-oxidation and anti-glycation (33,46,47). These advantages of tetracyclines make them promising leads for drug development as stroke therapeutics. However, further modification of CTC and DMC is required. Since these compounds are known potent antibiotics, it is desirable to use them at low doses and for a short period of time. The doses that we used in the current study were 90 mg/kg prior to MCAO and followed by 45 mg/kg with no visible toxicity present, but a very recent report suggests that low dose minocycline (at 3 mg/kg prior to cerebral ischemia and 10 mg/kg afterward intravenously) indeed conferred neuroprotection in MCAO rats (48). Current work is under way to examine the low dose effect of CTC and DMC. In addition, novel derivatives of CTC and DMC, which do not possess anti-microbial activities, are under development. In summary, a battery of biochemical experiments performed in the present study demonstrated that CTC and DMC, the two clinically used antibiotics, provide neuroprotection not through blocking NMDA receptors but rather by inhibition of calpain activity. Future modification of these two compounds may lead to drugs capable of neuroprotection following cerebral ischemia.
2018-04-03T03:16:23.835Z
2005-10-07T00:00:00.000
{ "year": 2005, "sha1": "339d7c763cb70d5d78dafac355dcd63c14ad3d3f", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/280/40/33811.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "d3e4301d12c8d8b883b77bd3ba11461161404ad0", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
15708415
pes2o/s2orc
v3-fos-license
Durable Sustained Virologic Response After Oral Directly Acting Antiviral Therapy Despite Immunosuppressive Treatment Treatment for hepatitis C has evolved from interferon-based therapy to all oral, directly acting antiviral (DAA) therapy. The influence of immunosuppression on maintaining sustained virologic response (SVR) in patients who have been treated with these directly acting agents is unknown. In this study, we report sustained hepatitis C virus (HCV) suppression in 3 patients undergoing various immunosuppressive treatments after achieving SVR with DAA therapy. Three patients, who were enrolled in 1 of 2 single-center National Institutes of Health clinical trials, achieved SVR12. Each patient had undergone between 6 and 24 weeks of DAA therapy with or without ribavirin. Immunosuppression was varied among the 3 patients. Therapy included adalimumab, carboplatin/irinotecan, or capecitabine. In all 3 cases, patients maintained HCV RNA levels below detection after immunosuppression. All patients had undetectable viral load and normalized liver-related enzymes during immunosuppressive therapy. This report suggests that SVR as a result of novel DAA therapy is durable and likely not affected by immunosuppressive therapy. Larger studies are required to confirm these results, but findings are promising for the treatment of large numbers of HCV-infected patients who may require subsequent immunosuppressive or immunomodulating therapies. Hepatitis C virus (HCV) is estimated to chronically infect 170 million individuals worldwide and 3.2 million persons in the United States [1]. Left untreated, up to 25% of patients will develop liver cirrhosis and/or hepatocellular carcinoma [2,3]. Treatment efficacy is measured by sustained virological response (SVR)12 and is defined as undetectable plasma HCV RNA level after 12 weeks of therapy. Historically, when treating with interferon (IFN)-based therapy, achieving SVR has been highly associated with host immune factors including IL28B and IFNλ4 genotype [4,5]. Previous studies have described increased HCV treatment failure [6] and even treatment relapse after achieving SVR [7] in patients who received IFN therapy and subsequently underwent immunosuppression. This suggests an influence of immune mediators on treatment response and/ or the presence of residual or persistent reservoirs of HCV even after achieving SVR [8,9]. In a recent study, IFN and ribavirin-free regimens of combination directly acting antivirals (DAAs) have shown improved efficacy and tolerability compared with IFN-containing regimens, and they have become the standard of care for treatment of HCV genotype (GT)-1 [10]. However, few patients who received these new DAA agents have subsequently undergone immunosuppressing or immune-modulating therapies. Although immune-modulating and chemotherapeutic agents have been described to increase HCV load and rates of liver fibrosis progression [3], the impact of immunosuppression on HCV in patients who have already achieved SVR after treatment with DAAs is unknown [3]. As the number of patients with hepatitis C receiving host immune-suppressing or immune-modulating therapies during cancer treatment, organ transplantation, and autoimmune disease management grows, understanding the impact of these medications on concomitant HCV treatment outcomes has become important. This is especially true in patients with chronic HCV who receive liver transplantation (LT). 
Because immunosuppression after LT is a necessity, evaluating the use of DAAs to prevent recurrent HCV infection is imperative. This case series outlines the details of 3 patients who required immunosuppressive therapy shortly after completion of an oral DAA regimen for HCV. In this series, SVR was maintained for all 3 patients even after immunosuppressive treatment.

CASE SERIES
One hundred ten patients were enrolled in 2 National Institute of Allergy and Infectious Diseases (NIAID) clinical trials at the National Institutes of Health (NIH) evaluating DAA therapy alone for the treatment of HCV. One hundred seven (97%) of those patients achieved SVR12. Three patients were identified who received treatment with immunosuppressive or immunomodulatory agents after treatment completion. Both trials were conducted in accordance with the ethical standards of the Helsinki Declaration of the World Medical Association, and all patients signed NIH/NIAID Institutional Review Board-approved informed consent. Hepatitis C virus RNA was measured using the Abbott HCV RNA Assay with a published lower limit of detection of 12 IU/mL.

Case 1
A 50-year-old African American male with rheumatoid arthritis was diagnosed with HCV infection in 1998 during standard screening. He presented to the NIH in 2013 for participation in a phase 2 clinical trial investigating sofosbuvir and ledipasvir with GS-9451 for 6 weeks to treat chronic HCV infection. Hepatitis C virus GT during screening was 1a, with a viral load of 10,526,493 IU/mL. Liver biopsy in 2013 showed a histologic activity index (HAI) inflammation score of 9/18 and an ISHAK fibrosis score of 0/6 with trace steatosis. The pretreatment aspartate aminotransferase (AST) level was 32 U/L, and the alanine aminotransferase (ALT) level was 56 U/L. The patient was found to have unfavorable IL28B (rs12979860 [TT]) and IFNL4 (rs368234815 [ΔG/TT]) GTs. The patient had a history of severe rheumatoid arthritis, diagnosed 1.5 years prior, with bilateral hand, arm, and leg pain. He required hydroxychloroquine 200 mg twice daily and methylprednisolone 8 mg daily, as well as tramadol 50-100 mg daily and sulfasalazine 1000 mg twice daily for pain. He also had a history of a positive purified protein derivative skin test, for which he completed 9 months of isoniazid treatment. In addition, he had a history of using intravenous and intranasal heroin and intranasal cocaine from 1990 to 2007, but he had been substance free since that time. The patient completed 6 weeks of therapy with sofosbuvir/ledipasvir (400/90 mg combination pill) and GS-9451 (protease/NS3/4 inhibitor, 90 mg). Approximately 1 week after completion of therapy, the patient developed painful and swollen metacarpophalangeal and proximal interphalangeal joints as well as bilateral elbow and knee pain. The patient's rheumatologist recommended the addition of the tumor necrosis factor (TNF) inhibitor adalimumab. The effect of a biologic such as adalimumab on the risk of HCV relapse was unknown, so the decision was made, in consultation with the patient's rheumatologist, to defer adalimumab until after the post-treatment follow-up period. The patient achieved SVR12, with an AST level of 12 U/L and an ALT level of 29 U/L. At week 27 (21 weeks after completion of DAA HCV therapy), the patient was started on adalimumab 40 mg every 2 weeks for treatment of his rheumatoid arthritis.
Follow-up visits at week 30 (24 weeks after completing therapy) and 42 (36 weeks after completing therapy) revealed that despite the addition of adalimumab, his HCV viral load remained undetectable with normal ALT and AST. Case 2 A 57-year-old African American female was diagnosed with HCV infection in 2012 on routine screening. She presented to NIH in 2013 for participation in a phase 2 clinical trial investigating sofosbuvir and ledipasvir with GS-9669 for 6 weeks to treat chronic HCV infection. Screening laboratory test results showed infection with HCV GT-1a and an HCV viral load of 1 969 300 IU/mL. A liver biopsy in 2013 showed an HAI inflammation score of 6/18 and an ISHAK fibrosis score of 1/6 with minimal steatosis. Pretreatment AST level was 66 U/L, ALT was 59 U/L, and IL28B and IFNλ4 GT as determined by the rs12979860 (CT) and rs368234815 (ΔG/TT), respectively. The patient had a history of chronic obstructive pulmonary disease with an over 50-pack per year smoking history and was treated with advair, albuterol, and spiriva. She also had a history of well controlled hypertension, untreated depression with a history of delusions, and intravenous cocaine use approximately 20 years prior. She had been substance free since then. In addition, she had been treated for both active and latent tuberculosis, with 9 months and 6 months of therapy, respectively. The patient completed 6 weeks of HCV treatment with sofosbuvir/ledipasvir (400/90 mg combination pill) and GS-9669 (nonnucleoside NS5B inhibitor, 500 mg) without difficulty, and her HCV viral load declined on therapy, with normalization of ALT and AST (Figures 1-3). The patient had an undetectable HCV viral load by week 8 of the trial, and she achieved SVR12. At week 18 (12 weeks after completing therapy), the patient had worsening cough and difficulty lying flat. She was seen by her primary care doctor and given antibiotics and oral steroids, which improved her symptoms. Three months later, she had increased wheezing and completed another 5-day course of oral steroids. At week 33, the patient presented to the emergency department with shortness of breath and fever; she was diagnosed with pneumonia. Computed tomography of her chest also showed interval pleural and parenchymal changes concerning for malignancy. Four months later, an oncologist confirmed the diagnosis with a repeat computed tomography chest showing numerous large lesions in left lower lobe of the lung. These lesions were found to be small cell lung cancer on biopsy. The patient was started on chemotherapy (carboplatin and irinotecan) at week 53. A followup visit at week 54 continued to show undetectable HCV RNA viral load while on chemotherapy. The patient had progression of her lung cancer despite being on treatment, and she experienced worsening of her underlying depression with delusions. Despite multiple attempts, the patient was ultimately lost to follow up. Case 3 A 74-year-old white female tested positive for HCV infection in 1990. She presented to NIH in 2012 for participation in a phase 2 clinic trial investigating sofosbuvir with weight-based or lowdose ribavirin for 24 weeks to treat chronic HCV infection. During screening, she was found to be infected with HCV GT-1a with a viral load of 2 363 500 IU/mL. Liver biopsy in 2010 showed a HAI inflammation score of 10/18 and an ISHAK fibrosis score of 1/6 with minimal steatosis. Pretreatment AST and ALT were 176 U/L and 93 U/L, respectively. 
She had both unfavorable IL28B (rs12979860 [CT]) and IFNL4 (rs368234815 (ΔG/TT) GTs. The patient also had a history of right breast, triple-negative (Her2, estrogen, and progesterone receptors negative), stage IIIa adenocarcinoma diagnosed in 2009. She had completed extensive therapy including right lumpectomy with complete axillary node dissection, 6 cycles of taxotere and cytoxan chemotherapy, as well as breast and supraclavicular radiotherapy in 2010. She also had a history of myocardial infarction requiring stent placement in 2001, essential thrombocytopenia diagnosed in 2010 with positive Janus kinase-2 mutation, hypothyroidism, which was well controlled on levothyroxine, Crohn′s disease not requiring medication, hypertension, hyperlipidemia, and depression. The patient was randomized to treatment with sofosbuvir (400 mg daily) with low-dosed ribavirin arm (600 mg daily) for 24 weeks. By week 3 of treatment, the patient had undetectable HCV viral load. Between weeks 16 and 20, the patient complained of worsening fatigue and depression-causing her to miss 5 doses of study medication. Laboratory work was done, and she was found to be anemic with hemoglobin of 11.7 g/dL. Given her history of cardiac disease, her dose of ribavirin was decreased to 400 mg daily. Hepatitis C virus treatment was completed at week 24, and she reported improved mood and fatigue. At week 25, the patient complained of severe right lower quadrant pain. A computed tomography of the abdomen showed intestinal inflammation consistent with flare of the patient′s preexisting Crohn′s disease. In addition, mixed sclerotic and lytic lesions on the iliac bones and vertebral bodies were seen, suspicious for recurrent breast cancer with bony metastasis. Breast cancer was confirmed 2 months later by bone biopsy, and the patient was initiated on capecitabine at week 48 (24 weeks after completing HCV therapy). At this time, the patient had an HCV viral load that was undetectable (SVR24). The patient continued capecitabine for 2 months and was then switched to taxotere after developing a facial rash due to capecitabine. At weeks 60 and 72, after receiving chemotherapy, the patient had undetectable HCV RNA. The patient left the study at week 72, but she returned during week 113 for a final follow up and was found to have an undetectable HCV RNA viral load and normalized ALT and AST. DISCUSSION In this case series, 3 patients completed treatment with IFNsparing combinations of DAAs and then received immunosuppressive biologic or chemotherapeutic agents. There was no evidence of viral relapse at least 48 weeks after completion of DAA therapy. Although host immune factors influence relapse posttherapy with IFN-containing therapy [4,5], the durability of SVR despite immunosuppressive therapy suggests that immune factors do not play a role in maintaining SVR after treatment with IFN-sparing therapy. Standard therapy for HCV treatment has long included combinations of IFN and ribavirin. Although this treatment is efficacious, it can worsen autoimmune-mediated diseases [11], and its use is limited in medical and psychiatric conditions that often coexist in patients infected with HCV. In contrast, IFN-free options such as DAA regimens have fewer contraindications, no significant neuropsychiatric risk, and are much better tolerated [10]. 
Although correlates of immune system function, including host IL28B or IFNL4 GT, have been shown to predict treatment success to IFN-containing therapy [4,5], little is known about the effect of immunosuppression on SVR. Some studies have detected residual HCV RNA in blood components of patients who achieved SVR, suggesting the possibility of occult infections [8,9]. Hepatitis C virus relapse has been reported in patients who achieved SVR with IFN and ribavirin-containing therapy and were subsequently treated with immunosuppression [12]. A recent case series showed durability of SVR for a median of 96 months postchemotherapy in these patients treated with IFN and ribavirin who had previously achieved SVR [13]. There are several limitations to this study. First, the timing of immunosuppression was largely dictated by each patient's disease pathology. In case 1, the start of adalimumab was held until after the patient reached SVR12. This decision was made in conjunction with the patient′s rheumatologist because little is known about the effect of biologics on the risk of HCV relapse. In contrast, in cases 2 and 3, chemotherapy was initiated immediately after cancer was confirmed on biopsy. However, in both of these cases, the patients had achieved SVR12 and SVR24, respectively, before the start of immunosuppression. Other limitations include the lack of repeat viral load measures after immunosuppression, as seen in case 2. This is the first report illustrating that SVR after DAA therapy is not affected by immunosuppressive therapy. In these 3 patients treated with DAAs, SVR was maintained despite immunosuppression. All 3 of the cases had unfavorable HCV GT-1a, unfavorable host IFNL4 GT, high HCV viral loads at the start of the trial, and 2 of the 3 patients were African American. Despite these negative predictors of favorable outcome, all 3 cases were found to have undetectable viral load and normalized liver-related enzymes at the endpoint of their studies (SVR12 for cases 1 and 2, SVR24 for case 3). After achieving SVR, the patients continued to have undetectable viral load despite undergoing immunosuppression in the form of chemotherapy, steroids, or inhibition of TNF (Table 1). CONCLUSIONS As well tolerated DAA regimens are approved and their use becomes more widespread, treatment of a broader range of patients will occur. This includes patients on immunosuppressive therapy. Larger studies are needed to confirm the durability of HCV SVR in patients treated with DAAs who also receive immunosuppression. Pending validation in larger studies, it appears that SVR after DAA therapy is not compromised by the subsequent use of immunosuppressive medications. This is a significant finding for the large numbers of HCV-infected patients who may require future immunomodulatory or chemotherapeutic medications.
Duplicate Gene Expression and Possible Mechanisms of Paralog Retention During Bacterial Genome Expansion Abstract Gene duplication contributes to the evolution of expression and the origin of new genes, but the relative importance of different patterns of duplicate gene expression and mechanisms of retention remains debated and particularly poorly understood in bacteria. Here, we investigated gene expression patterns for two lab strains of the cyanobacterium Acaryochloris marina with expanding genomes that contain about 10-fold more gene duplicates compared with most bacteria. Strikingly, we observed a generally stoichiometric pattern of greater combined duplicate transcript dosage with increased gene copy number, in contrast to the prevalence of expression reduction reported for many eukaryotes. We conclude that increased transcript dosage is likely an important mechanism of initial duplicate retention in these bacteria and may persist over long periods of evolutionary time. However, we also observed that paralog expression can diverge rapidly, including possible functional partitioning, for which different copies were respectively more highly expressed in at least one condition. Divergence may be promoted by the physical separation of most Acaryochloris duplicates on different genetic elements. In addition, expression pattern for ancestrally shared duplicates could differ between strains, emphasizing that duplicate expression fate need not be deterministic. We further observed evidence for context-dependent transcript dosage, where the aggregate expression of duplicates was either greater or lower than their single-copy homolog depending on physiological state. Finally, we illustrate how these different expression patterns of duplicated genes impact Acaryochloris biology for the innovation of a novel light-harvesting apparatus and for the regulation of recA paralogs in response to environmental change. Introduction Gene duplication is an important mechanism of genome evolution (Andersson and Hughes 2009;Kondrashov 2012) and a major source of new genes (Ohno 1970).Although most duplicates are rapidly lost from genomes (Lynch and Conery 2000), gene duplicates may be retained by several mechanisms.For functionally redundant gene copies, this may involve either a beneficial increase in the number of transcripts (i.e.increased transcript dosage; Ohno 1970) or in the reduction of expression of individual duplicates to recover the ancestral dosage level (i.e.dosage-sharing; Papp et al. 2003;Qian et al. 2010;Birchler and Yang 2022).Alternatively, functional partitioning of duplicates may occur through either the evolution of new functions or expression patterns (neofunctionalization; Ohno 1970) or by the dividing of different ancestral functions between duplicates (subfunctionalization; Force et al. 1999). The relative importance of these different mechanisms of duplicate retention is still debated.This is particularly the case for duplicates that are not the product of wholegenome duplication, which can lead to transcript dosage imbalances that generally favor duplicate inactivation and loss (Papp et al. 2003).In mammals and yeast, paralogs are often retained through the sharing of ancestral levels of dosage in response to selection to avoid maladaptive stoichiometry following duplication (Qian et al. 2010;Lan and Pritchard 2016).For coregulated tandem gene duplicates, for example, this reduction of gene expression can arise by increased promoter methylation (Rodin and Riggs 2003;Weber et al. 
2007;Keller and Yi 2014).While functional partitioning of paralogs is not common in mammals, it is a more likely outcome for duplicates that do not occur in tandem, which can promote the independent evolution of duplicates (Lan and Pritchard 2016). In bacteria, increased transcript dosage of functionally redundant gene duplicates can contribute to adaptation to stressful environments (Sandegren and Andersson 2009;Kondrashov 2012).Still, most duplicates are quickly purged from bacterial genomes in the absence of selection to maintain them (Romero and Palacios 1997;Reams and Neidle 2003;Reams et al. 2010), despite a high frequency of gene duplication (Anderson and Roth 1977;Haack and Roth 1995;Reams et al. 2010).Consequently, bacterial genomes typically contain far fewer gene duplicates compared with those of eukaryotes (Lynch andConery 2000, 2003;Hooper and Berg 2003).As a result of this limited sample size, understanding the general importance of increased dosage for the long-term evolution of individual bacterial genomes compared with other mechanisms of duplicate retention has been elusive. To address this issue, we have taken an integrative approach to investigate gene expression and the potential mechanisms of duplicate retention in the genomes of two strains of the cyanobacterium Acaryochloris marina (Miyashita et al. 1996;Wood et al. 2002;Miller et al. 2005), which are notable for both their production of the far-red light absorbing Chlorophyll d as primary photosynthetic pigment (Swingley et al. 2008;Miller et al. 2011).The genomes of A. marina strains MBIC11017 and CCMEE 5410 are ∼35% larger than more basal A. marina lineages (8.4 and 8.1 Mb, respectively, vs. 5.8 to 6.1 Mb in A. marina strains MU03 and WB-4;Miller et al. 2022), with a similarly greater number of genes (e.g.8,528 in MBIC11017 vs. 6,366 in MU03); this is a consequence of recent genome expansion due in large part to gene duplication (Ulrich et al. 2021) Miller et al. 2011).This skewed distribution more closely resembles what has been observed for duplicates in eukaryote genomes (Lynch andConery 2000, 2003), rather than the uniform distribution of comparatively fewer duplicates reported for other bacteria (Hooper and Berg 2003).Acaryochloris marina genomes consist of a chromosome and a variable number of extrachromosomal plasmids (e.g. the MBIC11017 genome has nine ranging in size from ∼2 to 375 kb; Swingley et al. 2008).These are low-copy number plasmids for which replication is tightly regulated during the bacterial cell cycle to maintain a characteristic copy number (∼1:1 equivalency with the chromosome for most A. marina plasmids), and faithful plasmid segregation into daughter cells during cell division relies on specific partition mechanisms (Scott 1984).Most A. marina paralogs reside on different genetic elements, either on different plasmids or on the chromosome and a plasmid, respectively (Miller et al. 2011).In addition, except for a minority of the most recent duplicates, paralogs are experiencing strong purifying selection against protein change (Miller et al. 2011).Although the molecular mechanism(s) that underlie the gene duplication process in A. marina is unknown, the large number of transposable elements (specifically, insertion sequence [IS] elements) in these genomes suggests that transposition may be involved (Miller et al. 
2021).Acaryochloris genomes and transcriptomes therefore provide the power to resolve the respective contributions of different mechanisms to duplicate retention in bacteria, as well as whether these roles tend to change over time as paralogs age. Increased Transcript Dosage More Common Than Expression Reduction Following Gene Duplication The genomes of A. marina strains MBIC11017 and CCMEE 5410 vary in copy number for many genes due to both the differential retention of ancestral duplicates and their idiosyncratic histories of duplication events following divergence (Miller et al. 2011).While the A. marina MBIC11017 genome is closed, the A. marina CCMEE 5410 genome is a high-quality draft assembly of 23 contigs that appears to be complete with respect to gene content (Ulrich et al. 2021); however, there is still the potential for missed duplicates at contig breaks.To investigate the expression of duplicates in the respective genomes, we used RNAseq data collected for both strains in three physiological states: exponential growth, starvation, and recovery (Gallagher and Miller 2018).Depending on strain and condition, recent duplicates (d S < 2) accounted for ∼5% to 17% of protein-coding gene transcripts (supplementary table S1, Supplementary Material online).To account for ambiguously mapping reads (reads that map with identical matching scores on closely related paralogs), we developed a custom read-mapping pipeline (see Materials and Methods). We observed a strongly positive, generally stoichiometric relationship (i.e.slope of ∼1) between the ratio of gene copy number between strains and its expression level in a given genome (Fig. 1a; slope = 1.03, 95% CI = (0.94, 1.13); R 2 = 0.27; P < 0.0001; N = 1,194 for duplicate genes with d S < 5).For the most common gene copy ratio class (a duplicate pair in one genome and a single copy in the other), mean combined expression of duplicates closely matched the 2:1 ratio predicted for a doubling of expression (Fig. 1b).We conclude that increased transcript dosage of gene duplicates is more common in A. marina than has been observed in some eukaryotes, for which expression reduction is a more likely outcome (Qian et al. 2010;Lan and Pritchard 2016). Still, there was great variation in the expression response of individual duplicate pairs (Fig. 1a and b), and duplicates with different expression fates can be found within blocks of functionally related genes.For example, iron acquisition gene duplicates and novel gene content that are physically clustered on plasmid pREB1 of A. marina MBIC11017 (supplementary fig.S3, Supplementary Material online) are associated with faster iron uptake and growth under conditions of low iron availability (Gallagher and Miller 2018).The present analysis revealed cases of both increased transcript dosage and expression reduction among duplicated iron transporter and siderophore genes (supplementary fig.S3, Supplementary Material online).Moreover, recently duplicated feoAB paralogs (d S = 0.11), involved in the transport of ferrous iron, exhibited increased transcript dosage under some conditions but expression reduction following iron addition, compared with single-copy expression in strain CCMEE 5410 (supplementary fig.S3, Supplementary Material online). This result emphasizes the potential context-dependence of dosage benefits in different physiological states (or tissues) following duplication.Few A. marina duplicates are in tandem and most are on different genetic elements (∼3%; Miller et al. 
2011); this physical separation may promote the rapid evolution of such differential regulation, compared with tandem duplicates that physically share cis-regulatory machinery. Variation in expression responses among duplicate pairs may in part reflect duplicate age.Because A. marina duplicates are born with identical flanking DNA (or nearly so) to the parental copy (Miller et al. 2011; unpublished data), we may expect recent paralogs to be more likely to exhibit similar expression levels.This was indeed the case, particularly for duplicates with d S < 0.2 in MBIC11017 and d S < 0.1 in CCMEE 5410 (Fig. 2a).More recent duplicates also tended to be more lowly expressed (supplementary fig.S4, Supplementary Material online).This observed trend of lower expression of recent duplicates may generally reflect reduced selection for removal from the genome due to a low metabolic burden; however, in some cases, it also could be a selectively favored mechanism for overcoming the potentially deleterious stochastic fluctuations in the cellular levels of lowly expressed gene products by increasing average expression (Bar-Even et al. 2006).We also observed that equal expression of duplicates can persist over long periods of time (Fig. 2a), as has also been observed in other organisms (Lan and Pritchard 2016). Asymmetric expression of duplicates (i.e. for which one copy was the major expressed copy in at least two conditions and was never the minor copy) could evolve rapidly but was generally more common for more divergent paralogs (Fig. 2a).Increased transcript dosage could also involve the asymmetric expression of duplicates (16 duplicate pairs with d S < 2 in MBIC11017 and 14 pairs in CCMEE 5410), indicating that the regulation of expression resulting in a dosage increase could be more complicated than simply the equal expression of duplicates.In some cases (particularly among younger duplicates for CCMEE 5410), the minor copies of a duplicate pair were not expressed and are likely destined to be purged from the genome (supplementary fig.S5, Supplementary Material online); however, similar to what has been observed for humans (Lan and Pritchard 2016), on average the minor copy makes a meaningful contribution to expression in both strains (57% of major copy expression for MBIC11017% vs. 27% for CCMEE 5410 for duplicates with d S < 0.5).Furthermore, possible functional partitioning of duplicates (i.e.different copies were the major copy in at least one condition each) in response to these conditions was rare Duplicate Gene Expression and Retention Mechanisms over this time scale of divergence (Fig. 2a).Physical separation of duplicates on different genetic elements, therefore, did not appear to make neofunctionalization or subfunctionalization an intrinsically more likely outcome, as has been proposed for mammals (Lan and Pritchard 2016). 
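To make the expression categories used above concrete, the following is a minimal, illustrative Python sketch of how a duplicate pair might be binned as equally expressed, asymmetrically expressed, or possibly functionally partitioned from its per-condition expression values. The input format, the fold-change cut-off and the function names are hypothetical and are not taken from the authors' pipeline; only the TPM floor of 10 reflects a stated filter.

```python
# Illustrative only: bins a duplicate pair into the expression categories used above.
# Condition names, the fold-change cut-off and the decision rules are assumptions,
# not values taken from the study's pipeline.

CONDITIONS = ("growth", "starvation", "recovery")
FOLD_CUTOFF = 1.5   # hypothetical fold difference for calling a "major" copy
MIN_TPM = 10.0      # the study required a minimum TPM of 10 in all conditions

def major_copy(tpm_a, tpm_b):
    """Return 'A' or 'B' if one copy dominates by FOLD_CUTOFF, else None (a tie)."""
    if tpm_a >= FOLD_CUTOFF * tpm_b:
        return "A"
    if tpm_b >= FOLD_CUTOFF * tpm_a:
        return "B"
    return None

def classify_pair(tpm_a, tpm_b):
    """Classify one paralog pair from per-condition TPM dictionaries."""
    if any(tpm_a[c] < MIN_TPM or tpm_b[c] < MIN_TPM for c in CONDITIONS):
        return "low expression (excluded)"
    majors = [major_copy(tpm_a[c], tpm_b[c]) for c in CONDITIONS]
    called = {m for m in majors if m is not None}
    if len(called) == 2:
        # different copies are the major copy in at least one condition each
        return "possible functional partitioning"
    if len(called) == 1 and majors.count(next(iter(called))) >= 2:
        # one copy is the major copy in >= 2 conditions and is never the minor copy
        return "asymmetric expression"
    return "no difference"

if __name__ == "__main__":
    parental = {"growth": 220.0, "starvation": 35.0, "recovery": 180.0}
    daughter = {"growth": 60.0, "starvation": 140.0, "recovery": 55.0}
    print(classify_pair(parental, daughter))  # -> possible functional partitioning
```

The hypothetical values in the usage example mimic the phosphoenolpyruvate synthase case described below, where the parental copy dominates during growth and recovery and the daughter copy dominates during starvation.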
Although this overall pattern is broadly similar for both strains, at the level of the individual duplicate pair, we observed that the expression fates of ancestrally shared duplicates are not necessarily deterministic.Excluding transposable elements, there are 70 ancestrally shared duplicate pairs with d S < 5 that have been retained by both strains.Although most exhibit similar expression patterns between strains, there are several (N = 7) examples of differential regulation.For example, in CCMEE 5410, paralogs of the gluconeogenesis enzyme phosphoenolpyruvate synthase are potentially functionally partitioned, with parental copy transcription predominating during growth and recovery and the daughter plasmid copy more highly expressed during starvation; in MBIC11017, by contrast, the parental copy is the major copy under all conditions (Fig. 2b).The other six cases involve asymmetric expression in one strain and equal expression in the other.Further, for duplicates that are asymmetrically expressed in both strains, in five cases the identity of the major copy (e.g.chromosomal parental copy vs. plasmid daughter copy) differed between strains (supplementary table S2, Supplementary Material online). Below, we consider two cases of how the evolution of duplicate gene expression impacts Acaryochloris biology through its contributions to the origin of a new trait and to organismal response to environmental change, respectively. Evolution of a Novel A. marina Light-harvesting Apparatus Acaryochloris marina MBIC11017 plasmid pREB3 possesses cpc genes required to synthesize and assemble phycocyanin (PC), an accessory light-harvesting phycobiliprotein (Swingley et al. 2008).These genes were recently acquired by horizontal transfer, and many were subsequently duplicated (Fig. 3a; Ulrich et al. 2021).In addition, pREB3 has a 3 to 4× higher copy number compared with the chromosome and other plasmids of the A. marina MBIC11017 genome (Fig. 3b), indicative of more frequent replication during the bacterial cell cycle.We made a similar observation for both copy number and expression of plasmid p6 in CCMEE 5410 (supplementary fig.S6, Supplementary Material online).An increase in plasmid cellular copy number therefore represents an alternative mechanism of increasing gene copy dosage and expression in bacteria, along with gene duplication. PC is composed of heterodimers of α and β peptides (encoded by cpcAB) that aggregate to form a rod of hexamers (i.e. a trimer of heterodimers) in association with linker proteins CpcC and CpcD (MacColl 1998).Acaryochloris marina MBIC11017 produces a novel four-hexamer PC rod (Chen et al. 2009;Bar-Zvi et al. 2018;Liu et al. 2019) that efficiently transfers energy to photosystem II (Hu et al. 1999) and is anchored to the thylakoid membrane or the photosynthetic reaction center itself by CpcL via a C-terminal hydrophobic segment (Watanabe et al. 2014).Acaryochloris marina acquired two divergent copies each of cpcA and cpcB, all subsequently duplicated (Ulrich et al. 2021); divergent CpcA and CpcB paralogs can cooccur in a rod, and this structural heterogeneity may be responsible for its red-shifted fluorescence emission that facilitates energy transfer to Chl d (Bar-Zvi et al. 2018). Several PC genes exhibited asymmetric expression: cpcB1, cpcB2, and cpcD duplicate pair copies were regulated similarly (Fig. 3c; supplementary fig.S7, Supplementary Material online), with respective major copies located in close physical proximity (Fig. 
3a).As expected, these genes exhibited peak expression during growth.In addition, ycf27 paralogs C0086 and C0101 exhibited similar expression levels during growth, but C0086 is strongly induced during starvation and recovery (Fig. 3d).Ycf27 proteins are OmpR-family DNA binding response regulators; in Synechocystis PCC 6803, one of the functions of Ycf27 homologs RpaA and RpaB is to regulate the coupling and relative energy transfer between phycobiliproteins and the two photosystems (Ashby and Mullineaux 1999).The ycf27 paralogs on pREB3 belong to a gene family including rpaB (supplementary fig.S8, Supplementary Material online), and C0101 appears to be a recombinant between a chromosomal copy and a plasmid copy following duplication. Finally, we observed possible functional partitioning for paralogs of cpcL, two of which (C0092 and C0102) are respectively co-transcribed with ycf27 duplicates C0086 and C0101.While two of the copies exhibited similar expression and were responsible for the majority of transcripts during growth, the third (C0092) was the majority copy during starvation and recovery (Fig. 3e).While the latter copy more closely resembles other cyanobacterial CpcL proteins in both length and hydropathy, the former are distinguished by a hydrophilic, serine-rich insertion of more than 30 amino acids between the linker domain and the C-terminal hydrophobic tail (supplementary fig.S9, Supplementary Material online).Linker proteins impact both the structure and spectral properties of the lightharvesting apparatus (David et al. 2011); consequently, this shift in expression in response to changes in physiological state potentially alters the nature of the interaction between PC rods and the photosynthetic apparatus.Rods are physically attached to PSII in growing cells of A. marina MBIC11017 (Hu et al. 1999), whereas they preferentially associate with PSI in other CpcL-producing cyanobacteria (Kondo et al. 2005(Kondo et al. , 2007;;Watanabe et al. 2014).Therefore, one possibility is that the production of different CpcL proteins influences the tendency of PC to associate with different photosystems.Future work will seek to identify whether the observed divergence in expression of these cpcL genes, together with co-transcribed rpaB paralogs, impacts the distribution of energy transfer from PC to the different photosystems in different physiological states. Expression Divergence of recA Duplicates The bacterial recombinase RecA is a multifunctional protein involved in homologous recombination, DNA damage repair, activation of error-prone DNA polymerase activity, and the regulation of gene expression through its coprotease activity (Miller and Kokjohn 1990).Members of Acaryochloris are extraordinary for their number of paralogs of this archetypal "single-copy" gene (Swingley et al. 2008;Miller et al. 2011).Evolution of these proteins has been marked by bursts of positively selected amino acid substitutions (Miller et al. 2011), which suggests that some copies may have diverged in function.Some of these predate the split between A. marina and sister taxon A. thomasi RCC1774, which does not produce Chl d; by contrast, other, plasmid-borne duplicates are more recent and idiosyncratic to individual A. marina strains (Fig. 4a). We first addressed whether recA paralogs have retained recombinase activity.recA deletion mutants of E. coli exhibit a growth rate defect and chromosomal loss (Capaldo et al. 
1974;Skarstad and Boye 1993), which stems from the loss of recombinase activity required to repair stalled or collapsed replication forks that can arise during DNA replication (Cox et al. 2000).We introduced four MBIC11017 recA genes with CCMEE 5410 orthologs (the three chromosomal copies and plasmid copy B0414; Fig. 4a) into an E. coli strain with a recA deletion via a plasmid carrying a rhamnose-inducible promoter.In the presence of rhamnose, these either partially or fully complemented the recA deletion (supplementary fig.S10, Supplementary Material online), indicating that these paralogs have recombinase activity. Next, to investigate whether these recA paralogs have diverged in expression, we first examined their transcription patterns in our RNAseq data set.Comparing expression for starvation and recovery conditions with that of growing cells, we observed that orthologs of most copies were down-regulated in MBIC11017 during starvation and recovery and, conversely, up-regulated in CCMEE 5410 (Fig. 4b).By contrast, the basal copies in the phylogeny (Fig. 4a; orthologs recA 3550 in MBIC11017 and recA 4441 in CCMEE 5410) were more similarly expressed (Fig. 4b).The resulting differences between strains in the relative transcript abundance of paralogs may indicate divergence in paralog function and/or subtle differences in physiological state between strains. Our analyses of publicly available RNAseq data for strain MBIC11017 (Hernández-Prieto et al. 2016, 2018) corroborate expression divergence of the basal copy and the other paralogs.With the exception of recA 3550, copies were strongly up-regulated by hypoxia (Fig. 4c), which induces DNA damage, replication arrest and recA expression in mycobacteria (Gill et al. 2009;Gorna et al. 2010;Prasad et al. 2019); in addition, recA 3550 was uniquely down- regulated under hyperoxia, a condition expected to produce high levels of reactive oxygen species (ROS).Expression profiles of the other recA paralogs were similar overall (Fig. 4b and c), but recA A0092 alone exhibited decreased expression during a shift in light quality from white light (absorbed primarily by PC) to far-red light, which is absorbed directly by Chl d. Finally, we conducted qPCR assays for representative MBIC11017 recA paralogs in cells exposed to either UV radiation or hydrogen peroxide.In E. coli, induction of recA expression is a signature of the SOS response to DNA damage (Casaregola et al. 1982); however, recA is downregulated by UV radiation in the cyanobacteria that have been studied (Domain et al. 2004;Kolowrat et al. 2010).However, we found that recA B04014 was the only one of the tested copies with reduced expression in response to UV radiation (supplementary fig.S11, Supplementary Material online).We predicted that the ROS hydrogen peroxide would elicit a specific decline in expression of recA 3550, as observed for hyperoxia (Fig. 4c), which was the case (supplementary fig.S11, Supplementary Material online). Together, these results for several environmental conditions show that divergence of gene expression among both ancient (e.g.basal vs. other copies) and recent duplicates (A0092 and D0276) contribute to recA expression patterns in A. marina.Consequently, although all recA copies are constitutively expressed, the stoichiometry of different recA transcripts is highly dynamic in response to environmental change, as expected during potential specialization on different sub-functions.Future studies will aim to use in vitro assays with purified A. 
marina RecA proteins to better resolve the nature of possible functional divergence among paralogs. Concluding Remarks Case studies of both lab-evolved and naturally occurring bacteria have highlighted the adaptive potential of increased transcript dosage following gene duplication, but its general importance compared with other mechanisms of duplicate retention has remained unclear.For two strains of Acaryochloris with high loads of recent duplicates, we showed that increased duplicate transcript dosage is more prevalent than what has been observed in examined eukaryotic genomes, for which expression reduction appears to be the primary mechanism of initial duplicate retention.Many of these duplicates are ultimately purged from the genome (Miller et al. 2011); this could be for several reasons, including the transcript dosage imbalances that duplication can create, or changes in whether selection favors maintenance of more than a single copy of a gene.However, increased transcript dosage can persist for long of time in A. marina.Mean d S of orthologs between the two strains is ∼0.3; using Bayesian relaxed clock analyses, Sánchez-Baracaldo (2015) estimated this split to have occurred ∼46 MYA.Therefore, most duplicates in our data set have persisted for millions of years without deletion.By contrast, deletion rates for gene duplicates in bacteria have been estimated to be high in the absence of selection to maintain them (Reams and Neidle 2003;Reams et al. 2010).In addition, even the recent A. marina paralogs experience strong purifying selection (Miller et al. 2011).We consequently propose that increased transcript dosage may be an important mechanism of initial duplicate retention in these bacteria.Nonetheless, expression divergence of paralogs can also evolve quickly, including the emergence of possible functional partitioning through changes in the regulation of expression.Although rare, the latter can play an important role in Acaryochloris diversification, as illustrated by both the regulation of genes involved in the production of the light-harvesting phycobiliprotein phycocyanin and the differential expression of recA paralogs. Identification of Duplications and Grouping of Paralogs We used ParaHunter (Miller et al. 2022) to identify gene duplicates.All genes with > 50% amino acid identity and > 50% sequence length overlap were grouped together in clusters; most clusters were composed of two genes.Multiple sequence alignments were generated using Muscle (Edgar 2004), and, from these, codon alignments were made using PAL2NAL (Suyama et al. 2006).We then used CODEML from the PAML software package (Yang 2007) to estimate d S and d N values.Codon alignments yielding d S estimates greater than 5 were excluded from further analyses. 
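As a rough illustration of the alignment-to-d_S workflow just described, the sketch below chains MUSCLE, PAL2NAL and PAML's codeml for a single paralog pair. It assumes MUSCLE v3-style command-line flags and that pal2nal.pl and codeml are on the PATH; the file names, control-file settings and output-parsing regex are illustrative and may need adjustment for a given PAML version. This is a generic sketch of the approach, not the authors' ParaHunter pipeline.

```python
# Illustrative sketch: estimate pairwise dN/dS for one paralog pair with
# MUSCLE + PAL2NAL + codeml (PAML). Tool flags and output parsing are assumptions.
import re
import subprocess
from pathlib import Path

CTL_TEMPLATE = """\
seqfile = {codon}
outfile = {out}
noisy = 0
verbose = 0
runmode = -2      * pairwise comparison
seqtype = 1       * codon sequences
CodonFreq = 2     * F3x4
model = 0
NSsites = 0
icode = 0
fix_kappa = 0
kappa = 2
fix_omega = 0
omega = 0.4
"""

def pairwise_ds(prot_faa: str, cds_fna: str, workdir: str = "paml_run") -> dict:
    wd = Path(workdir)
    wd.mkdir(exist_ok=True)
    aln, codon, out = wd / "pair.aln", wd / "pair.codon", wd / "pair.mlc"
    # 1) protein alignment (MUSCLE v3-style flags assumed)
    subprocess.run(["muscle", "-in", prot_faa, "-out", str(aln)], check=True)
    # 2) back-translate to a codon alignment in PAML format
    with open(codon, "w") as fh:
        subprocess.run(["pal2nal.pl", str(aln), cds_fna, "-output", "paml"],
                       stdout=fh, check=True)
    # 3) run codeml in pairwise mode (runmode = -2)
    ctl = wd / "pair.ctl"
    ctl.write_text(CTL_TEMPLATE.format(codon=codon.name, out=out.name))
    subprocess.run(["codeml", ctl.name], cwd=wd, check=True)
    # 4) pull dN and dS from the result file (format may vary between PAML versions)
    match = re.search(r"dN\s*=\s*([\d.]+)\s+dS\s*=\s*([\d.]+)", out.read_text())
    if match is None:
        raise RuntimeError("could not parse dN/dS from codeml output")
    dn, ds = map(float, match.groups())
    return {"dN": dn, "dS": ds, "keep": ds < 5.0}  # the study excluded pairs with dS > 5

if __name__ == "__main__":
    print(pairwise_ds("pair_proteins.faa", "pair_cds.fna"))
```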
Estimation and Comparison of Expression Levels To account for RNAseq reads that potentially map well to more than one gene copy, we used a combination of Bowtie2 and BLASTN to discriminate between uniquely mapping reads and those that map equally well to more than one gene copy.Specifically, we used Bowtie2 to identify the total amount of reads mapping to each cluster of paralogous genes.Next, we used BLASTN to identify reads that match with 100% sequence identity to more than one gene in each cluster of paralogs; these ambiguously mapping reads were excluded from analysis.Expression values for each specific gene/paralog in a cluster (for paralog-vs.-paralogcomparisons) were not calculated in paralog clusters where more than 10% of the reads were removed due to ambiguity.Ambiguously mapping reads were included in the estimation of bulk-cluster gene expression levels (i.e. the total amount of transcripts generated from all paralogs of a specific gene) for estimating transcript dosage of paralogs. We quantified gene expression as Transcripts Per Million (TPM) to normalize for gene length and the sequencing depth of each RNA-sequenced library.We next used ANOVA to identify differential expression between paralog pairs and to detect interactions between paralog pairs across the three different experimental conditions.Paralogs with a significant interaction showed differences in expression across conditions (e.g.functional partitioning).To minimize data noise in paralog-versus-paralog comparisons, we required a minimum TPM of 10 in all three conditions.To minimize false positives during ANOVA testing, we required at least one read mapping to each of the paralogs in at least 2 of the 5 experimental replicates. A. marina recA experiments and phylogenetics For cloning of recA genes, we added a multiple cloning site to Addgene plasmid 40779 (resulting in plasmid pRHA) in order to have recA genes under the control of a rhamnose promoter.The multiple cloning site was introduced using gBlocks from New England Biolabs.We PCR-amplified four recA copies from A. marina strain MBIC11017 (AM1_3550, AM1_5031, AM1_5483, AM1_B0414) as well as the recA genes from E. coli MG1655 and Cyanothece sp.strain PCC 7425.Strains and primers used are listed in supplementary table S3, Supplementary Material online.All genes were cloned into the SpeI and NotI sites of pRHA that had been introduced with the primers.All clones were sequence verified.We introduce the empty vector (as a control) to the E. coli strains that either carry the deletion (NoRecA) or have an intact chromosomal copy of recA (ChrRecA).Each vector carrying a cloned recA copy was introduced into the rec-deletion strain (denoted as 3550, 5031, 5483, B0414, respectively, along with RecAe for E. coli recA). To verify recA expression, we performed RT-PCR for each cloned copy of recA after rhamnose induction.Cells grown overnight in LB broth with ampicillin and 0.2% glucose were inoculated into fresh LB with ampicillin and 0.2% rhamnose.After 3 h of growth at 37˚C with agitation, 1 mL of cells was pelleted and stored at −80˚C until further processing.A Qiagen RNeasy kit was used to extract RNA.We generated cDNA using Maxima First Strand cDNA synthesis kit (Thermo Scientific).We used primers designed specific to each recA copy for detection of each transcript. Growth of E. 
coli strains was measured in 96-well plates using a Synergy HT plate reader (BioTek).Each strain was grown overnight LB with ampicillin (100 µg mL −1 ) and 0.2% (w/v) glucose.These cultures were used at 5% (v/v) to inoculate wells (to a total volume of 200 µL) with LB with ampicillin and 0.2% rhamnose.These cells were grown at 37˚C for 4.5 h, and optical densities of wells were monitored at 600 nm.Doubling times were estimated for four biological replicates and three independent experiments for the exponential growth phase by: (T f -T 0 )*log2/ log(OD f /OD 0 ), where T f and T 0 correspond to the last point at which the cells were growing exponentially (determined by plotting the growth curve on a semi log plot) and the first point at which the cells entered exponential phase, respectively, and OD f and OD 0 correspond to the OD 600 reading at the T f and T 0 , respectively.To compare growth rates among strains, we performed t-tests with a Benjamini-Hochberg False Discovery Rate-adjusted P value of 5% estimated with JMP software version 16.2.0(SAS Institute Inc., Cary, NC). Acaryochloris marina MBIC11017 cultures were grown in FeMBG-11 medium (IOBG-11 supplemented with iron (III) monosodium salt) at 30 °C with continuous illumination from cool fluorescent lights at ∼20 µmol photons m −2 s −1 and mild agitation.All cultures were grown to an OD 750 of ∼0.15 to 0.20 in 300 mL and split in half to generate the control and experimental cultures.For the H 2 O 2 treatment, we exposed cells to 3 mM H 2 O 2 for one hour before harvesting cells.For the UV treatment, the cultures were exposed to 300 J m −2 using a BioRad GS Gene Linker, followed by 1 h recovery in the dark (to avoid photoreactivation) before harvesting the cells.For harvesting cells, we filtered cells onto 0.6 µm pore Isopore membrane filters (Millipore), followed by flash freezing.Total RNA was isolated using a Direct-zol RNA mini-prep kit (Zymo Research).We added a bead beating step with the kit's TRI-reagent before proceeding according to the manufacturer's protocol.Each prep was checked for genomic DNA contamination using PCR before proceeding to cDNA synthesis; any prep found to be contaminated was treated with additional DNase, cleanup, and another round of PCR.We generated cDNA using a Maxima First Strand cDNA Synthesis Kit with dsDNase (Thermo Scientific) and 50 ng of RNA.We measured relative expression by qPCR using a Stratagene Mx3000p (Agilent) and DyNAmo Flash SYBR Green qPCR kit (Thermo Scientific). MxPro QPCR software was used to calculate Ct values.We used the comparative Ct method to estimate relative expression levels (Livak and Schmittgen 2001;Schmittgen and Livak 2008).Each sample was normalized to the average expression of reference genes petB and ilvD.Normalized expression values for control and treatment samples were then used to estimate relative expression.All statistical analyses were performed using the R statistical environment on the raw ΔΔCt values before log transforming for fold-change calculation. FIG. 1 FIG. 1.-Increased transcript dosage predominates for A. marina gene duplicates (d S < 5 in both panels).a) Stoichiometric relationship between gene copy number in A. marina genomes and expression pooled for three experimental conditions (ratios are CCMEE 5410/MBIC11017).b) Gene expression ratios for singletons in both genomes (ratio is CCMEE 5410/MBIC11017), duplicate pairs in CCMEE 5410 and duplicate pairs in MBIC11017, respectively, during growth (G), starvation (S), and recovery (R). FIG. 2 FIG. 
2.-Expression responses of duplicate pairs.a) Number of duplicate pairs assigned to different expression categories as a function of duplicate age estimated as synonymous divergence: No difference in expression (blue), asymmetric expression (gray) or possible functional partitioning (yellow).b) The fates of ancestrally shared duplicates are not necessarily deterministic: PEP synthase expression is asymmetrically expressed in MBIC11017 but functionally partitioned in CCMEE 5410 during growth (G), starvation (S) and recovery (R).Parental chromosomal copy (green); daughter plasmid copy (blue).The respective pairs have experienced similar selective constraints: d N /d S of 0.054 (CCMEE 5410) and 0.060 (MBIC11017). FIG. 3.-Asymmetric gene expression and functional partitioning of duplicates for the novel A. marina MBIC11017 phycobilisome.a) Gene maps of A. marina MBIC11017 plasmid pREB3 regions containing duplicated genes involved in phycobiliprotein synthesis and its regulation.Filled circles next to selected genes are color-coded to indicate expression values in panels c-e and supplementary fig.S7, Supplementary Material online.b) Distribution of coverage depth for Illumina reads that uniquely map to the strain MBIC11017 reference chromosome (purple) or plasmid pREB3 (blue), respectively.Gene expression (uniquely mapped reads) during growth (G), starvation (S) and recovery (R) for duplicated copies of c) cpcB1, d) ycf27, and e) cpcL. FIG. 4 . FIG. 4.-Differential expression of recA paralogs.a) RecA amino acid phylogeny reconstructed with IQtree by maximum likelihood using a LG + R3 model of sequence evolution.MBIC11017 and CCMEE 5410 sequences are in purple and green, respectively, and plasmid copies are in the gray box.R-1774 sequences are from the A. thomasi RCC1774 genome.Ultrafast bootstrap support greater than 50% for 1,000 bootstrap replicates is indicated at bifurcations.The tree was outgroup-rooted with sequences for Cyanothece PCC 7425, Geitlerinema PCC 7407 and Microcoleus FACHB-672.Scale bar is 0.05 amino acid substitutions per site.CCMEE 5410 copy 7214 is interrupted by a IS256 family transposase; although unresolved in the phylogeny, it appears to be recent duplicate (following the split with MBIC11017, rather than an ancestrally shared ortholog of MBIC11017 copy E0124) based on its gene order conservation with CCMEE 5410 copy 8051 and MBIC11017 copy B0414.b) Differential expression heat map of A. marina MBIC11017 and CCMEE 5410 recA paralogs for starvation and recovery conditions compared with growing cells; c) Differential expression heat map of A. marina MBIC11017 recA paralogs for differences in light quality and oxygen availability.Light quality is the difference in expression in far-red light versus white light (data from Hernández-Prieto et al. 2018); microoxia and hyperoxia are compared with normoxia (data from Hernández-Prieto et al. 2016). . MBIC11017 and CCMEE 5410 have 796 and 730 duplicate pairs with d S < 5, respectively; by comparison, most bacterial genomes have far fewer duplicates (mean = 102 pairs for a random sample of ∼2,400 bacterial genomes; median = 58; supplementary fig.S1, Supplementary Material online).Duplicated genes in A. marina tend to be comparatively recent: frequency distributions based on synonymous nucleotide divergence (d S ) are skewed toward the youngest age classes, with a majority of duplicate pairs having d S < 1 (supplementary fig.S2, Supplementary Material online;
An investigation on the suitability of hydrated building lime from travertine limestone outcrop of Bogongo, South West of Cameroon In the present study, physico-chemical investigations were carried out on hydrated lime produced from the limestone of the travertine outcrop of Bogongo in the South West Region of Cameroon. The aim was to evaluate the suitability of that hydrated lime as building lime. The raw material was characterized and then fired at 900 °C. The fired product was hydrated, dried and also characterized. Chemical and mineralogical analyses, density, BET specific surface measurements and thermal analyses were performed. Results were compared to those for an EN 459-1 CL 90-S industrial commercial hydrated lime. It has been shown that hydrated lime production using the raw material from the Bogongo travertine could yield products with physico-chemical properties similar to those of imported CL 90-S hydrated lime, which could have positive consequences for the commercial exploitation of the Bogongo travertine limestone outcrop. Introduction Lime is a material that has been used as an essential binder for the production of mortars and plasters used in structures of different civilizations since antiquity [1]. Nowadays, the world consumption of lime is driven by the growing demand in chemical, industrial, metallurgical, environmental and construction applications [2]. Lime also has prominent applications in soil stabilization, coatings and lightweight insulation materials [3][4][5]. According to the BS EN 459-1-2015 standard, lime refers to calcium oxide or hydroxide, or calcium-magnesium oxide or hydroxide, produced by calcination of naturally occurring calcium carbonate such as limestone, chalk and shells, or naturally occurring magnesium carbonate like dolomitic limestone and dolomite [6][7][8]. Although limestone deposits can be found in many parts of the world, only a small portion is pure enough for industrial lime manufacturing. Depending on the purity of the raw material, there are two families of lime: the first is air lime, which combines and hardens with atmospheric carbon dioxide, and the second is hydraulic lime, which sets and hardens by reaction with water. Air lime is divided into two sub-families: calcium lime (CL) and dolomitic lime (DL). Quicklime is an air lime in the form of calcium oxide that reacts exothermically after contact with moisture or water, whereas hydrated lime is obtained by the controlled slaking of quicklime. It can be available in powder (S), putty (S PL) or slurry milk (S ML) form. Based on the BS EN 459-1-2015 standard, calcium and dolomitic lime types and their chemical (calculated on the basis of quicklime) and physical requirements are indicated in Table 1.
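As a simple illustration of how the chemical requirements summarized in Table 1 are applied later in this work, the snippet below assigns a calcium air lime to a CL class from the sum of its CaO and MgO contents. The 90/80/70 thresholds follow EN 459-1; the function name and example composition are purely illustrative, and the other requirements of the standard (MgO, CO2 and SO3 limits, physical properties) are not checked here.

```python
# Illustrative sketch: assign a calcium lime (CL) class from CaO + MgO content
# expressed on the quicklime basis, as in EN 459-1. Secondary chemical and
# physical requirements of the standard are deliberately ignored.

def cl_class(cao_pct: float, mgo_pct: float) -> str:
    total = cao_pct + mgo_pct
    if total >= 90.0:
        return "CL 90"
    if total >= 80.0:
        return "CL 80"
    if total >= 70.0:
        return "CL 70"
    return "outside the CL classes"

if __name__ == "__main__":
    # hypothetical composition, not a measured value from this study
    print(cl_class(cao_pct=92.5, mgo_pct=1.1))  # -> CL 90
```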
Prior to the invention of Portland cement, lime had been used in mortar, external and internal plastering and rendering, foundations, flooring, infilling of walls and casing of water conduits for the construction of several historic buildings in all the continents of the World [9,10].With the discovery of Portland cement and the explosion of its use since World War II, the use of building lime was reduced to repair work.Since the beginning of the 21 century, lime as a building material is regaining interest in the scientific and technical community.The concern about the high amount of carbon dioxide (CO 2 ), a major greenhouse gas responsible of global warming, released by the lime industry has been mitigated by various capture techniques that can transform the CO 2 to raw materials for the production of energy [11].Numerous advantages of building lime have been documented compared to Portland cement.The less energy consumption during its production and the recovery of lime in masonry units after demolition places lime well ahead of Portland cement as a sustainable material [12,13].From limestone to the hardened product after contact with air, the transformation process of lime is in the form of a cycle commonly referred to as « lime cycle » (Fig. 1).The lime cycle is a slow and gradual process that increases the hardening of the material and favors the self-healing of cracks, improves its durability, provides breathing property of the building and frees the indoor environment from moisture with benefit of improved humidity regulation and pollutants removal [13].Also, the strength in lime based mortars or plaster has been improved with pozzolanic additives to produce cement-like material [14,15]. Travertine is a form of limestone deposited by mineral springs, especially hot springs.It is formed by rapid precipitation of calcium carbonate (CaCO 3 ).Sedimentological, pretrological and geochemical studies conducted on the Bogongo travertine deposits, situated in the South-West Region of Cameroon [16,17] showed that the material consisted of dense and hard limestone concretions with high calcite (CaCO 3 ) content (98.2 to 99.8 %), associated with detrital quartz (0.2 to 0.8 %).No existing study has been conducted to assess the suitability of the limestone from the Bogongo travertine for building lime production.Thus, the objective of the present research is to investigate the physico-chemical properties of hydrated lime obtained by calcining a sample of limestone from the Bogongo travertine deposit at 900 C, and compare the results with those of an industrial commercial hydrated lime of BS EN 459-1 CL 90-S type.This involved chemical and mineralogical characterizations, density, BET specific surface measurements and thermal analyses of hydrated lime. Materials The materials used in the present work were a limestone sample (EKD) from the Bogongo travertine deposit in the South-West Region of Cameroon as indicated in Fig. 
2, an industrial hydrated lime EN 459-1 CL 90-S type produced by SB-Mercier in France and distilled water 2.2.Methods Preparation of test specimens The limestone sample was crushed to 500 mm using an impact mill and fired at a temperature rise rate of 10 C/minute in a laboratory Nabertherm electric oven at 900 C inside a crucible and soaking time of two hours.This production process was chosen to optimize the firing of limestone at a laboratory scale.It differs from the industrial process where vertical and rotary kilns make use of limestone pebbles with particles size between 30 to 100 mm.As soon as the cooling started, the crucible was removed from the oven and left to continue to cool in a desiccator to avoid the contact of lime with the atmospheric carbon dioxide.Excess distilled water (60% in weight of fired material) was then poured on the contents and mixed to form "lime putty" which was left to mature for 24 hours in a container having a lid to avoid contact with CO 2 from air.The "lime putty" was then dried in an oven for 24 hours at 110 C and thereafter crushed for 30 minutes in a ball mill to obtain the laboratory made hydrated lime powder (CHEKD) for characterizations. Physico-chemical characterizations In order to characterize the various materials used in the study, the following techniques were used: Inductively Coupled Plasma (ICP) analyses The Inductively Coupled Plasma (ICP) spectrometry method was used to chemically characterize the limestone sample (EKD), the commercial hydrated lime (CHX) and the laboratory made hydrated lime (CHEKD).The device was an Atomic Emission Spectrometer using induced argon plasma (ICP-AES).Samples were dissolved into solutions before analyses.Thus, 19 to 27 mg of dry powder sample having particle size less than 80 microns was introduced into the reactors made of Teflon tubes.Then 3 mL of hydrofluoric acid (HF) and 9 mL hydrochloric acid (HCl) were added.The reactors were then mounted in a CEM MARS 5 microwave oven equipped with pressure and temperature sensors.The reactors were heated to 180 C at a temperature rise rate of 12 C/minute using a soaking time of 20 minutes and pressure of 30 bars to allow the complete dissolution of the material.At the end, the liquid of the reactors was recovered in a 250 mL plastic flask.The rinsing of each reactor and its accessories made it possible to increase the volume of the solutions to 250 mL. For analyses, a small volume of the solution was introduced into the argon plasma (temperature of about 4200 to 6000 K).The passage of the solution in the hottest region of the plasma allowed the excitation of the electrons of the peripheral layers of the elements in the solution, which then left the hot zone and returned to their ground state by emitting a photon whose wavelength characterized the original element.This wavelength was then oriented towards a sensor that made it possible to identify the elements and then to quantify them by means of calibration curves previously produced.The measurements were made at a constant temperature of 23 C. 
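Since element quantification in ICP-AES rests on previously produced calibration curves, the short sketch below shows the underlying arithmetic: fit a straight line of emission intensity against the concentrations of calibration standards, then invert it for an unknown sample. The intensities, concentrations and dilution handling are hypothetical values chosen only to illustrate the principle, not data from this study.

```python
# Illustrative only: calibration-curve quantification as used in ICP-AES.
# The standard concentrations and intensities below are made-up numbers.
import numpy as np

# calibration standards for one emission line, concentration (mg/L) vs. counts
conc_std = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
intensity_std = np.array([120.0, 1530.0, 2980.0, 7310.0, 14660.0])

# least-squares straight line: intensity = slope * concentration + intercept
slope, intercept = np.polyfit(conc_std, intensity_std, 1)

def concentration(intensity: float, dilution_factor: float = 1.0) -> float:
    """Convert a measured intensity back to a concentration in the original solution."""
    return (intensity - intercept) / slope * dilution_factor

if __name__ == "__main__":
    # an unknown measured at 5 900 counts after a 10-fold dilution
    print(round(concentration(5900.0, dilution_factor=10.0), 1), "mg/L")
```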
X-Ray Diffraction (XRD) test The two starting materials (EKD and CHX) were subjected to the XRD test that allowed determining the crystalline phases present in the tested specimens.The equipment used was a Debye-Scherrer assembly equipped with a localized curved detector (INEL CPS 120 -Curved Position Detector) at the center of which the sample was placed.The X-ray source operated at 40 kV and 30 mA while the monochromatic radiation used had a wavelength of 1.540598 Å (copper Kα 1 ).The sample holder of the apparatus had two rotational movements and a translational movement.One of these rotations was to adjust the angle of incidence α of the beam while the other, controlled by a motor, was to rotate the sample during data acquisition around a perpendicular axis to its surface, thus ensuring a random distribution of the orientation of the crystallites.The translational movement helped to position the sample so that its surface intercepts the incident X-Ray beam on the axis of rotation of the assembly.The photons were diffracted in an angular range 2Ө of about 120 but during the exploitation of the curves 2Ө = 65 was considered. Absolute density measurements The absolute density is the ratio of the dry mass of sample to its actual volume of solid matter in a porous body.The measurements of the absolute density were carried out according to ISO 5018: 1983 [18] standard.The equipment used was an Accupyc1330 helium pycnometer.The determination of the volume of the sample (V ech ) makes it possible to calculate its density.By applying the Mariotte's law (equation 1), the principle of the technique is based on the measurement of the pressure P 1 which prevails in a calibrated chamber and the pressure P 2 of the cell which contains the sample.The cell volume (V cell ) and the expansion volume (V exp ) are constants given by the manufacturer.After calibrating the pycnometer with a steel ball of known mass and volume, a small amount of sample (dried at 110 C for 24 hours) was placed in a cell and then introduced into the apparatus.The helium pressure was set at 1.8 bars and the device automatically performed five volume measurements of particles and calculated the average volume (in cm 3 ).Using the dry mass (in grams) of the sample initially recorded in the apparatus, the average density was established and displayed in the device in g/cm 3 .BET specific surface measurement The specific surface is one of the properties that govern the reactivity of powders.The measurements were performed on CHX and CHEKD samples.The experimental determination of the BET specific surface is based on the principle of nitrogen adsorption at low temperature in materials.From a quantity of absorbate, the size of the adsorbed molecules and their possibilities of arrangement, it was possible to evaluate the surface on which the adsorbate molecules were fixed using the so-called Brunauer, Emmett and Teller (BET) calculation model.The BET method required a pre-treatment of the samples (degassing and drying between 150 and 300 C in order to evacuate the gases previously adsorbed).The apparatus that was used included a Desorb degassing device 2300 with three stations under nitrogen sweep and a Flowsorb II 2300 measuring device under a nitrogen/helium mixture also having a degassing station.Grains of powders previously dried for 24 hours in an oven at 110 C were introduced into cells and fixed at the degassing stations of the apparatus.The degassing was then carried out for 2 hours at 200 C. 
Once the degassing was completed, the cells were passed one after another to the previously calibrated measuring station to measure the amount of gas adsorbed and to determine the surface of the powder. During the measurement, the cell was immersed in liquid nitrogen and the gas was introduced into the cell. The gas was adsorbed progressively until saturation of the grains occurred. When the equipment indicated a stable surface for the grains, the sample was removed from the liquid nitrogen, the gas gradually desorbed from the grains and a new equilibrium was established. The equipment then indicated another surface for the grains, which was retained as S in m². At the end of the process, the dry mass (m) in grams of the sample was determined, and the ratio between S and m gave the BET specific surface of the grains in m²/g.

Thermal analysis

The thermogravimetric analysis (TGA) revealed and quantified the loss of material weight as a function of temperature. This technique was applied to the EKD, CHX and CHEKD samples. The device used was a Linseis model. About 140 mg of sample was introduced into an alumina crucible and placed in the heating block of the apparatus. Once the device was turned on, the weight loss as a function of temperature up to 1000 °C was recorded and stored in the computer database system. A "blank test" was performed under the same conditions to establish the baseline. Differential thermal analysis (DTA) was also performed on sample EKD using DTA-TG Setsys 2400 equipment, because the energy peaks displayed during thermal treatment of the raw material can give an indication of the optimal firing temperature of the raw material.

Densities and BET specific surfaces

The results of the density and BET specific surface measurements are shown in Table 2. The high density (2.813 g/cm³) of EKD compared to ordinary limestone (1.55 to 2.75 g/cm³) could be an indication of the high compactness of the Bogongo travertine. The density of CHEKD is slightly greater than that of CHX. This difference could be explained by the difference in their chemical compositions, with CHEKD having higher SiO2, Al2O3 and TiO2 contents than CHX. Nevertheless, the densities of the two hydrated limes are in the range of densities for common hydrated limes, which are typically between 2.2 and 2.4 g/cm³. The grinding time applied to CHEKD permitted the production of fine particles with a specific surface of 21.18 m²/g, higher than that of the industrial hydrated lime (CHX), which was 19.12 m²/g. The specific surface of hydrated lime is mostly affected by the type of limestone used and its calcination and slaking processes [19]. In industry, slurry detention, paste, ball mill and batch slakers produce hydrated lime without the need for a grinding process [19,20]. The typical specific surface of hydrated lime ranges from 8 to 58 m²/g.
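For readers who wish to retrace the absolute density calculation, the sketch below shows how a sample volume follows from the two pressures described in the methods. It assumes the usual Boyle-Mariotte gas-expansion relation, since equation 1 is not reproduced in this excerpt, and every number in it is a hypothetical value rather than a measurement from the study.

```python
# Minimal sketch of the gas-pycnometry calculation, assuming the standard
# gas-expansion relation P1*(Vcell - Vs) = P2*(Vcell - Vs + Vexp), i.e.
# Vs = Vcell - Vexp / (P1/P2 - 1). All numbers are hypothetical placeholders.

def sample_volume_cm3(p1, p2, v_cell, v_exp):
    """Sample volume from the pressure drop when the calibrated cell (v_cell)
    is vented into the expansion volume (v_exp)."""
    return v_cell - v_exp / (p1 / p2 - 1.0)

v_cell, v_exp = 11.0, 8.0   # cm^3, constants set at calibration (placeholders)
p1, p2 = 1.80, 1.00         # bar, pressures before and after expansion (placeholders)
dry_mass_g = 2.30           # g, dry sample mass entered into the device (placeholder)

v_s = sample_volume_cm3(p1, p2, v_cell, v_exp)
print(f"absolute density = {dry_mass_g / v_s:.3f} g/cm^3")
```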
Chemical and mineralogical analysis

The results of the chemical analysis (Table 3) indicated that EKD, CHX and CHEKD mainly consisted of CaO as the major oxide, with minor oxides made up of SiO2, Al2O3, Fe2O3 and MgO. Sample CHEKD contained slightly more SiO2 than CHX, but CHX had more MgO than CHEKD. A small quantity of TiO2 was detected in CHEKD. The presence of that oxide could be responsible for the yellowish color presented by CHEKD compared to the whitish color of CHX. Both CHX and CHEKD presented a roughly lower loss on ignition compared to EKD. This can mainly be explained by the decomposition of Ca(OH)2 to form CaO and H2O during the thermal treatment, while the CaCO3 in EKD is transformed to CaO and CO2 during heating. The heating process at 1000 °C is responsible for about 44% weight loss in pure CaCO3 (equation 2). In the case of samples CHX and CHEKD, the major contribution to the weight loss on ignition comes from the release of water molecules during the heating of Ca(OH)2 (equation 3). The loss on ignition of CHX (29.34%) is slightly greater than the 29% expected if the hydrated lime were pure. The difference could be explained by the presence of a small proportion of calcium carbonate (CaCO3) produced by the reaction of a small part of the Ca(OH)2 with the carbon dioxide of the air, or from unburned limestone particles. The CaO, SiO2, Al2O3, Fe2O3 and MgO contents of limes are essential for their reactivity and their classification as building limes. They can be considered as hydraulic or non-hydraulic depending on their Hydraulic Index (HI), which is calculated using equation 4. The HI calculation applied to CHX and CHEKD gave 0.037 and 0.046 respectively and suggested that the two hydrated lime samples are non-hydraulic, since their HI values were less than 0.1 [21]. The sum of CaO, MgO and loss on ignition was 96.59 for CHX and 93.01 for CHEKD. These values were greater than 90, further confirming the two hydrated limes as CL 90 S types according to the standard BS EN 459-1 [6].

The results of the XRD test for CHX and EKD are shown in Figs. 3 and 4, respectively. In the case of CHX, calcium hydroxide and quartz were identified as the minerals present. Sample EKD was essentially made of calcite (CaCO3) and quartz (SiO2). After the thermal treatment at 900 °C, it can be assumed that the product will mainly consist of CaO, explaining the higher loss on ignition of EKD. The other minor mineral phases, including those containing Al2O3, Fe2O3 and MgO, were present in quantities too small to produce significant and exploitable XRD peaks.

Thermal properties

The thermal profiles for samples EKD, CHX and CHEKD are shown in Figs. 5 and 6. Fig. 5 not only illustrates the thermal decomposition of the phases present in the limestone, but also allows estimation of the temperature range over which the material must be calcined to convert its calcium carbonate (CaCO3) to quicklime (CaO), as indicated by the endothermic reaction that started at about 700 °C and finished at about 910 °C, with the optimum decomposition of the material at 883.6 °C.
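Before moving on to the thermal results, the sketch below makes the classification arithmetic above concrete. The oxide contents are illustrative placeholders chosen only so that the printed HI values land near the reported 0.037 and 0.046; they are not the measured compositions of Table 3, and the HI expression used is one common definition, adopted here because equation 4 is not reproduced in this excerpt.

```python
# Illustrative lime-classification arithmetic, assuming
# HI = (SiO2 + Al2O3 + Fe2O3) / (CaO + MgO); all wt% values are placeholders.

def hydraulic_index(ox):
    return (ox["SiO2"] + ox["Al2O3"] + ox["Fe2O3"]) / (ox["CaO"] + ox["MgO"])

samples = {
    "CHX":   {"CaO": 66.5, "MgO": 0.8, "SiO2": 1.7, "Al2O3": 0.5, "Fe2O3": 0.3, "LOI": 29.3},
    "CHEKD": {"CaO": 67.5, "MgO": 0.4, "SiO2": 2.3, "Al2O3": 0.5, "Fe2O3": 0.3, "LOI": 25.1},
}

for name, ox in samples.items():
    hi = hydraulic_index(ox)
    cl90_sum = ox["CaO"] + ox["MgO"] + ox["LOI"]
    verdict = "non-hydraulic" if hi < 0.1 else "hydraulic"
    status = "meets" if cl90_sum >= 90 else "fails"
    print(f"{name}: HI = {hi:.3f} ({verdict}); "
          f"CaO + MgO + LOI = {cl90_sum:.1f} ({status} the CL 90 threshold)")
```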
On the thermogravimetric curve of EKD, the maximum weight loss was 41.3%, mainly due to the decomposition of CaCO3 [22][23][24]. This result confirmed that sample EKD essentially consisted of CaCO3. Consequently, by comparing this with the weight loss of pure CaCO3 calculated according to reaction (2), the percentage of CaCO3 in EKD can be estimated at 94%, if it is assumed that the contribution to the weight loss was essentially due to the decomposition of CaCO3. This percentage is greater than that indicated by Bayiha et al. [25], but lower than that estimated by Tchouatcha et al. [16] or Bisse et al. [17]. The proportion of impurities such as quartz (SiO2) in the limestone sample can then be estimated at 6%.

The TG curves of the CHX and CHEKD samples are shown in Fig. 6. The weight loss that started at about 300 °C and ended at about 500 °C, amounting to about 22%, was due to the de-hydroxylation of the hydrated lime through the reaction illustrated by equation 3. The second weight loss (between 500 °C and 700 °C), of about 25%, can be attributed to the decomposition of impurities in the form of magnesium hydroxide, residual unburnt calcium carbonate from EKD, or carbonate formed in the calcined material after its contact with the carbon dioxide of the air [26]. The thermogravimetric results confirmed the similarity between the industrial hydrated lime and that synthesized from EKD. It is then clear that hydrated lime production using raw material from the Bogongo travertine could yield products with almost similar physico-chemical properties to imported CL 90 S hydrated lime. This outcome would have beneficial socio-economic dynamics in the Bogongo region and neighboring communities. For industrial production of the lime, additional properties such as available lime and soundness according to the BS EN 459-2:2010 standard should be tested for conformity to the BS EN 459-1 standard.

Conclusion

From the investigations carried out on the physico-chemical suitability of hydrated lime from the Bogongo travertine as building lime, the following conclusions can be made:
The limestone of the travertine contained about 94% CaCO3 and 6% impurities, mainly consisting of quartz;
The production of hydrated lime using the raw material from the Bogongo travertine could yield products with almost similar physico-chemical properties to imported CL 90 S hydrated lime;
The density measurement of the travertine sample suggested a very dense and compact material compared to ordinary limestone;
The Bogongo limestone should be fired between 550 °C and 910 °C, with the optimum decomposition of the material being at about 880 °C;
The TiO2 content of the Bogongo limestone could contribute to the yellowish color of the hydrated lime produced from it;
There is potential for socio-economic development of the Bogongo region through commercial activities in the production of building lime from the Bogongo travertine outcrop.

Declaration of Competing Interest

There is no conflict of interest.

Table 1. Chemical requirements for some building limes according to BS EN 459-1:2015.
Table 2. Absolute densities and BET specific surfaces of samples.
Table 3. Major oxides in the studied materials (nd = not detected; L.O.I. = loss on ignition).
Metacognitive Listening Strategies Used by Saudi EFL Medical Students

The present study investigated the metacognitive listening strategies used by Saudi EFL medical students. The participants were 104 males and females, randomly selected to fill in the Metacognitive Awareness Listening Questionnaire (MALQ), developed and validated by Vandergrift, Goh, Mareschal, and Tafaghodtari (2006). The results revealed that participants use problem-solving and directed attention strategies more frequently than other metacognitive listening strategies. On the other hand, mental translation and personal knowledge strategies were reported to be the least used strategies. The pedagogical implications of these findings are discussed.

Introduction

Listening is an indispensable skill that develops faster than speaking and often affects the development of reading and writing abilities in learning a new language (Oxford, 1993; Scarcella & Oxford, 1992). This is because one receives input through listening to instructions or explanations prior to responding orally or in writing. Acquiring listening skills is not an easy task, since listeners are required to figure out the meaning of the oral input by relying on their prior knowledge of the world and of the target language (Byrnes, 1984; Nagle & Sanders, 1986; Young, 1997) and to retrieve information from their long-term memory and make their own interpretations of the spoken passages (Mendelsohn, 1994; Murphy, 1985; Young, 1997). Vandergrift (2003) declares that listening is a complicated and active process of interpretation in which listeners try to match what they hear with their prior knowledge. However, for L2 learners this is a complex process, since their memory has a limited capacity in the target language (Richards, 1983); hence they are required to employ various listening strategies. These strategies, which have been developed on the basis of O'Malley and Chamot's (1990) learning strategies classification and are classified as cognitive, metacognitive, and socio-affective strategies, are steps taken to contribute to learners' acquisition, storage, retrieval, and use of information. Metacognitive strategies are utilised by L2 learners to increase their comprehension and L2 retention, and include planning, monitoring, evaluating and problem-solving; cognitive strategies are employed by listeners to cope with the material they learn or to apply specific techniques, such as repeating, imagery, inferencing, deduction, note-taking, elaboration, and translation. Socio-affective strategies are used by L2 learners to cooperate with their classmates, to ask the teacher for more clarification, or to apply specific techniques to decrease anxiety (O'Malley, Chamot, & Kupper, 1989; Vandergrift, 1997).
Listening skills have been neglected by teachers and researchers (Field, 2008; Oxford, 1993). Nunan (2002) argues that listening is treated as a secondary skill and as a means to an end, rather than an end in itself. According to Graham (2003), listening in a foreign language is a complex but underestimated skill. It was not until the 1960s, which witnessed an emphasis on oral language skills, that listening was given a boost (Nunan, 2002: 238). As listening plays an indispensable role in the development of speaking skills and of meaningful mental representations in the target language, researchers need to pay much more attention to the listening skill. Moreover, the learning style that a listener prefers for solving problems is closely related to the conscious actions taken by the individual (Ehrman et al., 2003; Flowerdew & Miller, 2005). The reason why the focus of this study is particularly on metacognitive listening strategies, rather than on any other type of strategy, is that this group is believed to play a vital role in facilitating language learning, for they 'oversee, regulate, or direct the language learning process' (Vandergrift, 1999: 170).

Although listening strategies play a significant role in the development of foreign language proficiency, very limited studies have investigated the use of metacognitive listening strategies in the Saudi context. Therefore, this study aims to investigate the metacognitive listening strategies used by Saudi EFL medical students. As far as the researcher is able to ascertain, this is the first study of its kind to investigate Saudi medical students' use of learning strategies in general and listening strategies in particular.

Metacognitive Strategies

According to Flavell (1976), metacognition is a process in which a person actively monitors, controls and arranges the cognitive process in order to attain cognitive goals. Metacognitive strategies, which reflect thinking about one's own thinking (Flavell, 1976), the individual's level of consciousness (Wenden, 1998), or the level of control over one's mental processes (Nelson, 1996), play a critical role in the cognitive processes of language as a means of communication. Flavell (1979) argues that metacognition includes both metacognitive knowledge and metacognitive experiences. The latter are defined as 'any conscious cognitive or affective experiences that accompany and pertain to any intellectual enterprise' (p. 906). Metacognitive experiences can activate strategies aimed at cognitive or metacognitive goals. Metacognitive knowledge, on the other hand, consists mainly of 'knowledge or beliefs about what factors or variables act and interact in what ways to affect the course and outcome of cognitive enterprises' (p. 907). Flavell identifies three major categories of metacognitive knowledge, which are person, strategy knowledge and task. Besides metacognitive knowledge and metacognitive experience, strategy use is identified by Vandergrift et al. (2006) as the third component of metacognition. This component 'builds on strategy knowledge,' yet it also includes 'awareness of when and how to use specific strategies' (Flavell, 2006: 89). In regard to these three components of metacognition, experience is 'an involuntary response,' whereas knowledge and strategy are 'amenable to instruction' (p. 101).
O'Malley and Chamot (1990) state that 'metacognitive strategies involve thinking about the learning process, planning for learning, monitoring the learning task, and evaluating how well one has learned' (p. 137). Therefore, these strategies have an executive function. They are considered a mental tool and a sign of successful learning that occupies the position of a seventh sense (Birjandi, Mirhassani, & Abbasian, 2006). According to Harris (2003), metacognition is a process that guides learning, in which the learner uses strategies to plan, monitor and evaluate language use and language learning. Cohen and Dörnyei (2002), cited in Altuwairesh (2016), state that metacognitive strategies refer to 'those processes which learners consciously use in order to supervise or manage their language learning,' which 'allow learners to control their own cognition by planning what they will do, checking how it is going and then evaluating how it went' (p. 181). Oxford (2001) says that this type of strategy helps learners 'manage themselves as learners, the general learning process, and specific learning tasks' (p. 167). Since metacognitive strategies are related to such essential variables in learning, i.e. the learner, learning in general and particular learning tasks, it becomes evident why researchers argue for the importance of investigating this type of strategy. Furthermore, investing classroom time in them enables language teachers to equip their students with 'empowering' tools (Anderson, 2002). Anderson (2002) elaborates on this point by stating that 'the use of metacognitive strategies ignites one's thinking and can lead to more profound learning and improved performance, especially among learners who are struggling' (p. 2), all of which are aims of any language teacher and learner. Anderson (2002) argues that metacognition can be grouped into five major components: preparing and planning for learning, selecting and using learning strategies, monitoring strategy use, orchestrating various strategies, and evaluating strategy use and learning. This relates to some extent to the declarative and procedural knowledge involved in metacognition, mentioned by Chamot (2005), in which planning, monitoring and evaluating form the procedural knowledge, whereas selecting, using and orchestrating strategies form part of the declarative knowledge. Vandergrift et al. (2006) believe that listening metacognitive awareness comprises five factors, which are planning and evaluation, problem solving, translation, directed attention and personal knowledge.
Previous Studies on Metacognitive Listening Strategies

One of the earliest studies conducted in the area of second language listening strategies was done by O'Malley, Chamot and Küpper (1989). The researchers used think-aloud protocols to identify the listening strategies that 11 intermediate-level high-school students used when performing a listening task. The study aimed to compare effective and ineffective listeners in order to find out whether any differences in the use of learning strategies existed between the two groups of learners. The results demonstrate that the strategies a student uses vary depending on the phase of the listening comprehension process. The students used selective attention and self-monitoring in the perceptual processing stage, grouping and inferencing in the parsing stage, and elaboration in the utilisation stage. The study also found that effective listeners use strategies more successfully than their less effective peers. Wang (2002) conducted a study to investigate the listening comprehension strategies employed by EFL learners in Taiwan. The results revealed that metacognitive strategies are used frequently in the English listening process. The findings also indicated that the EFL learners reported using self-management strategies and the monitoring strategy among the metacognitive strategies to facilitate their listening comprehension. Bidabadi and Yamat (2013) found in their study that EFL learners used directed attention strategies more frequently than other listening strategies. This indicates that it was necessary for the EFL learners to focus on the listening texts, and that this kind of strategy could help them achieve listening comprehension. Vandergrift (2003) conducted a study which aimed to explore the types of strategies used and the relationship between listening strategy use and listening proficiency. The participants of the study were thirty-six junior high school students of French in Canada. The results indicate that the students used all types of metacognitive strategies recognised in the literature, including planning strategies, monitoring strategies and problem identification strategies. The only type of strategy not used was evaluation strategies. The study also revealed that the highly proficient listeners used metacognitive strategies more frequently than the less proficient listeners. Thus, the study suggests that, in order to enhance L2 listening performance, it is recommended to teach low-proficiency listeners how to use metacognitive strategies.

Altuwairesh (2016) investigated the metacognitive listening strategies used by Saudi EFL female students when listening to texts in English. Two main research questions were explored in the study: (1) which of the five major types of metacognitive strategies do the participants use most when listening to English texts?
and (2) what are the metacognitive listening strategies used most by the target group when listening to English texts? The Metacognitive Awareness Listening Questionnaire was used to arrive at answers to the two research questions. The participants were 82 students from the same cohort. The results reveal that the participants reported using problem-solving and directed attention strategies more frequently than the other metacognitive listening strategies; mental translation and personal knowledge strategies are the least used by the participants. The results give insight into the metacognitive listening strategies used by effective L2 listeners, with ample evidence provided from the literature available on the subject. The results of this study also demonstrate that many L2 learners do in fact perceive listening as difficult; thus, investing classroom time in developing learners' strategies is worthwhile.

Subjects

The subjects of this study were 104 male and female sixth-year medical students at the College of Medicine at the University of Ha'il in Saudi Arabia. They were enrolled for the spring term of 2015-2016. They were in their last year at the college, after which they would be enrolled in a one-year internship to prepare them to start practicing medicine as general practitioners (GPs). The reason for targeting year-six students is that they had been exposed to the English language at the college for a minimum of six years, since the medium of instruction at the College of Medicine is English, and they had therefore gained experience in using language learning strategies, including listening strategies. Their ages ranged between 23 and 27 years. They were selected randomly to participate in the study and did so of their own free will.

Instruments

According to Vandergrift et al. (2006), among the various procedures used to investigate learners' metacognitive knowledge about listening, the most commonly used are diaries, interviews and questionnaires (p. 436). However, Oxford (1996) states that "questionnaires are among the most efficient and comprehensive ways to assess frequency of language learning strategy use" (p. 25). Questionnaires have been widely used in studies on listening strategies, such as Goh (2002), Vandergrift (2005), and Vogely (1995). Hence, the instrument used in this study was the Metacognitive Awareness Listening Questionnaire (MALQ) developed and validated by Vandergrift et al. (2006). According to Goh et al.
(2013), this questionnaire is based on research and theories concerning L2 listening, specifically on Flavell's (1979) proposals about metacognition. It elicits awareness of five distinct strategy groups: personal knowledge, mental translation, directed attention, problem-solving, and planning and evaluation. In other words, the questionnaire seeks information about the perception that students have of their use of strategies when engaged in a listening task, and also asks for information related to personal knowledge and how confident they feel about listening in the target language. In more specific terms, directed attention strategies refer to the students' ability to concentrate on a specific task, mental translation strategies help students to translate the information heard in the L2 into their first language, planning and evaluation strategies are meant to guide students to prepare before listening and to evaluate their performance after listening, and problem-solving strategies help students to make inferences when they do not understand a certain word. Finally, personal knowledge reflects students' self-confidence in L2 listening tasks. The MALQ has 21 items. Students have to respond to the 21 statements by rating their responses on a six-point Likert scale. According to Vandergrift et al. (2006), they chose a scale without a neutral point so that respondents could not hedge. The questionnaire was translated into Arabic in order to facilitate the students' understanding of the statements when soliciting the information. The questionnaire was administered by the researcher during a regular class period. The students were told that there were no right or wrong answers to the questions and that their responses would be used for research purposes only. They were also informed that they had the right not to participate.

The questionnaire items were analysed using SPSS 19 to calculate descriptive statistics (for example, frequency counts and percentages). The internal reliability of the questionnaire, calculated by Cronbach's alpha, was α = .80.
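As a companion to the reliability figure just quoted, the following Python sketch shows the standard Cronbach's alpha computation. The response matrix is randomly generated placeholder data, so the printed value is meaningless in itself; the original analysis was run in SPSS, not with this code.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = questionnaire items."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Placeholder responses: 104 respondents x 21 MALQ items on a 1-6 Likert scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 7, size=(104, 21))
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```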
Results and discussion

Table 1. Distribution of mean scores on the MALQ and its subscales (n = 104).

Table 1 above includes the mean and standard deviation for the 104 Saudi EFL medical students who participated in this study, for the MALQ and its subsections. The results show that the means of the MALQ subscales range from 3.32 to 4.92, and the average score is 4.38, which indicates that the use of metacognitive listening strategies by Saudi EFL medical students was average, as each item had been measured on a 6-point Likert scale. The results also reveal that problem-solving strategies were reported by the subjects as being used more than other types of metacognitive listening strategies, with a mean frequency rating of 4.92.

These differences were corroborated through a one-way repeated measures ANOVA, which showed significant differences in the use of the types of metacognitive listening strategies by all the subjects (F = 117.649, p = .001). To determine where the differences lay between the five types of strategies, Bonferroni-adjusted multiple comparisons were performed. The results show that problem-solving strategies are used significantly more than all the other types (p = .001). Furthermore, directed attention strategies are used significantly more often (p = .001) than mental translation and personal knowledge strategies, which, in turn, are the least frequently used metacognitive listening strategies in the study. On the other hand, no significant differences were found between the other types. This result agrees with the other studies reviewed in the literature, which found that problem-solving strategies are used more frequently than other types of metacognitive listening strategies (Chamot & Küpper, 1989; Graham, 2003; Vandergrift et al., 2006; Ratebi, 2013; Altuwairesh, 2016). This result indicates that Saudi EFL medical students resort to their repertoire of vocabulary and the general idea of the text to deduce the meaning of unknown words, use their experience and general knowledge in interpreting the text, adjust their interpretation upon realising that it is not correct, monitor the accuracy of their inferences for congruency with the developing interpretation, and compare the developing interpretation with their knowledge of the topic (Vandergrift et al., 2006). In her study concerning the use of metacognitive listening strategies by Saudi EFL university students, Altuwairesh (2016) found that the subjects reported the use of problem-solving strategies more than other types of metacognitive listening strategies. According to Vandergrift et al. (2006), problem-solving 'represents a group of strategies used by listeners to inference… and to monitor these inferences' (p. 450). Chamot and Küpper (1989), cited in Altuwairesh (2016), found that effective listeners reported using comprehension monitoring and problem identification strategies more frequently than ineffective ones. Both types of strategies are related to the problem-solving factor in the MALQ, for inferencing is the way students deal with words or ideas that might cause them listening comprehension difficulties. Vandergrift (1997) conducted a study to investigate the listening strategies used by French L2 listeners. He found that 'comprehension monitoring appears to be the metacognitive strategy reported most often' (p. 396). Moreover, Berne (2004) reviewed the findings of a number of studies that were concerned with differences between more and less proficient listeners. In regard to problem-solving strategies, she concluded that more proficient listeners are able to 'guess the meanings of words' and 'relate what they hear to previous experiences', whereas less proficient listeners 'make fewer inferences' and 'do not verify their assumptions' (p. 525).
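For readers who want to reproduce this kind of analysis outside SPSS, the sketch below runs a one-way repeated-measures ANOVA with Bonferroni-adjusted paired comparisons on simulated scores. The data, factor levels and effect sizes are placeholders rather than the study's data, and the statsmodels/scipy route shown is only one reasonable way to set the test up.

```python
import numpy as np
import pandas as pd
from itertools import combinations
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Placeholder long-format data: one mean score per subject per strategy type.
rng = np.random.default_rng(1)
strategies = ["problem_solving", "directed_attention", "planning_evaluation",
              "mental_translation", "personal_knowledge"]
rows = [{"subject": s, "strategy": strat, "score": rng.normal(loc=4 + i * 0.2, scale=0.8)}
        for s in range(104) for i, strat in enumerate(strategies)]
df = pd.DataFrame(rows)

# One-way repeated-measures ANOVA across the five strategy types.
res = AnovaRM(df, depvar="score", subject="subject", within=["strategy"]).fit()
print(res.anova_table)

# Bonferroni-adjusted pairwise comparisons (paired t-tests).
pairs = list(combinations(strategies, 2))
for a, b in pairs:
    t, p = stats.ttest_rel(df.loc[df.strategy == a, "score"].values,
                           df.loc[df.strategy == b, "score"].values)
    print(f"{a} vs {b}: adjusted p = {min(1.0, p * len(pairs)):.4f}")
```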
Regarding the individual strategies which come under problem-solving, all six strategies belonging to this factor were reported among the most frequently used by the subjects. The most frequently used strategy in this factor is 'the use of known words to guess the meanings of unknown ones', with a mean of 5.21 and SD of .982, followed by 'using the general idea of the text to guess the meaning of unknown words', with a mean of 4.91 and SD of .864 (M = 3.35). The remaining four strategies in this group did not show particularly high means, which indicates that a large number of the target group did not favour the use of these strategies.

Directed attention, which represents actions undertaken by listeners regarding concentration and staying on task, such as focusing harder when having difficulty understanding or getting back on track when losing concentration (Rost, 2002), ranked second with a mean frequency rating of 4.20. Vandergrift et al. (2006) define directed attention as 'strategies that listeners use to concentrate and stay on the task' (p. 451). This result agrees with previous studies which found that directed attention strategies are used frequently by students. Ratebi (2013) found that Iranian university students majoring in English reported frequent use of directed attention strategies, which came second after problem-solving strategies. Moreover, in a recent study in a Saudi context, Altuwairesh (2016) found that Saudi EFL female students reported a high use of this factor in comparison to the other factors.

Four individual strategies belong under this factor, of which three were reported to be frequently used by the subjects in this study. The only strategy which was not reported to be commonly used by the subjects is 'giving up and stopping listening when having difficulty understanding'. Vogely (1995) conducted a study and found that more than half of the participants reported using the strategy of recovering concentration upon losing it, which means, according to the researcher, that they are active listeners (p. 47). The study also indicated that giving up listening when having a problem is not favoured by students (Vogely, 1995). This result was also reported by other researchers (Li, 2013; Ratebi, 2013; Altuwairesh, 2016).
Planning and evaluation strategies are the third most frequently used of the metacognitive listening strategies among the subjects in the current study, with a mean frequency rating of 3.98. Vandergrift (2003) found that his participants reported high use of planning strategies. Goh and Taib (2006) also found similar results. Vandergrift et al. (2006: 450) argue that "the planning and evaluation factor represents the strategies listeners use to prepare themselves for listening, and to evaluate the results of their listening efforts". These items include strategies related to setting a plan before listening, recalling texts similar to the one at hand, keeping a goal in mind during listening, periodically questioning one's degree of satisfaction with the level of understanding while listening, and, finally, after listening, reflecting on one's listening efforts and thinking of ways to make listening better next time. It should be mentioned here that, out of these six strategies, three were reported among the most frequently used by the subjects. Goh (2002) says that, unlike monitoring strategies, planning and evaluation do not hamper listening and they consequently have a significant impact on overall listening. Further, the presence of planning and evaluation strategies is an indication that 'responsibility for learning shifts from the teacher to the student' (Vandergrift, 2002: 571). Stepping back from real-time listening to reflect on the listening process helps learners 'understand and change learning behaviours'. As Anderson (2008) explains, 'metacognition results in critical but healthy reflection and evaluation of thinking that may result in making specific changes in how learning is managed, and in the strategies chosen for this purpose' (p. 99). He further comments on planning strategies by saying that 'taking time to prepare for learning and plan what needs to be accomplished makes a major difference in learning' (p. 100).

The least frequently used factor was 'personal knowledge', with a mean frequency rating of 3.32. It includes items that assess the perceived difficulty of listening compared with the three other language skills, learners' linguistic confidence in second or foreign language listening, and the anxiety level experienced in second or foreign language listening (Sparks & Ganschow, 2001). This shows that Saudi EFL medical students have a low level of confidence and self-efficacy in listening comprehension and that they perceive listening to be harder than the other skills. It may be that, because Saudi EFL students consider listening a difficult task to perform, they find it hard to concentrate and simply try to do their best in this regard.
The second least used factor was 'mental translation', with a mean frequency rating of 3.40. This factor includes strategies that should be avoided by L2 learners in order to become skilful listeners, and thus a lower mean score is desirable. The three items this factor comprises 'all tap the online mental translation strategy', which is 'an inefficient approach to listening comprehension' (Vandergrift et al., 2006: 450). Graham and Macaro (2008) also explain that translation is a type of bottom-up strategy which is 'the mark of ineffective listeners' (p. 749). Furthermore, Vandergrift and Tafaghodtari (2010) believe that, in order to be successful L2 listeners, students are required to overcome the compulsion to translate word for word, which they may face while listening. Vandergrift (2003) says that less translation is a strategy employed by more skilled listeners (p. 458). Translation, he says, 'involves only surface mapping between languages [and] generally fails to activate conceptual processes' (Vandergrift, 2003, p. 486).

Conclusion

This study explored the metacognitive listening strategies used by Saudi EFL medical students. The findings reveal that the students reported using problem-solving and directed attention strategies more frequently than other strategies. The results also show that problem-solving strategies are used significantly more than all the other types. Furthermore, directed attention strategies are used significantly more often than mental translation and personal knowledge strategies, which, in turn, are the least frequently used metacognitive listening strategies in the study. Based on these findings, it is important that EFL teachers help their students to use listening strategies such as mental translation and personal knowledge. Furthermore, EFL students should be encouraged to avoid word-for-word or key-word translation while listening. Literal translation, a commonly used practice in EFL classrooms, is probably attributable to students' attempts to compensate for the lack of exposure to the L2 in authentic communication. Calis and Dikilitas (2012), for example, found that L2 learners with positive attitudes toward translation believed translation was helpful in memorising L2 vocabulary. This, in turn, reflects a focus on form, rather than on meaning, in interaction mediated by the L2.

The findings of this study may convince language teachers to pay more attention to listening strategy instruction. Vandergrift et al. (2006) state that 'research on the effects of metacognitive instruction has provided preliminary evidence that performance, confidence, and motivation can be enhanced through classroom instruction' (p. 436). However, Macaro, Graham, and Vanderplank (2007) say that 'strategy instruction in the skill of listening is still very much in its infancy' (p. 185). The aim of strategy training is to increase learners' awareness about making decisions concerning their own strategy use to tackle language tasks. Strategy training implies that learners can take control of their own learning by planning a goal, monitoring the processes, and evaluating the learning outcomes. This implies that nurturing learners' metacognition is the key to successful learning. Wenden (2002), Goh (2002), and Vandergrift (1997) emphasise that learners' metacognitive processing is closely related to effective learning and is applicable to all learning contexts.
To sum up, Berne (2004), cited in Altuwairesh (2016), states that 'listening comprehension strategies have been and continue to be a very fruitful area for researchers to explore' (p. 52). Nevertheless, 'whilst there is a considerable body of literature exploring listening strategy use, the literature related to strategy instruction is more sparse, although there is an emerging research agenda' (Macaro, Graham, & Vanderplank, 2007, p. 165). Even though listening is now generally believed to play a vital role in second language acquisition and the facilitation of language learning, it is still considered 'a young field that merits greater research attention' (Vandergrift, 2003, p. 464).
Characterization and timing of gastrointestinal bleeding in continuous flow left ventricular assist device recipients

Background and aims: Heart failure is one of the leading causes of morbidity and mortality in the United States. The advent of left ventricular assist devices (LVADs) has improved the survival and quality of life of patients with end-stage heart failure. Gastrointestinal bleeding (GIB) remains one of the limitations of LVADs. Methods: A single-center, retrospective review of records was performed for patients who underwent LVAD implantation between 2010 and 2015. All patients who survived more than 30 days were followed until March 2016 and are described below. Results: A total of 79 patients were included in the study. The rate of GIB was 34.1% (27 patients), with a mean time to bleed of 267 days. Older patients were more likely to bleed. Upper GI bleeding was the source of bleeding in 54% of patients. Arteriovenous malformations (AVMs) were the source of bleeding in 74% of bleeders, and 80% of these patients had de novo AVM formation. 14/27 (51%) patients had a re-bleeding event. Thrombotic events were 4.5 times more likely to occur in patients who also had a GI bleed. Conclusions: GI bleeding in LVAD patients is common, with the source of bleeding more commonly being in the upper GI tract. GI bleeding may occur as early as 10 days post procedure, despite previous negative screening endoscopies. There is an increased risk of thrombotic events in patients who have experienced a GI bleed.

Introduction

Heart failure (HF) is a leading cause of hospitalization and death in the United States, and one in five people with HF develop end-stage heart disease [1]. Left ventricular assist devices (LVADs) have demonstrated significant clinical utility by prolonging life in these patients and have evolved into a dual therapy option, either as a bridge to cardiac transplantation (BTT) [2,3,4,5,6,7] or as a long-term therapy option known as "destination therapy" (DT). Second-generation continuous flow (CF) LVADs have resulted in significant improvements in survival and quality of life in patients [8,9,10]. Despite the obvious benefits, gastrointestinal (GI) bleeding is one of the main complications of LVAD therapy [11,12], with a pooled prevalence of 23% in patients with CF-LVADs [13]. The pathophysiology behind the increased GI bleeding risk appears multifactorial; however, the primary contributing factors are believed to be the development of acquired (type 2A) von Willebrand syndrome (due to shearing of the vWF polymers into monomers causing a functional deficiency), the unmasking of subclinical arteriovenous malformations [14], and the ongoing need for systemic anticoagulation and antiplatelet therapy in CF-LVAD recipients. The most common site of GI bleeding is generally the upper GI tract, with arteriovenous malformations being the most common etiology [15,16]. GI bleeding is often substantial in these patients, requiring multiple endoscopic procedures and reduction of anticoagulation and antiplatelet therapy. Given the expanding role of LVADs as BTT or DT, larger numbers of patients will be at risk of device-associated adverse events, including GI bleeding, for longer periods of time [17,18,19]. Thus far, the identified patient-related risk factors for GI bleeding with LVAD placement include older age, elevated INR, low platelet count and a history of prior GI bleeding [15].
While the associated risk of GI bleeding after LVAD implantation is clear, uncertainty remains around the timing of GI bleeding after implantation [13,15,16,20]. In our study, we aimed to determine the rate of GI bleeding in patients with LVADs, potential periods of high risk, risk factors predisposing to GI bleeding, and the risk of thrombotic events in patients with GI bleeding.

Methods

The study was performed at Saint Luke's Mid America Heart Institute, a tertiary cardiac care center with regional expertise in cardiac transplantation and mechanical assist device implantation. A retrospective review of hospital records was performed for patients who underwent LVAD implantation from 2010 to 2015 for either bridge to transplant or destination therapy. Patients were considered only if they had followed up at our institution for at least 1 year post LVAD implantation. Patients were considered to have GI bleeding if they had one or more of the following: guaiac-positive stools with a drop in hemoglobin (Hb) > 2 g/dL, hematemesis, hematochezia or melena, or active bleeding or blood within the GI tract at the time of endoscopy with transfusion of packed red blood cells. Recurrent bleeding was defined as more than one episode of GI bleeding after LVAD implantation. Upper and lower GI bleeding were defined as bleeding above and below the ligament of Treitz, respectively. Bleeding events were considered only after seven days from LVAD placement, to exclude peri-operative bleeding events. Patient records were followed until March 2016 or death, whichever event occurred first. This study was reviewed and approved by the Saint Luke's Hospital Institutional Review Board. Data collected about LVAD details included the type of LVAD and the indication for implantation; patient demographics, including age and sex; comorbidities, including diabetes, CKD and pre-LVAD history of GI bleeding; laboratory values, including the most recent pre-implant hemoglobin, platelet count, INR and creatinine level; time to first GI bleed (in days) post LVAD implant; pump speed at the time of the bleeding event; time to recurrent GI bleed (in days); thromboembolic events; lowest hemoglobin, platelets and INR at the time of the bleed; number of PRBC and FFP units transfused; endoscopic and radiologic procedures performed; and measures used to stop the bleeding.

Statistical analysis

All statistical analysis was performed using STATA v 14.0. Continuous variables were compared using Student's t-test and categorical variables were compared using the chi-square test. A p-value of less than 0.05 was considered significant. All data are reported using 95% confidence intervals (CI). Logistic regression analysis was performed to identify independent risk factors for bleeding. Kaplan-Meier survival curves for GI bleeding were plotted for the entire time of follow-up.

Gastrointestinal bleeding in LVAD subjects

Of the 79 subjects included in the study, gastrointestinal bleeding occurred in 27 (34.1%) patients, with a total of 46 bleeding events during 1,589 patient-months (132.4 patient-years) of follow-up. The median time to first bleed was 60 days, with 17 patients experiencing their first GI bleeding event within the first 90 days. DT LVAD recipients were around 6 times more likely to bleed than BTT recipients (unadjusted OR 6.34, p = 0.0020) when time on therapy was not adjusted for; however, after adjustment for time, there was no significant difference in bleeding between DT and BTT patients.
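The original survival analysis was carried out in STATA; the sketch below shows an equivalent Kaplan-Meier estimate of freedom from GI bleeding in Python using the lifelines package, with entirely hypothetical follow-up times and event indicators rather than the study's data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical follow-up data: days until first GI bleed (or censoring), an event
# indicator, and the device indication (DT vs. BTT). Placeholder values only.
df = pd.DataFrame({
    "days":       [10, 45, 60, 90, 200, 267, 310, 400, 520, 700],
    "bled":       [1,  1,  1,  0,  1,   1,   0,   0,   1,   0],
    "indication": ["DT", "DT", "BTT", "BTT", "DT", "DT", "BTT", "DT", "BTT", "BTT"],
})

kmf = KaplanMeierFitter()
for group, sub in df.groupby("indication"):
    kmf.fit(sub["days"], event_observed=sub["bled"], label=group)
    est_90 = kmf.survival_function_at_times(90).iloc[0]
    print(f"{group}: estimated freedom from GI bleed at 90 days = {est_90:.2f}")
```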
Risk factors for bleeding

Preoperative warfarin increased the odds of bleeding 5.68-fold (OR 5.68, p = 0.031), and patients who had a bleeding event were on average 10 years older than those without bleeding events (69 vs. 59 years old). There was no significant association between sex, pump speed, underlying comorbidities, INR, platelets, hemoglobin or concomitant clopidogrel and the risk of bleeding.

Re-bleeding events

Fourteen (52%) patients experienced re-bleeding events (Table 2). Diabetes was a significant risk factor for re-bleeding in our patient population, with diabetic patients 3 times more likely to re-bleed than non-diabetic patients (RR 3.125, p = 0.0031). Re-bleeders also required more pRBC transfusions (OR 1.77, p = 0.03). There was no significant relationship between the risk of re-bleeding and age, sex, or other underlying co-morbidities (hypertension, chronic kidney disease, GERD). The severity of bleeding, as measured by the number of packed red cells transfused, was not associated with the time of onset of bleeding (defined as less than 30 days, 30-180 days and more than 180 days) or underlying patient characteristics.

Source of bleeding and therapeutic interventions

In 25 (54%) events, the bleeding was identified as coming from an upper GI source. Twenty (74%) patients bled from arteriovenous malformations (AVMs), and the earliest bleed due to an AVM occurred 10 days post implant (Figure 1). Sixteen (80%) of the patients with AVMs detected on endoscopy after bleeding had had upper and lower gastrointestinal (GI) endoscopies within the 3 months prior to LVAD implantation with no AVMs reported (Table 3). A total of 71 endoscopic procedures were performed to evaluate GI bleeding, the preliminary approach being upper followed by lower GI endoscopy, or both (Table 4). Subsequent approaches included capsule studies (n = 13) and double balloon enteroscopy (n = 13), both anterograde and retrograde. Bleeding was resolved by decreasing pump speed and temporarily withholding anticoagulation in 10 (37%) patients. Seven (26%) patients needed additional pharmacotherapy beyond endoscopic interventions to control their bleeding (5 were treated with octreotide and 2 with additional thalidomide).

Thrombotic events

There were a total of 16 thrombotic events (14 pump thromboses and 2 cerebrovascular accidents (CVAs)). Of these, 10 occurred in patients who had either had a GI bleeding event previously or had a bleed afterwards. The median time to a thrombotic event after the first GI bleed was 310 days. There was an increased incidence of thrombotic events in GI bleeders, with thrombotic events 4.5 times more likely to occur in patients who also had a GI bleeding episode (OR 4.50, 95% CI 1.4 to 14.31, p = 0.0106), though this was not temporally associated with the bleeding event. There was no difference between the means of the lowest INRs at the time of the thrombotic event between subjects with or without GI bleeds.

Mortality

Thirty-four (43%) of the 79 patients died; 20 (60%) of these were non-bleeders. Bleeding or re-bleeding did not adversely impact survival in our study.

Figure 1. Kaplan-Meier curve comparing rates of GI bleeding in patients with destination therapy and bridge to transplantation LVADs.

Discussion

Gastrointestinal bleeding in LVAD patients is a known adverse event and, in our study, occurred at a rate of 34.1%. Older age and use of warfarin were identified as significant risk factors for bleeding in these subjects.
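The thrombosis odds ratio quoted in the results can be recovered from a simple 2x2 table. The sketch below does that arithmetic with counts implied by the figures reported above (10 of 27 bleeders and 6 of the remaining 52 patients with thrombotic events); treat the reconstruction as illustrative, since patients with multiple events could shift the exact cell counts.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a, b = events / non-events in the exposed group,
    c, d = events / non-events in the unexposed group."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Thrombotic events among GI bleeders (10 of 27) vs. non-bleeders (6 of 52).
print("OR = %.2f (95%% CI %.2f-%.2f)" % odds_ratio_ci(10, 17, 6, 46))
```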
Re-bleeding remains a common occurrence in patients with CF-LVADs, with almost half of the patients with a bleeding event experiencing re-bleeding and 7 (26%) patients experiencing 3 or more re-bleeding episodes. Uniquely, we report a dichotomous rate of bleeding, with the majority of first bleeding events, 17 (63%), occurring within the first 90 days after LVAD implantation and the earliest bleed as early as 10 days after device implantation. Finally, thrombotic events were 4 times more likely to occur in patients with a GI bleed.

In addition to the above findings, we also demonstrate a very high rate (80%) of possible de novo GI tract AVM formation or unmasking of subclinical AVMs in GI bleeding patients, with 60% of the first bleeding events attributed to subclinical AVMs. The majority (74%) of patients in this study bled from AVMs that were discovered using an aggressive approach towards diagnosing obscure GI bleeding, including capsule endoscopies and subsequent double balloon enteroscopy in this patient cohort. The formation of AVMs appears most likely to be due to increased intraluminal pressure within the blood vessels and a lowered pulse pressure leading to intestinal wall ischemia and the development of AVMs [21]. The rate and temporal trends of the development of new AVMs, or of the transformation of subclinical AVMs into clinically significant ones, with CF-LVAD implantation have not been previously described, and hence there is a lack of consensus on treatment strategies to reduce re-bleeding events [12]. In this study, we report two distinct time periods during which GI bleeding is more likely to occur, one within 3 months post-implantation and the second later, at almost 2 years post implantation. Other studies have described various times to bleeding, ranging from 65 to 128 days after LVAD implantation [16,20]. It is possible that these distinct periods are due to different etiologies of GI bleeding; however, further study is needed to confirm this.

The approach to the management of GI bleeders in our patient population started with temporarily withholding warfarin and decreasing pump speed (which increases left ventricular preload, resulting in more left ventricular ejection and thereby restoring some element of pulsatility to the flow). The preliminary diagnostic approach in patients who were stable was to initially perform an upper GI endoscopy, followed by a lower GI endoscopy and a capsule endoscopy if the source of bleeding was not identified (obscure GI bleeding). This was then followed by double balloon enteroscopy (both anterograde and retrograde, as needed) based on the results of the capsule study. In cases where bleeding continued, long-term octreotide therapy was used, and in a small minority of patients thalidomide was used (Figure 2). Lowering the INR goal to 1.5 (baseline goal 2-3) in patients at high risk of bleeding was also carried out. Continued bleeding necessitated the discontinuation of warfarin in 2 of our patients. While there is no formalized comparison of this management strategy to establish its efficacy, our approach is in concordance with that described elsewhere in the literature [13,15]. While our findings expand the previous literature describing GI bleeding with CF-LVAD placement, there are several limitations to our study. First, given the single-center, retrospective design of the study, there may have been underreporting of events if patients did not get periodically evaluated for GI bleeding or were admitted for a GI bleeding event elsewhere.
Secondly, some patients were lost to follow-up, for unclear reasons, at various times after implant, leading to loss of data. Finally, GI bleeding as defined by hematochezia or melena was not confirmed by medical staff and could therefore have been over-reported in some cases. In summary, patients receiving second-generation LVADs have a significant risk of GI bleeding; important risk factors include age, LVAD implantation as destination therapy, and preoperative warfarin and aspirin use. Around 50% of patients with a bleeding event experience re-bleeding and require multiple endoscopic procedures for the diagnosis and treatment of the bleeding. GI AVMs are the most common cause of the bleeding, with de novo or newly unmasked subclinical AVMs identified in 80% of patients with GI bleeding. The risk of thrombotic events, mainly stroke, is also elevated in patients who have experienced GI bleeding.

Author contribution statement

Devika Kapuria: Conceived and designed the experiments; performed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper. Taiyeb Khumri, Sanjeev Aggarwal, Christopher Koh: Conceived and designed the experiments; wrote the paper. Shariq Shamim, Rajiv Chhabra: Analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper. Pallavi Surana: Analyzed and interpreted the data; contributed reagents, materials, analysis tools or data. Salman Khan: Conceived and designed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data. Nabil Al-Khalisi: Performed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data.

Funding statement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Massive parallel sequencing of mRNA in identification of unannotated salinity stress-inducible transcripts in rice (Oryza sativa L.)

Background: Microarray technology is limited to monitoring the expression of previously annotated genes that have corresponding probes on the array. Computationally annotated genes have not fully been validated, because ESTs and full-length cDNAs cannot cover entire transcribed regions. Here, mRNA-Seq (an Illumina cDNA sequencing application) was used to monitor whole mRNAs of salinity stress-treated rice tissues. Results: Thirty-six-base-pair reads from whole mRNAs were mapped to the rice genomic sequence: 72.0% to 75.2% were mapped uniquely to the genome, and 5.0% to 5.7% bridged exons. From the piling up of short reads mapped on the genome, a series of programs (Bowtie, TopHat, and Cufflinks) comprehensively predicted 51,301 (shoot) and 54,491 (root) transcripts, including 2,795 (shoot) and 3,082 (root) currently unannotated in the Rice Annotation Project database. Of these unannotated transcripts, 995 (shoot) and 1,052 (root) had ORFs similar to those encoding the amino acid sequences of functional proteins in a BLASTX search against the UniProt and RefSeq databases. Among the unannotated genes, 213 (shoot) and 436 (root) were differentially expressed in response to salinity stress. Sequence-based and array-based measurements of the expression ratios of previously annotated genes were highly correlated. Conclusion: Unannotated transcripts were identified on the basis of the piling up of mapped reads derived from mRNAs in rice. Some of these unannotated transcripts, encoding putative functional proteins, were expressed differentially in response to salinity stress.

Background

Gene expression profiling is accelerating our progress toward a comprehensive understanding of the genetic mechanisms that control responses to environmental stress. Microarray analysis was developed to obtain overall gene expression profiles in various plants. Microarray profiling and the recently introduced tag-based sequencing approaches are proven technologies for estimating gene expression. However, array-based technologies have critical limitations [1,2]. As most microarray probes are designed on the basis of gene annotation, arrays are limited to the analysis of transcripts from previously annotated genes of a sequenced accession of a species. Probes are designed to cover only a very small portion of a gene and so do not represent the whole structure of the gene. Moreover, computationally annotated genes have not fully been validated, because ESTs and full-length cDNAs (FL-cDNAs) cannot cover entire transcribed regions. It is therefore important to identify whole transcripts (including unannotated transcripts) for complete gene expression profiling. There is a need for the development of technologies beyond arrays. Sequencing-based approaches could overcome the limitations of array-based technologies. Following the rapid progress of massive parallel sequencing technology, whole mRNA sequencing has been used for gene expression profiling [3][4][5][6][7][8]. This sequencing involves mapping of the reads on known annotated gene models but cannot be used to identify novel genes.
Recently, a series of programs has been developed for building gene models directly from the piling up of short reads: Bowtie efficiently maps short reads on genomic sequences [9]; TopHat concatenates adjacent exons and identifies reads that bridge exon junctions [10]; and Cufflinks [11] constructs gene models from the exons and bridging sequences predicted by Bowtie and TopHat and then calculates the abundances of these sequences. The use of this series of programs to discover new transcripts from mRNA-Seq (an Illumina cDNA sequencing application) has only just begun [7,12]. In this study, we identified unannotated transcripts in rice on the basis of the piling up of mapped reads. As a model case, we give examples of salinity stress-inducible unannotated transcripts encoding putative functional proteins. For these purposes, we performed whole mRNA sequencing by using massive parallel sequencing technology. We also took advantage of various high-quality genomic resources in rice, including the genomic sequence (International Rice Genome Sequencing Project [IRGSP] build 4.0), FL-cDNA sequences [13], the Rice Annotation Project database (RAP-DB: http://rapdb.dna.affrc.go.jp/) [14,15], and a rice 44K microarray (Agilent Technologies, Palo Alto, CA, USA), in our analysis of rice transcriptomes. First, to estimate the scale of the transcriptomes in rice, we mapped 36-base-pair (bp) reads from the mRNA of salinity stress-treated rice tissues on the rice genome. The coverage of previously annotated regions and of the whole rice genome was then calculated. Second, we attempted to identify salinity stress-inducible genes as a model system for gene expression profiling by mRNA-Seq. The number of mapped reads was counted and marked on the rice genome. Third, using the mRNA-Seq data, we used Bowtie, TopHat, and Cufflinks to construct gene models based on the piling up of short reads on the rice genome, compared these with previous annotations, and then characterized the unannotated transcripts. We conducted a BLASTX search for the unannotated transcripts, and we discuss the functions of the encoded proteins. Fourth, to validate our sequence-based technology, we compared the results of quantification by the array-based and sequence-based approaches, and we discuss the advantages of the latter. This work contributes to the discovery of whole salinity stress-inducible transcripts without the need to rely on previous annotations. It should help to establish further sequence-based gene expression profiling in any organism.
Mapping of 36-bp reads to the rice genome
We performed rice transcriptome analysis at single-nucleotide resolution by using Illumina mRNA-Seq technology. Briefly, poly(A) RNAs from salinity stress-treated rice tissues were reverse-transcribed and sequenced (Table 1). Millions of 36-bp reads were mapped to the rice genomic sequence (IRGSP Build 4.0), with at most two mismatches or 3 bp of indels allowed. To obtain many kinds of transcripts, data on nine technical replicates of the sequencing run of cDNA from the roots after salinity stress were accumulated. As the number of reads increased, the cumulative coverage of both the genome and the annotated transcribed region gradually approached a plateau (Figure 1a). Saturation of sequencing was also estimated on the basis of the fraction of genes that had reached their final RPKM (reads per kilobase of exon model per million mapped reads) [16]. 
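RPKM normalizes a gene's read count by the length of its exon model and by the sequencing depth of the library. The following minimal sketch only illustrates that calculation; the function name and the example numbers are ours for illustration and are not taken from this study.

```python
# Minimal illustration of RPKM (reads per kilobase of exon model per
# million mapped reads); the example values are invented, not study data.

def rpkm(reads_for_gene, exon_model_length_bp, total_mapped_reads):
    """Reads normalized by exon length (kb) and library size (millions)."""
    length_kb = exon_model_length_bp / 1_000
    depth_millions = total_mapped_reads / 1_000_000
    return reads_for_gene / (length_kb * depth_millions)

# e.g. 480 reads on a 2.4-kb exon model in a library of 30 million mapped reads
print(rpkm(480, 2_400, 30_000_000))  # ~6.67
```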
As the number of reads increased, the fraction of highly expressed genes (RPKM ≥ 300) close to their final RPKM was almost unchanged, whereas those of genes with relatively low expression (RPKM 3-30) converged more slowly (Figure 1b). With four technical replicates (corresponding to about 27 to 35 million reads), 81.2% of genes with relatively low expression levels (RPKM 3-30) reached to within ± 5% of their final RPKM (Figure 1b). Thus, for further analysis, we adopted the summing of four technical replicates after filtration according to their base quality. Rice transcriptome analysis was based on response to salinity stress. mRNAs were prepared from the tissues of normal rice shoots and roots and from those subjected to 1 h of salinity stress. Of the 27 to 35 million quality-evaluated reads (Table 1; Total filtered reads), 72.0% to 75.2% were mapped uniquely to the rice genome (Table 1; Unique-genome); 5.0% to 5.7% of the reads bridged flanking exons (Table 1; Unique-bridged); 6.0% to 11.2% of the reads were repetitive sequences (Table 1; Multiple); and 10.1% to 16.7% had no match in the genome (Table 1; Unmapped). Thus, a total of 76.9% to 80.9% of the reads were mapped uniquely to the rice genome or to exon-exon junctions (Table 1; Unique-total). Of the unmapped reads, 26.1% had high levels of identity to sequences derived from sequencing adaptors (11.0%), contaminating organisms (8.2%), or ribosomal RNA (6.9%) (Additional file 1. Table S1). A few transcripts might have been transcribed from unsequenced genomic regions of rice [17]. However, most of the unmapped reads (71.5%) had no similarity to each other (data not shown). Our preliminary experiment showed that the ratio of these unmapped reads was higher with mRNA-Seq (10.1%-16.7%; Table 1; Unmapped) than with genomic sequencing (2.0%-3.1%; data not shown). Thus, part of the random sequences might have come from residual random primers used in cDNA synthesis. The common random sequences might have come from sequencing errors in the use of the Illumina sequencing technology. Identification of differentially expressed genes by mRNA-Seq mRNA-Seq quantifies the amount of transcripts on the basis of the number of sequence reads mapped on each gene. We adopted this method for transcript quantification by RPKM [16] and calculated the RPKM of each gene (Additional file 2: Table S2). RPKM quantification was distributed from 0 to over 10 4 . In shoots under normal conditions, the gene encoding ribulose bisphosphate carboxylase activase (AK104332) was expressed at extremely high levels (rpkm_0 hr_shoot = 10612.237). In roots under normal conditions, the gene for metallothionein (AK105219) was expressed at extremely high levels (rpkm_0 hr_root = 23661.149). The statistical mean and median were 19.78 and 3.399, respectively, in the shoot, and 18.705 and 4.241 in the root under normal conditions. We then comprehensively compared the RPKM of each gene in response to salinity stress (r = 0.95 in shoot and 0.94 in root; Figure 2). We used the G-test with a 1% false discovery rate (FDR) and identified 6,469 (in shoot) and 10,321 (in root) differentially expressed RAP2 genes. Of these, 3,050 (up, 1,651; down, 1,399) genes were commonly differentially expressed. The number of highly differentially expressed genes (>32×), such as those encoding bHLH-containing protein (AB040744) and amino acid transporter (J075191I06), was greater in the root (58 genes) than in the shoot (5 genes). 
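Differential expression above was assessed with a G-test at a 1% false discovery rate. As a hedged sketch, the snippet below implements a generic two-library likelihood-ratio (G) test on read counts with Benjamini-Hochberg control; it is not the exact statistic or normalization used in the study.

```python
# Hedged sketch of a two-library G-test on read counts with
# Benjamini-Hochberg FDR control; a generic formulation, not the exact
# statistic or normalization used in the study.
import math
from scipy.stats import chi2

def g_test_pvalue(count_a, count_b, total_a, total_b):
    """G-test of whether a gene's counts follow the two library sizes."""
    frac = (count_a + count_b) / (total_a + total_b)
    expected = [frac * total_a, frac * total_b]
    observed = [count_a, count_b]
    g = 2.0 * sum(o * math.log(o / e) for o, e in zip(observed, expected) if o > 0)
    return chi2.sf(g, df=1)

def benjamini_hochberg(pvalues, fdr=0.01):
    """Boolean mask of p-values passing the Benjamini-Hochberg criterion."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    passed = [False] * m
    cutoff_rank = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * fdr:
            cutoff_rank = rank
    for rank, idx in enumerate(order, start=1):
        passed[idx] = rank <= cutoff_rank
    return passed

# toy example: one gene with 120 vs 30 reads in libraries of 30M and 27M reads
print(g_test_pvalue(120, 30, 30_000_000, 27_000_000))
```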
Expression of genes previously identified under salinity stress [18]-i.e. OsTPP1 (AK103391), LIP9 (AY587109), OsABA2 (AK062655), OsMST3 (AK069202), WSI76 (AK107065), and MYBS3 (AK107134)-was induced in the root (> 2×). For a complete comparison see Additional file 2: Table S2. Figure 1 Accumulation of 36-bp reads to cover whole transcripts. (a) Cumulative coverage of rice genome and annotated region. Data from nine technical replicates of reads from roots after salinity stress were accumulated. Cumulative coverage was calculated by using reads uniquely mapped on the rice genome (black) or the RAP2 annotated region (white). As the number of reads increased, the cumulative coverage approached a plateau. (b) Robustness of the measurement of transcripts in four different expression classes. Saturation of sequencing was estimated on the basis of the fraction of RAP2 genes supported by FL-cDNA sequences that had reached their final RPKM (reads per kilobase of exon model per million mapped reads) [16]. Vertical axis indicates the fraction of genes for which the RPKM was within 5% of the final value, and horizontal axis indicates the cumulative number of uniquely mapped reads. The fraction of highly expressed genes was almost unchanged, whereas those of genes with relatively low expression converged slowly. N indicates the number of transcripts in each of the four classes. The distribution of mapped reads on the rice genome was graphed on a GBrowse [19] (Figure 3). For example, the OsTPP1 (for trehalose-6-phosphate phosphatase: TPP) gene (AK103391), which encodes a protein that synthesizes the abiotic stress-protectant trehalose [20,21], was expressed exclusively in the root after 1 h of salinity stress; RCc3 (AK109149), which was previously identified as a root-specific gene [22], was expressed only in the root with and without stress; AK058218 (similar to ZmGR1a in Zea mays) was expressed exclusively in the shoot; most of the neighboring genes were expressed evenly in all tissues used ( Figure 3). Constructing gene models by mRNA-seq Transcribed regions were identified on the basis of the piling up of mapped short reads through the programs Bowtie [9], TopHat [10], and Cufflinks [11]. In the shoot, 51,301 transcripts were predicted (RPKM ≥ 2, length ≥ 100 bp) ( Table 2); 94.6% (48,506/51,301) of the predicted transcripts were mapped on previously annotated loci in RAP2 [14,15]; thus, the remaining 2,795 predicted transcripts were unannotated in RAP-DB (Table 2). In the root, 3,082 of the 54,491 predicted transcripts were mapped on unannotated regions ( Table 2). For example, the previously annotated gene AK243146, which is similar to DREB1B in Arabidopsis thaliana (GI: 3738226), was expressed after salinity stress and also predicted by Cufflinks (Root_CUFF. 214677.0); other exons were also predicted and connected by bridging sequences elucidated by TopHat (Root_CUFF. 214638.0) ( Figure 4a). Reads were also mapped on the extended parts of the ends of most 5' and 3' exons in previous gene models (Figure 4b, c). Of the transcripts mapped on previously annotated loci, 1,738 (shoot) and 2,297 (root) had not been supported by ESTs [23] or FL-cDNAs [13]. We attempted to predict the functions of unannotated transcripts by BLASTX search and longest-ORF search. In a BLASTX search against the UniProt and RefSeq sequences, of the predicted transcripts, 995 (shoot) and 1,052 (root) had ORFs similar to those encoding the amino acid sequences of functional proteins ( Table 2). 
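To make the filtering and comparison step above concrete, here is a schematic sketch: predicted transcripts are kept if they pass the RPKM ≥ 2 and length ≥ 100 bp thresholds and are flagged as unannotated when they overlap no previously annotated locus. The record layout is an invented stand-in for Cufflinks output and RAP2 annotation, not the actual file formats.

```python
# Schematic sketch of the transcript filtering/comparison described above;
# data structures are simplified stand-ins for Cufflinks and RAP2 records.

def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end

def unannotated_transcripts(predicted, annotated, min_rpkm=2.0, min_len=100):
    """predicted: list of dicts {chrom, start, end, rpkm};
    annotated: dict chrom -> list of (start, end) intervals."""
    novel = []
    for t in predicted:
        if t["rpkm"] < min_rpkm or (t["end"] - t["start"]) < min_len:
            continue
        loci = annotated.get(t["chrom"], [])
        if not any(overlaps(t["start"], t["end"], s, e) for s, e in loci):
            novel.append(t)
    return novel

# toy example
predicted = [{"chrom": "chr1", "start": 1000, "end": 1500, "rpkm": 4.2}]
annotated = {"chr1": [(5000, 8000)]}
print(unannotated_transcripts(predicted, annotated))
```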
Of the remaining unannotated transcripts, 1,670 (shoot) and 1,873 (root) had ORFs encoding at least 20 amino acids by longest-ORF search ( Table 2). Amino acid length was widely distributed: the mean and median were 125 and 77 amino acids in the shoot, and 123 and 74 in the root ( Figure 5). We used the G-test with a 1% FDR and identified 213 (up, 86; down, 127; in shoot) and 436 (up, 146; down, 290; in root) differentially expressed Cufflinks transcripts. Even though the lengths of Cufflinks transcripts were not completely identical between shoot and root, at least 55 differentially Figure 2 Comparison of RPKM of each gene after salinity stress. RPKM values for 29,389 RAP2 representative transcripts in the presence or absence of salinity stress were compared in the shoot (left) and root (right). For each gene, the RPKM (log 2 ) value without salinity stress is plotted on the horizontal axis, and the corresponding RPKM (log 2 ) value with stress is plotted on the vertical axis. Distributions of the number of transcripts are given outside the plot. The number of highly differentially expressed genes was greater in the root than in the shoot. Pearson's correlation coefficient is given in the corner of each plot. expressed transcripts were common to the two tissues. In response to salinity stress, 5 (shoot) and 13 (root) unannotated transcripts were upregulated (≥2×) ( Table 3). These unannotated transcripts encoded, for example, proteins similar to indole-3-glycerol phosphate lyase and gibberellin 2-beta-dioxygenase (Table 3). Of the other differentially expressed genes (< 2×), Root_-CUFF.256193.0 was upregulated (1.9×); it encoded proteins similar to MSL2 (MscS-LIKE2) (Additional file 3: Table S3). For a complete list of unannotated transcripts see Additional file 3: Table S3. Comparison of sequence-based and array-based technologies for gene expression profiling Our sequence-based gene expression profiling was validated against array-based technology. First, signal intensity and RPKM from the same RNA materials were compared. These two independent measures of transcript abundance were correlated (r = 0.75-0.77), especially at moderately high signal intensities ( Figure 6). However, the correlation was not as strong at extremely high signal intensities (> log 2 32,768 = 15), suggesting that the array signal intensity was saturated but the RPKM was not (Figure 6, root). Next, the ratios of differentially expressed genes were compared. The ratio obtained from the array and the corresponding ratio obtained from RPKM was highly correlated over a broad range (r = 0.72 in shoot and 0.80 in root; Figure 7). The histogram was highest at log 2 1 (= 0), suggesting that most genes were expressed evenly both before and 1 h after salinity stress ( Figure 7). However, a few discrepancies were found: increased changes in the expression of 17 genes were found by using the array (> 4×), but not by using mRNA-Seq (< 2×); conversely, increased changes in the expression of 7 genes were found by using mRNA-Seq (> 4×), but not by using the array (<2×) (Additional file 4: Figure S1). To further examine these discrepancies, we used quantitative real-time polymerase chain reaction (qRT-PCR). The qRT-PCR results suggested that most of the former discrepancy was due to technical inaccuracy in the array experiments. However, qRT-PCR supported only three of the seven mRNA-Seq data in the latter discrepancy (Additional file 4: Figure S1). 
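The array-versus-sequence comparison above amounts to correlating the two abundance measures on a log2 scale. A minimal sketch follows; the pseudocount is our addition for illustration (genes with zero signal cannot be log-transformed), and the input arrays are placeholders rather than data from the study.

```python
# Minimal sketch of the array-vs-RPKM comparison on a log2 scale.
import numpy as np

def log2_pearson(array_intensity, rpkm, pseudocount=1.0):
    """Pearson r between log2(array intensity) and log2(RPKM)."""
    x = np.log2(np.asarray(array_intensity, dtype=float) + pseudocount)
    y = np.log2(np.asarray(rpkm, dtype=float) + pseudocount)
    return np.corrcoef(x, y)[0, 1]

print(log2_pearson([250, 3100, 14, 820], [3.2, 41.0, 0.4, 11.5]))
```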
Despite these discrepancies, our sequence-based approach was generally valid as a gene expression profiling technology for use with previously annotated genes. Discussion Estimation of variation and abundance of whole transcripts in rice How many reads are required to cover whole transcripts in the rice cell? As the number of reads increased, the cumulative coverage approached a plateau (Figure 1). We summed four technical replicates (Table 1). RPKM is widely used to calculate the abundance of each transcript and is linear across a dynamic range [16]. The distribution of RPKM of rice genes ranged from 0 to over 10 4 ( Figure 2); genes involved in photosynthesis in the shoot or in regulation of physiological metals in the root were highly expressed, whereas about 30% of genes had RPKM < 1 (Additional file 2: Table S2). The saturation of sequencing in rice (Figure 1b) was almost the same as in a previous mammalian analysis [16]. According to that analysis, "one transcript in a cell corresponds to 1 to 3 RPKM" [16], so genes having RPKM < 1 might rarely be expressed. However, data on the RNA content of each rice cell are required to calculate the number of existing molecules of RNAs. As rice tissue contains cells of various sizes and types, the relationship between the number of existing molecules and their RPKM has not yet been accurately determined. When we used four technical replicates, about 20% of genes expressed at relatively low levels (RPKM 3-30) did not reach their final RPKM (Figure 1b), suggesting that these model settings were insufficient for calculating the real RPKM of genes expressed at low levels. Summing of the four technical replicates covered 70.1% of all annotated regions, corresponding to 15.8% of 389 Mb [24] of the rice genome (Figure 1a). This result suggests that these regions were transcriptionally active under the experimental conditions. Even though the cumulative coverage was close to a plateau, the coverage rose gradually; the accumulation of about 95 million reads covered 77.0% of annotated regions (Figure 1a), suggesting that some of the reads expressed at low levels were not sequenced. However, the gradual increase in coverage might have been due to the presence of contaminated genomic DNA or a very small amount of partly processed nuclear RNAs, because intron retention is the most prevalent alternative splicing form in rice [25], as it is in Arabidopsis thaliana [26]. Thus, we consider that the summing of four technical replicates of 36-bp reads, corresponding to a total of 1 Gbp of filtered sequences, covered almost all the transcripts in the rice cell under the experimental conditions, although more reads are required to obtain the final RPKM of genes expressed at relatively low levels. Identification of unannotated transcripts by mRNA sequencing mRNA-Seq provides information on whole transcribed genes without the need to rely on annotation (Figure 3), Cufflinks ID (Cufflinks_ID); total nucleotide length of each predicted transcript (NT_Length); RPKM without salinity stress (RPKM_0) or with salinity stress (RPKM_1); calculated ratio of RPKM (rpkm_1/rpkm_0; Ratio); number of amino acids encoded by putative ORF (AA_Length); and name of similar protein (Description) are listed. Differentially expressed Cufflinks transcripts were identified by the G-test with a 1% false discovery rate. Highly differentially expressed genes (ratio ≥2) derived from different loci that had ORFs predicted by BLASTX search are listed. 
"-" means a calculated ratio of infinity because the RPKM without salinity stress (RPKM_0) was 0. Detailed data for all unannotated transcripts are listed in Additional file 3: Table S3. whereas array technology is limited to providing data only on those previously annotated genes and on previously identified ESTs with no known homologies that have corresponding probes on the array. On the basis of the piling up of mapped reads, we predicted 2,795 (shoot) and 3,082 (root) currently unannotated transcripts in RAP-DB ( (Figure 4b, c). Extension of 5' exons might contribute to the making of a different start codon or the shifting of the reading frame of previously annotated genes. Extension of 3' UTRs might contribute to microRNA-mediated control of translation or post-transcriptional RNA metabolism [27,28]. For example, mRNA-Seq provided evidence of the existence of extended parts of previously annotated genes and of the differential regulation of their expression. AK240862, previously annotated as a non-protein-coding transcript, had additional predicted exons distal to the 5' end of the previous gene model, and it encoded an indole-3-glycerol phosphate lyase (Additional file 4: Figure S2). Two neighboring genes (AK072595, AK288107) were also similar to the indole-3-glycerol phosphate lyase gene, suggesting that all three genes were tandemly duplicated. Although all three genes were upregulated in response to salinity stress, their tissue specificities and expression levels differed (Additional file 4: Figure S2), suggesting that their functions diversified after gene duplication. mRNA-Seq also provided evidence of expression of computationally predicted genes. The existence of a number of genes computationally predicted in RAP-DB [15] has not been supported [15] by ESTs [23] or FL-cDNAs [13]. Here, 1,738 (shoot) and 2,297 (root) Figure 6 Comparison of quantification of gene expression by mRNA-Seq and microarray. For each gene, the normalized intensity (log 2 ) from the array is plotted on the horizontal axis, and the corresponding count of RPKM (log 2 ) is plotted on the vertical axis. Signal intensity is the average of that of Cy3 and Cy5 (dye-swap experiments). Pearson's correlation coefficient is given in the corner of each plot. transcripts identified by mRNA-Seq have been mapped on computationally predicted genes, the presence of which was not supported by experiments, suggesting the validity of the computationally predicted gene models in RAP-DB. We will use these sequence-based transcriptome analyses to improve RAP-DB. mRNA-Seq provided details of the bridging sequences between exons, suggesting the presence of splicing junctions, whereas array technology-including whole-genome tiling arrays [29]-provides no information on connecting exons. Because reads that bridge exon boundaries are not mapped directly to the genomic sequence, a mapping technique was required. As a first step, the enumeration of all theoretical splicing junctions within annotated transcripts allows the mapping of bridging reads [12,16,30] by using statistical models [31]. We found that 5.0% to 5.7% of reads formed primary bridges with previously annotated exons (Table 1, Unique-bridged); this was not sufficient to discover sequences bridging unannotated transcripts. Programs such as TopHat [10] and G-Mo. R-Se (Gene Modeling using RNA-Seq) [32] are designed to align reads to form potential splice junctions without relying on known splice sites. 
In this study, sequences flanking potential donor/acceptor splice sites were joined to form canonical (GT-AG) introns between neighboring (but not necessarily adjacent) islands by using TopHat [10]. Even though we used TopHat for our prediction, some of the predicted transcripts remained to be separated-unlike the case with the FL-cDNA sequences-because of the lack of sufficient bridging sequences between the exons (Additional file 4: Figure S3), suggesting that more bridging reads should be sequenced to connect predicted exons. Elongation of the length of each read may also enhance the chance to connect predicted exons. Sequence-based transcriptome analysis for capturing salinity stress-inducible genes in rice mRNA-Seq comprehensively identified salinity stressinducible genes. Unannotated transcripts had ORFs (Table 2) with a mean length of 123 amino acids (root) or 125 amino acids (shoot) (Figure 5), suggesting that these unannotated transcripts could encode functional proteins. Of the unannotated transcripts, 213 (shoot) and 436 (root) were differentially expressed in response to salinity stress ( Table 3, Additional file 3: Table S3). These unannotated transcripts encoded proteins associated with functions such as amino acid metabolism (indole-3-glycerol phosphate lyase) in response to abiotic stress [33], diterpenoid biosynthesis (gibberellin 2-beta-dioxygenase), and mechanosensitive ion channel (MSL2) function [34]. Mechanosensitive ion channels are gated directly by physical stimuli such as osmotic shock and transduce these stimuli into electrical signals [35]. mRNA-Seq also captured previously identified genes involved in salinity tolerance, namely those associated with trehalose synthesis (OsTPP1) Figure 7 Comparison of ratios of differential expression calculated by mRNA-Seq and microarray. For each gene from shoots (N = 14,575) and roots (N = 14,861), the ratio (log 2 ) obtained from the array is plotted on the horizontal axis; the corresponding ratio obtained from RPKM (log 2 ) is plotted on the vertical axis. The distributions of the number of transcripts are given outside the plot. Red line indicates where X = Y. Pearson's correlation coefficient is given in the corner of each plot. (Figure 3), dehydrin (LIP9), ABA synthesis (OsABA2), sugar transport (OsMST3), glycerol transferase (WSI76), and transcription factors similar to those of the DREB family (Additional file 2: Table S2). A substantial number of transcripts were exclusively upregulated only in the root ( Figure 2). As only the root was directly exposed to 1 h of salinity stress, it might take time to induce the expression of more genes in the shoot; OsTPP1 (Figure 3) might be expressed in the shoot after 10 h of exposure, as has been found in Yukihikari rice [36]. With these genes, Nipponbare may have the potential to be tolerant to salinity stress. Rice cultivars such as Nona Bokra and Pokkali are substantially more salinity tolerant than Nipponbare [37], suggesting that the genuine salinity stress tolerance gene might be missing in Nipponbare. The 23 Oryza species are geographically, physiologically, and genetically diverse [38], and many of the genes in cultivated rices have been selected by humans under field conditions, not by environmental stress. These essentially missing genes could serve as potential genetic resources for the improvement of cultivated crops. 
Sequencebased technology can be used to extract such missing genes by the piling-up of short reads on their own genomes without the need to rely on sequence similarity. Overcoming the technical inaccuracy Microarray technology has been used as a sophisticated platform for the expression profiling of previously annotated genes. However, as an array-based technology, evaluation of signal intensities close to background levels tends to cause artifacts in array analysis because of high levels of background noise and/or cross-hybridization [2]; moreover, hybridization efficiency might vary with the probes used, suggesting that the calculation of real molar concentrations is inaccurate. Whereas the Agilent rice 44K Array is designed to quantify 60-mer sequences at the 3'-end of transcripts, mRNA-Seq quantifies transcript abundance on the basis of the number of mapped sequences on the whole gene model. In our study, the two measures of transcript abundance ( Figure 6) and change ratios (Figure 7) were highly correlated, as in a previous report [6]. Moreover, for genes expressed at low or extremely high levels ( Figure 6, root) and for genes differentially expressed in arrays (Additional file 4: Figure S1a), mRNA-Seq seemed to be accurate. Therefore, mRNA-Seq measures the molar concentrations of genes accurately over a broad dynamic range. Biological replication is required for identifying differentially expressed genes through statistical analysis, as in array-based analysis. Unfortunately, with sequence-based transcriptome analysis there are greater costs than with microarrays for cDNA preparation and sequencing; this prevented us from performing further experiments. Illumina has improved its sequencing technology. Each read length has been continuously increased. Efficient base calling by using the latest Illumina data analysis pipeline software improved the quality and quantity of reads from the same raw image data. Controlled hydrolysis of RNA before cDNA synthesis substantially improved the uniformity of sequence coverage, as in a previous report [8]. These technical innovations in hardware and software will enable remarkable progress in reducing costs and in increasing the sensitivity of detection of sequences transcribed at low levels, the accuracy of quantification and detection of splice forms, and the prediction of the whole structures of transcripts. Sequence-based transcriptome analysis has recently been applied to various organisms: Arabidopsis thaliana [4,39], yeasts [40,41], Drosophila melanogaster [6], and human [5]. During this study, two types of rice transcriptome analysis were reported, focusing on the transcriptional differences in two rice subspecies and their reciprocal hybrids [42] and in eight organs from different developmental stages of Oryza sativa L. ssp. indica '93-11' [43]. We analyzed salinity stress-inducible transcripts and constructed gene models based on the pilling up of short reads by using the Cufflinks program. This approach should help to discover novel gene models without reliance on gene annotation. Conclusions Microarray-based gene expression profiling is limited to the analysis of annotated genes. In our mRNA-Seq analysis, unannotated salinity stress-inducible transcripts were identified on the basis of the piling up of mapped reads without reliance on gene annotation or FL-cDNA sequences. Some of these novel transcripts had ORFs encoding putative functional proteins and were differentially expressed in response to salinity stress. 
mRNA-Seq was valid as a gene expression profiling technology for quantifying the abundance of previously annotated genes. Our findings will contribute to improvement of our RAP-DB and to further sequence-based gene expression profiling in various organisms. Plant material and salt stress treatment Seeds of rice (Oryza sativa L. 'Nipponbare') were germinated in the dark at 28°C on a sterilized germination tray. Germinated seeds were evenly distributed on 96well PCR plates supported by a plastic container. Seeds were grown in a growth chamber at 28°C, as previously described [44]. After the seedlings had been grown for 7 days, they were transferred on their 96-well plates into containers filled with 150 mM NaCl solution, or with control solution, and placed at 28°C in a growth chamber for 1 h. Four kinds of tissue (normal shoot, normal root, shoot with 1-h salinity stress, or root with 1-h salinity stress) were collected and immediately frozen in liquid nitrogen. For RNA extraction from each treatment group, 10 plants were collected and mixed, to minimize the effect of transcriptome unevenness among plants. mRNA sequencing Total RNA was extracted by using an RNeasy Plant kit (Qiagen, Hilden, Germany). RNA quality was calculated with a Bioanalyzer 2100 algorithm (Agilent Technologies); high-quality (RNA Integrity Number > 8) RNA was used. Total RNA samples (10 μg) were subjected to cDNA construction for Illumina sequencing, in accordance with the protocol for the mRNA-Seq sample preparation kit (Illumina). Oligo(dT) magnetic beads were used to isolate poly (A) RNA from the total RNA samples. The mRNA was fragmented by heating at 94°C for 5 min. First-strand cDNA was synthesized using random hexamer primers for 10 min at 25°C, 50 min at 42°C, and 15 min at 70°C. After the first strand had been synthesized, dNTPs, RNaseH, and DNA polymerase I were added to synthesize second-strand DNA for 2.5 h at 16°C. The ends of double-stranded cDNA were repaired by using T4 DNA polymerase and Klenow DNA polymerase and phosphorylated by using T4 polynucleotide kinase. A single "A" base was added to the cDNA molecules by using Klenow exo-nuclease, and the fragments were ligated to the PE adapters. cDNAs with 200 ± 25-bp fragments were collected. The purified cDNA was amplified by 15 cycles of PCR for 10 s at 98°C, 30 s at 65°C, and 30 s at 72°C using PE1.0 and PE2.0 primers. Mapping of short reads, detection of bridging sequences, and prediction of transcripts Table 1. In our preliminary experiment, two independent sequencing runs using the same cDNA were highly correlated (r = 0.99). The default Illumina pipeline quality filter, which uses a threshold of CHASTITY ≥ 0.6, was used to identify clusters with low signal-to-noise ratios. CHASTITY is defined as "the ratio of the highest of the four (base-type) intensities to the sum of the highest two." Passed filter reads were mapped onto both the Nipponbare reference genome (IRGSP build 4.0) and the spliced exon junction (SEJ) sequences by SOAP ver. 1.11 [45], allowing up to 2 bp of mismatch or up to 3 bp of indels. SEJ sequences were constructed by concatenating the 40 bases at the 3' end of the upstream exon to the 40 bases at the 5' end of the downstream exon for all RAP2 transcripts [14,15] at a locus. To calculate the cumulative coverage of the genome or annotated regions, reads were mapped by BWA (Burrows-Wheeler Aligner) [46] with the default option. To predict transcripts, a series of programs-Bowtie [9], TopHat [10], and Cufflinks [11]was used. 
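As a sketch of the spliced exon junction (SEJ) construction described above, where the 40 bases at the 3' end of the upstream exon are concatenated with the 40 bases at the 5' end of the downstream exon, the following simplified function operates on plain exon sequences of a single transcript; strand handling and genomic coordinate bookkeeping are omitted.

```python
# Simplified sketch of SEJ construction: join the trailing 40 bases of the
# upstream exon to the leading 40 bases of the downstream exon for each
# adjacent exon pair of a transcript.

def build_sej(exon_seqs, flank=40):
    """exon_seqs: ordered exon sequences (5'->3') of one transcript."""
    return [up[-flank:] + down[:flank]
            for up, down in zip(exon_seqs, exon_seqs[1:])]

# toy transcript with three exons
print(build_sej(["A" * 100, "C" * 100, "G" * 100]))
```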
Briefly, mRNA-Seq reads were mapped against the whole reference genome (IRGSP build 4.0) by using Bowtie software. An initial consensus of exon sequences was extracted from the mapped reads. The reads that did not align to the genome but that were mapped to these potential junctions by TopHat were considered to bridge splice junctions. Cufflinks constructs gene models (RPKM ≥ 2, length ≥ 100 bp) on the basis of the exons and bridging sequences predicted by Bowtie and TopHat. ORFs were predicted by BLASTX search against UniProt (Swiss-Prot) and RefSeq (reviewed and validated) or by longest-ORF search (≥20 amino acids). Microarray analysis The same RNA material was shared for use in the Illumina sequencing and the microarray experiments and qRT-PCR analysis. The rice 44K oligo microarray (Agilent Technologies) contained approximately 44,000 60mer oligonucleotides synthesized on the basis of RAP annotation. For each microarray experiment, 400 ng of total RNAs was used for Cy3-or Cy5-labeled complementary RNA (cRNA) synthesis. DNA microarrays were hybridized for 16 h with 825 ng of Cy3-and Cy5-labeled probes from salinity-stressed or unstressed plants. The microarray experiment was repeated with color-swapping of Cy3 and Cy5. Agilent Feature Extraction Software (ver. 8.5.1.1) was used to quantify microarray images. Gene-Spring (ver. 10) software (Agilent Technologies) was used for background subtraction, LOWESS normalization, and extraction of normalized raw signal intensities for all probe sets from each array. Normalized raw signal intensities were compared with the corresponding RPKM. Parts of the signals were removed for further analysis if they were not positive, significant, or above background levels. The hybridization experiments and array scanning were performed at an open laboratory run by the DNA Bank of the National Institute of Agrobiological Sciences (http://www.dna.affrc.go.jp/). Quantitative RT-PCR (qRT-PCR) qRT-PCR primers were designed on the basis of the annotation of the RAP-DB (Additional file 5: Table S4). One microgram of total RNA was reverse-transcribed in a 20-μL reaction mixture of Transcriptor First Strand cDNA Synthesis Kit (Roche, Basel, Switzerland). qRT-PCR was performed in a 20-μL reaction mixture containing 2× SYBR Master Mix (Roche) and 1 μL of cDNA template (1:10 diluted). qRT-PCR of three technical replicates for each sample was performed using a LightCycler480 System with its relative quantification software (ver. 1.2) based on the delta-delta-Ct method (Roche). qRT-PCR was performed for 10 s at 95°C, 5 s at 55°C, and 10 s at 72°C. The detection threshold cycle for each reaction was normalized against the expression level of the ubiquitin gene.
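The relative quantification used for the qRT-PCR validation follows the delta-delta-Ct method, with ubiquitin as the reference gene. A generic sketch is given below; the Ct values in the example are illustrative, not measurements from this work.

```python
# Generic sketch of relative quantification by the delta-delta-Ct method.

def fold_change_ddct(ct_target_stress, ct_ref_stress,
                     ct_target_control, ct_ref_control):
    """Fold change = 2 ** -(dCt_stress - dCt_control)."""
    d_ct_stress = ct_target_stress - ct_ref_stress
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_stress - d_ct_control)

# target amplifies two cycles earlier (relative to ubiquitin) after stress
print(fold_change_ddct(24.0, 20.0, 26.0, 20.0))  # -> 4.0
```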
Quasiparticle Self-Consistent GW -Bethe-Salpeter Calculations of the Low-Lying Excitations of the Photosystem II Reaction Center The GW -Bethe-Salpeter Equation (BSE) method is promising for calculating the low-lying excited states of molecular systems. However, so far it has only been applied to rather small molecules, and in the commonly implemented diagonal approximations to the electronic self-energy it depends on a mean-field starting point. We describe here an implementation of the self-consistent and starting-point independent quasiparticle self-consistent (qs GW )-BSE approach which is suitable for calculations on large molecules. We herein show that self-consistency in the eigenvalues only leads to an unfaithful description of certain excitonic states for Chlorophyll dimers while the qs GW -BSE excitation energies are in excellent agreement with experiment. We use the new implementation to calculate the lowest excitation energies of the six chromophores of the photosystem II (PSII) reaction center (RC) with nearly 2000 correlated electrons in total. Primary charge separation in the PSII RC occurs along the D1 branch via initial formation of Chl D1+ -Pheo D1- and subsequent hole transfer leading to P D1+ -Pheo D1- . We find the Chl D1+ -P D1- charge transfer (CT) state to be lowest excited state, but do not observe the Chl D1+ -Pheo D1- CT state at low energy. This is most likely to the neglect of the protein environment. Notwithstanding this discrepancy, our results are in closer agreement to experiment than the ones of previous calculations based on range-separated hybrid kernels which only predicted local excitations among the lowest excited states of the PSII RC. served. 18 To mitigate such errors, RSHs can be parametrized empirically for each system under investigation (as for example done in references in 19 and 20), but this makes them non-transferable and unreliable for general applications. More systematic parametrization procedures for range-separated functionals have been suggested as well 21-24 but these are labour-intensive and not readily available as "black-box" procedures. Turning to wave-function based methods for excited states, we find the second-order algebraic diagrammatic construction scheme (ADC(2)) 25,26 and coupled cluster [27][28][29][30][31] with approximate doubles (CC2) 32 easy to apply and reasonably cost-efficient. CC2 results are typically in good agreement with more involved methods like equation-of-motion (EOM) CC 33 with singles and doubles (EOM-CCSD) or similarity-transformed (ST) EOM 34,35 -CCSD. 36,37 For these methods we are aware of one study of a tetrameric model by Suomivuori et al. 38 using ADC (2) together with the spin-opposite-scaled 39 and reduced-virtual-space (RVS) 40 approximations. Unfortunately, they did not include the Pheophytin chromophores in their calculations, which are known to play a key role in the initial charge separation immediately after photoexcitation. 14, 41,42 This is potentially possible, but we note that most applications of wave-function based methods 16,18,43,44 focus on single chromophores. Utilizing subsystem methods [45][46][47][48][49][50][51] the applicability of these methods can be extended. In this family of methods one describes the full RC by an effective Hamiltonian with a limited amount of levels for each chromophore. The information needed to build such an effective Hamiltonian are the monomeric excitation energies as well as the inter-monomeric couplings. 
These parameters can be computed in a first principles manner with various electronic structure methods. [52][53][54] While the subsystem approach can be used with high-level monomer calculations, a drawback is that commonly used approximations to calculate the couplings between the chromophores are often not accurate enough. 15,40,55 In the current work we will therefore examine how large a system can be treated directly without having to resort to partitioning and subsystem methods. As the states of interest are the lowest energy ones, we thereby focus on a limited number of states, but describe them in a supermolecular fashion that fully acocounts for all intermolecular couplings of the chromophores. Our approach is based on the GW -BSE method that we will briefly summarize in the following. We first note that energy levels of the excitonic states correspond to the poles of the 2-particle generalized susceptibility. [56][57][58][59] This quantity can be obtained from the interacting single-particle Green's function G 1 and the electronic self-energy Σ, a non-local, non-Hermitian, and frequency dependent one-electron operator, via a Bethe-Salpeter equation (BSE). 60-62 G 1 is obtained from a Dyson equation with Σ as its kernel, while Σ itself depends implicitly on the 2-particle Green's function. [62][63][64] As obtaining the full generalized susceptibility requires N 6 operations, it is advantageous to decouple the BSE from the Dyson equation for G 1 . This is done by using an approximation to the self-energy which only depends on the density-density response. 65,66 A popular example is the GW approximation (GWA), with the screened Coulomb interaction W 67,68 calculated within the random phase approximation (RPA). 69 Typically, the Dyson equation for G 1 is solved within the GWA first. Only afterwards, the non-interacting 2-particle Green's function and the corresponding kernel in its zero-frequency limit are constructed and one solves for a few or all roots of the generalized susceptibility. [70][71][72] If only a few excitonic states are needed, one may thereby use computationally efficient iterative diagonalization techniques. 72,73 This procedure is known as the GW -BSE method and is increasingly applied to compute the lowest electronically excited states of molecular systems. 51,54, For such applications, the GW part is typically the computational bottleneck of a GW -BSE calculation. 87,89,101 The issue has been addressed over the last years: Many implementations of G 0 W 0 and evGW with reduced asymptotic scaling with system size have been developed [104][105][106][107][108][109][110][111][112][113][114] often producing results in excellent agreement with conventional GW implementations. 104,108,109 Another issue is related to the common approximations in solving the GW equations. Typical calculations start from a Kohn-Sham (KS)-DFT or HF Green's function followed by a perturbative update of the QP energies (G 0 W 0 ). 115,116 This procedure comes with the notable disadvantage that the outcome of such a calculation will heavily de-pend on the choice of the underlying exchange-correlation (XC) functional. 81,[117][118][119][120] Achieving self-consistency in the eigenvalues only (evGW ) can remove this dependence on the initial density functional approximation to a large extent but not completely. 87,101,121 Instead, one can also start from the full GW self-energy and take the Hermitian part only to arrive at a set of effective single-particle equations. 
122,123 In QP self-consistent GW (qsGW ), then only the low-frequency limit of the self-energy is considered, [124][125][126] and the non-interacting G 1 closest to the GW G 1 is selected. 127 While this approach has been shown to be more accurate than G 0 W 0 and evGW for a wide range of molecular systems, 128 qsGW has until now rarely been used in molecular calculations. With only a few exceptions, 129,130 low-order scaling GW algorithms only target the screened Coulomb interaction, since this requires only evaluation of the diagonal elements of the self-energy. The computational cost for obtaining the full self energy is much larger, and most implementations therefore become inefficient if the full self-energy is required. To address this issue, we have recently presented a low-order scaling implementation of qsGW . 130 In the present work, we combine it with an efficient solver for the BSE, resulting in a fast, low-scaling, and starting-point independent implementation of the GW-BSE approach. The GW -BSE method has recently been shown to reproduce experimental low-lying excitation energies of Chls with high accuracy. 101 So far, it has only been applied to monomeric models of PSII. In this work, we will first give a brief account of the (low-scaling) implementation of the GW -BSE approach in section 2. In section 4 we first contrast the qsGW method to evGW for monomers and then confirm the excellent agreement with experiment and other quantum chemical calculations for both methods. We then use the qsGW -BSE implementation to calculate the low-lying excitation of the hexameric complex. Finally, section 4 summarizes and concludes this work. The GW -BSE formalism The interacting n-particle Green's functions corresponding to an N -electron system with ground state Ψ (N ) 0 are defined by (1) Here, T is the time-ordering operator,ψ is the field operator and a number 1 = (r 1 , σ 1 , t 1 ) collects space, spin-and time indices. The relevant cases are n = 1, 2. For the n = 2 case, we further restrict ourselves to the excitonic part only with t 3 = t 4 and t 1 = t 2 . By splitting the self-energy into Hermitian and anti-Hermitian part and discarding the latter one, we can restrict the solution of (2) to its QP part only. 122,123,135,136 We then have an effective single-particle problem and restricting the self-energy further to its static limit and transforming to the molecular orbital basis {φ n } n=1...N (in which the single-particle Hamiltonian is diagonal), we arrive at where the n are the single-particle energies. Solving eqs. (9) and (11)-(14) self-consistently is known as the qsGW approximation within the RPA. [124][125][126] After solving the qsGW equations self-consistently, we can then use the zero-frequency limit of the self-energy (13) in (6) in (4). As it is typically done, we also set δW δG ≈ 0. This is referred to as the qsGW -BSE approach. After Laplace transformation to the complex frequency plane, eq. (4) can be transformed into an eigenproblem in a basis of particle-hole states whose solution provides the Lehmann representation of L (see for example ref. Ω S is a neutral excitation energy, (X, Y) T S contains the expansion coefficients of the corresponding eigenvector and for a closed-shell system the matrix elements of A and B are respectively defined as where we have chosen to reserve the labels i, j, . . . for occupied and a, b, . . . for virtual orbitals. The QP energies entering the equations are the ones from (14). 
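For context, the excitation energies Ω_S are the roots of the non-Hermitian particle-hole eigenproblem built from the blocks A and B defined above. The following sketch solves a small dense model problem through the standard Hermitian reduction (A − B)^{1/2}(A + B)(A − B)^{1/2} Z = Ω² Z, which assumes A − B is positive definite; the matrices here are random symmetric test data, not BSE matrix elements from this work.

```python
# Hedged sketch: roots of the particle-hole problem
#   [[A, B], [-B, -A]] (X, Y)^T = Omega (X, Y)^T
# via the Hermitian reduction, assuming A - B is positive definite.
import numpy as np

def sym_sqrt(M):
    """Square root of a symmetric positive (semi)definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def ph_excitation_energies(A, B):
    half = sym_sqrt(A - B)
    omega_sq = np.linalg.eigvalsh(half @ (A + B) @ half)
    return np.sqrt(np.clip(omega_sq, 0.0, None))

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n)); A = 0.5 * (A + A.T) + n * np.eye(n)
B = 0.1 * rng.normal(size=(n, n)); B = 0.5 * (B + B.T)
print(ph_excitation_energies(A, B))
```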
Implementation For our implementation of the qsGW methods we refer to our previous work. 107,130,139 We expand single-particle Green's functions and the self-energy in a basis of Slater type functions (primary basis) which is related to the MOs by while all quantities appearing in (12) are expanded in a basis of auxiliary fit functions (auxiliary basis). We then switch to the particle-hole basis to solve (15), whereby the matrix elements in (16) are expanded in the basis of MOs. Since we do not use the screened interaction at zero frequency in our GW implementation, we calculate the zero-frequency component of P from the imaginary time representation of the polarizability by and we then use (12) to obtain W (ω = 0). Replacing the matrix elements of the screened Coulomb interaction by the ones of the bare one in (16), and using the HF self-energy in (14), the TD-HF method is obtained. It is clear, that any solver which can be used to solve (15) in the TD-HF case, can also be used for GW -BSE. We use an extension of the Davidson algorithm 140 originally proposed by Stratmann and Scuseria. 73 It solves (4) by projecting the generalized problem on a sequence of orhonormal subspaces in which (19) is solved. k denotes the size of the nth subspace and the b k are linear combinations of particle-hole states. The vectors forming the subspace are then updated until the subspaces are converged. The procedure can be interpreted as an iterative optimization of the basis of particle-hole states, where the part which does not carry useful information (i.e. the particle-hole transitions which do not contribute to the low-lying excitons) is projected out. The time-determining step in the diagonalization is the projection of the eigenproblem in the full space on the subspaces. The term containing the bare Coulomb potential is easily evaluated following the procedure in 141. For the matrix elements of the screened interaction in the (n + 1)th subspace iteration, we define a column in the subspace labeled by s i , s j , . . . , s a , s b , . . . , respectively, as In the minus case, this is equivalent to the evaluation of the greater or lesser component of self-energy for a single imaginary time point. In the plus case, a similar algorithm can be used, but the resulting matrix needs to be antisymmetrized. We solve (21) in the basis of Slater functions and then transform to the subspace basis functions. For detailed working equations, we refer to appendix B. A key element in our approach is to use Pair-atomic density fitting (PADF) 107,142-146 to calculate the transformation from auxiliary basis to primary basis and back. in PADF, all the coefficients in the transformation matrix corresponding to auxiliary functions which are not centered on the same atoms as the primary basis functions are restricted to zero. While making the resulting basis transformation very efficient this also is an approximation which does not necessarily conserve important properties of the original matrices, like for example positive definiteness of he Coulomb potential. 145 These deficiencies can always be traced back to products of diffuse Slater functions which are difficult to expand in the auxiliary basis. To overcome these issues we introduce a projection technique to remove problematic linear dependencies from the primary basis which is described in appendix C. 
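The iterative solver described above is the Stratmann-Scuseria extension of the Davidson algorithm for the non-Hermitian particle-hole problem. The sketch below is only a plain Davidson iteration for a symmetric matrix, meant to illustrate the idea of iteratively optimizing a small subspace and projecting the full problem onto it; it is not the production algorithm.

```python
# Hedged sketch of a plain Davidson subspace iteration for the lowest
# eigenpairs of a symmetric matrix (dense toy problem, not BSE data).
import numpy as np

def davidson(H, n_roots=3, tol=1e-6, max_iter=50):
    n = H.shape[0]
    diag = np.diag(H)
    V = np.eye(n)[:, np.argsort(diag)[:n_roots]]   # initial guess vectors
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)                     # orthonormal subspace basis
        w, s = np.linalg.eigh(V.T @ H @ V)         # projected subspace problem
        vals, vecs = w[:n_roots], V @ s[:, :n_roots]
        resid = H @ vecs - vecs * vals
        if np.linalg.norm(resid) < tol:
            break
        # diagonal preconditioner generates the new search directions
        V = np.hstack([V, resid / (vals - diag[:, None] + 1e-12)])
    return vals, vecs

rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=(n, n))
H = 0.5 * (A + A.T) + np.diag(np.arange(n, dtype=float))
vals, _ = davidson(H)
print(vals)
print(np.linalg.eigvalsh(H)[:3])   # reference from full diagonalization
```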
Computational Details All calculations have been performed with a locally modified development version of ADF2022.1 147,148 The GW implementation is the same as outlined in refs The lowest 12 eigenstates of (19) for the hexameric complex have been calculated using the DZ (double-ζ) basis set and Normal numerical quality with 12 imaginary time and frequency points each. For systems with up to n = 4 chromophores, we always calculate the lowest 3n eigenstates of (19), using TZP (triple-ζ + polarization) 150 77 we found this approximation to change the low-lying excitation energies by only around 10-20 meV compared to calculations including all particle-hole pairs. 151 We took into account scalar relativistic effects in the zeroth-order approximation. 152-154 The threshold s described in appendix C has been set to 5 × 10 −3 . If not stated otherwise, in all qsGW calculations we first perform a PBE0 calculation with 40 % exact exchange (PBEH40), which in our experience is a good preconditioner for qsGW and leads to fast convergence. 155 Aside from numerical inaccuracies, the final results are independent of this choice which we have verified in ref. 130 and which we will verify also for the case of Chla in the next section. We also performed evGW calculations based on the LDA and PBEH40 functionals (evGW @LDA, evGW @PBEH40). We terminate the evGW calculations if the HOMO QP energy difference between two subsequent iterations falls below 3 meV. For qsGW , we terminate the calculations when the Frobenius norm of the difference between the density matrices of two subsequent iterations falls below 5×10 −9 . 130 In all KS calculations we set the threshold below which we set eigenvalues of the inverse of the overlap matrix to zero during he canonical orthonormalization procedure to 5 × 10 −3 . To compare our method to the RSH TD-DFT approach, we also performed calculations using the CAMY-B3LYP kernel using the TZP basis set and Good numerical quality. We thereby also investigated the effect of the protein matrix using the conductor like screening Starting-point dependence As discussed in the introduction, a major advantage of qsGW over evGW is that the former doesn't depend on the choice of a DFT functional. To illustrate this, we report here vertical and the number of iterations needed for convergence is essentially the same, there is little advantage to be gained by using evGW instead of the more robust qsGW approach. considerably. Considering this difference, we note that STEOM-CCSD is not necessarily a reliable reference for qsGW . In STEOM-CCSD, a much larger number of diagrams is considered in the single-and two-particle Green's functions compared to GW . 171 QP approximations to GW approximate the effect of these diagrams instead by neglecting the vertex. 126 The diagrams contained in GW are not a subset of the ones contained in EOM-CCSD but only of the ones contained in EOM-CCSDT. 171 Accounting for triples (at least to some extent) is known to be of high importance for the reliable description of charged 172 and neutral excitations. 36,37,173 Consequently, STEOM-CCSD shows mean signed errors compared to EOM-CCSDT calculations of around 0.1 eV for a set of medium organic molecules, but errors can be as large as 0.5 eV in some cases. 36 Moreover, apart from the neglect to triple excitations, the DLPNO approximation can also introduce some artifacts. 
The pairs which are treated on the CC level are selected based on an MP2 calculation 165 which is not always reliable for systems with strongly screened electron-electron interactions. 174,175 Most strikingly, the VEEs Ω 3 and Ω 4 of the BSE calculation based on evGW @LDA are almost 0.2 eV lower than the ones based on evGW @PBEH40, and in the former calculation, the four lowest excited states are almost degenerate. The character of these excitations are In contrast to the GW -BSE VEEs, the CAMY-B3LYP-TD-DFT results for the crystal structures are in excellent agreement with the available experimental gas-phase data. 18, 163,179 In light of the factors just discussed, the excellent agreement of the CAMY-B3LYP-TD-DFT calculations is most likely due to an overestimation of the true VEEs (as shown in tables 4 and 3) which then cancels with the errors due to inadequate geometries Basis Set Errors Comparing the different GW -BSE methods, we find that evGW @PBEH40 and qsGW are always in close agreement, while the evGW @LDA-BE VEEs are typically a little higher. Overall, the results are in qualitative agreement to each other for D140. For D164, the splitting between both Q y VEEs (∆ Qy−Qy ) calculated using the evGW @PBEH40 and qsGW kernels are in good agreement. The value of 30-4m meV aligns better with experimental observations 177 and also the calculations of Suomivuori et al. 38 However, as for the calculations based on the geometry optimized structure, there is again a discrepancy between the evGW -BSE results obtained for the LDA and PBEH40 starting points. With ∆ Qy−Qy = 0.08 eV, evGW @LDA considerably overestimates the excitonic splitting and also the VEEs based on the LDA starting point are much higher than with the PBEH40 starting point. Character VEE transition weight f The most complete model of the PSII RC we consider in this work comprises six chromophores with 476 atoms in total. Its structure is shown in figure 3 together with the PBEH40/DZ frontier single-particle orbitals and the low-lying excitations are characterized in table 7. In our current implementation, using larger basis sets is complicated by the requirement to use large auxiliary fit sets which then leads to very large matrix representations of the RPA polarizability and the screened Coulomb interaction. As will be discussed in see section 4.5, storing them on disk and transferring them to the CPU is currently the bottleneck of our implementation. This complicates the calculation of the hexamer using the TZP basis set. Below, we will therefore also discuss results for tetramers for which the use of the TZP basis set was feasible. The lowest two excited states are almost degenerate. The lowest one is a local Q y excitation on Pheo D2 . This is expected, since the lowest VEE of the isolated Pheophytin chromophore has been predicted to be lower than the one of Chla. The following state has pronounced CT character and corresponds to the transfer of an electron from P D1 to Pheo D1 . The third and fourth excited state are nearly degenerate as well, but their is a considerable gap of 50 meV between the second and the third excited state. These states correspond to CT from Pheo D1 to P D1 and again to a local excitation on P D2 . This is interesting for many reasons: First, Frankcombe et al 11 calculated the low-lying excited states of the same hexameric complex without explicit consideration of the protein matrix as in our study, but they used TD-DFT with a RSH kernel. 
They did not find any low-lying CT state which could be related to charge separation, which is in clear disagreement with time-resolved spectroscopic experiments 41,42 showing that the primary electron transfer in the RC occurs from an exciton localized on Chl D1 to Pheo D1 , followed by a tranfer of the hole to P D1 . This would point to the mixing in of low-lying CT states with pronounced Chl D1 + -Pheo D1 and Chl D1 + -P D1 character in calculations of excitation energies. In our model calculations, we only find the second type of CT. Both Sirohiwal et al. 7 and Tamura et al. 14 identified also the first CT state in recent computational studies and both studies explicitly included the protein environment on a molecular mechanical level. In ref. 7, it was found that the protein environment is exclusively responsible for the unidirectional CT along the D1 branch and the occurrence of the Chl D1 + -Pheo D1 state. Therefore, the absence of the explicit protein environment in our work is most likely the reason why we do not observe this particular CT state. Table 8: The four lowest qsGW -BSE excited states of the T329 (Pheo D1 -Chl D1 -P D1 -P D2 ) and T328 (Chl D1 -P D1 -P D2 -Chl D2 ) models: The excitation energies Ω Besides the neglect of the protein environment, the use of the DZ basis set is another weak point in the six-chromophore model. It is not unlikely that the lack of polarization functions favours local excitations and one might therefore expect that using a larger basis set the character of the excitations might be different. To check this, we report here the Similarly to the hexamer, the lowest excited state of T329 is a CT state from P + D2 -Pheo D1 -. Note, that the distinction between D1 and D2 is not necessarily meaningful in the absence of the explicit protein environment. Also, it was found in ref. 14 that an exciton can also form initially in the D2 branch which is then subsequently transferred to the D1 branch. Two aspects are of importance here: First, there is no indication of CT from Chl D2 to Pheo D1 among the first four excited states of T329, which also validates our results for the hexamer. We also repeated the calculations for T329 with the DZ basis set. With the much smaller basis set, the P D2 --Pheo D1 + CT is the energetically lowest state, followed by P D2 + -Pheo D1 -. Second, the P D2 + -Pheo D1 state is 30 meV lower in energy than the lowest excited state of T328. The lowest two excited states of T328 are dominated by particle-hole transitions, with the hole located on the outer monomers (P D1 and P D1 ) and the particle located on the inner ones (Chl D1 and Chl D2 ), and the other way round for the next pair. This result is different from the one by Suomivuori et al. 38 who found two of these excitations to be delocalized over the P D1 /P D1 pair and the other two to be localized on Chl D1 and Chl D2 , respectively. In conclusion, in the absence of the protein environment, the P D1 + /P D2 + -Pheo D1 -CT is the lowest excited state in the PSII RC, independent of the basis set. Conclusions We have calculated the low-lying excited states of the RC in PSII using qsGW -BSE. So far, GW -BSE calculations have been limited to rather small systems. 87,94,101 We presented here a new implementation of the method which enables its routine application to much larger systems. As opposed to a recently developed simplified GW -BSE scheme, 180 our implementation does not introduce any empirical approximations to the matrix elements of the BSE Hamiltonian. 
Provided a low-order scaling implementation of the GW method and an iterative solver for large eigenproblems are available, the proposed algorithm is easy to implement. We calculated the 12 lowest excited states of the complete complex of six chromophores in the PSII RC, with almost 2000 correlated electrons, on the DZ level. Owing to the small basis set, the calculation could be performed in less than 2 days on a single compute node. We have also calculated the 12 lowest excited states of a tetrameric complex with around 1300 correlated electrons with a TZ + polarization basis set. With around 6 days of wall time, the latter calculation is far more expensive, even though the system is 50 % smaller. Low-order scaling implementations like ours, which rely on sparsity in the primary basis, usually do not scale well with the size of the basis set. Finite basis set correction techniques for many-body perturbation theory might therefore be a promising solution to circumvent this problem. 162,181-183 Applications to larger systems with polarized basis sets are currently complicated by the requirement to store large matrices on disk. This problem could be overcome by using a very large number of nodes, which would enable us to keep them in memory. qsGW-BSE is a theoretically more rigorous variant of the GW-BSE method than evGW-BSE, since it is independent of a mean-field reference calculation. We have shown here explicitly for Chla dimers that evGW-BSE might lead to different excitations for different starting points. This is in contrast to the generally good agreement for monomers. 101

A Electrochromatic shifts

In this appendix, we quantify the electrochromatic shift of the excitation energies of two monomeric and dimeric as well as one tetrameric model of the PSII RC due to solvent effects.

B Calculating the BSE Hamiltonian

The most time-consuming step in the solution of the BSE is to build the matrix elements of the 2-particle Hamiltonian, eq. (21). Let us denote by K (±) a column of A ± B, as defined in (21), in the primary basis. Within the density fitting method, we expand products of atomic orbitals in a basis of auxiliary functions. To introduce the PADF variant of this technique, we label atomic orbitals as µ, ν, κ, λ, auxiliary functions as α, β, γ, δ and atomic centers as A, B, C . . . . We also define the convention that µ, α ∈ A, ν, β ∈ B, κ, γ ∈ C and λ, δ ∈ D, i.e. µ and α only label functions centered on atom A, and so on. The PADF expansion of the products of AOs can then be written as in eq. (22), where a factor of 1/2 in the case A = B is introduced to facilitate evaluation with the same algorithm while avoiding double counting. Writing (21) in the primary basis and inserting (22), the contribution to K (±) from each atom pair (A, B) follows; in these and in the following quantities, the matrices are restricted to the primary basis functions centered on the atoms denoted by the indices in the superscripts. Defining suitable intermediates, we can then write the working expression, where in the + case b is symmetric, and antisymmetric otherwise. These are the working equations with which (21) is implemented. They are similar to the ones for the self-energy, outlined in ref. 146.

C Elimination of diffuse functions from the primary basis

In addition to the usual canonical orthonormalization 186 performed during the SCF prior to the qsGW calculation, we introduce here an additional step in order to improve the numerical stability of our algorithm.
To project out overly diffuse functions from the primary basis, we first diagonalize the overlap matrix S of the primary basis functions. We then remove a column u i from the resulting transformation matrix if the corresponding eigenvalue λ i is smaller than some predefined threshold s . From the retained columns we define a projector and use it to transform all matrices in the primary basis (the Green's functions, the self-energy contributions, as well as the matrices defined in (21)), where K would be the original exchange-like matrix in the primary basis including the diffuse part. This transformation is not necessary if a very large auxiliary basis set is used and is switched off in that case.
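A minimal numerical sketch of this procedure is given below, assuming NumPy and a simple eigenvalue threshold. The function and variable names are illustrative, and the actual implementation may additionally rescale the retained eigenvectors (canonical orthonormalization) rather than using the bare truncated transformation shown here.

import numpy as np

def project_out_diffuse(S, K, threshold=1e-6):
    """Drop near-linearly-dependent (overly diffuse) combinations of primary
    basis functions, then project a matrix K into the retained subspace.

    S : (n, n) overlap matrix of the primary basis
    K : (n, n) matrix in the primary basis (e.g. an exchange-like matrix)
    threshold : eigenvalues of S below this value are discarded (illustrative value)
    """
    # Diagonalize the overlap matrix: S u_i = lambda_i u_i
    eigvals, eigvecs = np.linalg.eigh(S)

    # Keep only eigenvectors whose eigenvalues exceed the threshold
    keep = eigvals >= threshold
    U = eigvecs[:, keep]              # n x m truncated transformation, m <= n

    # Transform K into the reduced, numerically well-conditioned space
    K_proj = U.T @ K @ U              # m x m
    return K_proj, U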
Food Consumption Practices of Men and Women across Rural-Urban Interface of South Indian Megacity Bangalore Background: Food consumption practices involving dietary diversity and healthy and unhealthy practices have a great influence on the nutritional and health status of the individual. Men and women behave differently and have different consumption patterns due to various factors. The urbanization gradient along the rural-urban interface of the Bangalore megacity allows a comparative study of these factors. Aims: To compare food consumption practices between men and women across the rural-urban interface of Bangalore, India. Methodology: Men (n=150) and women (n=150) from 300 middle-income households in the rural-urban interface of Bangalore were surveyed for dietary diversity score (DDS), healthy and unhealthy dietary practices and responses to questions on health and nutrition. Results: Findings revealed that the least DDS was recorded in the transition area among both men (48.0%) and women (47.7%). In rural areas, the maximum difference in healthy habit score existed between men (50.8%) and women (44.0%). The average unhealthy habits score was higher among women in rural (33.2%) and transition (35.4%) areas, whereas in urban areas men had the higher score (41.8%). Health and nutrition aspects indicated that fasting on religious belief was more practiced by women in the transition area (56%). Consumption of health supplements was more common among women, especially in urban areas (34%). Conclusion: It can be concluded that women have poorer food consumption practices compared to men. Even though women are observed to be more health conscious than men, their dietary habits are compounded by various factors such as socio-cultural practices, occupation and urbanization. In this regard, nutrition programmes must be strengthened to decrease risk factors for non-communicable diseases and to improve the overall health of individuals. INTRODUCTION The relationships between our foods, the nutrients present in them and our health are complex, but have a significant and far-reaching influence on individuals and society. In recent years, changing diets and dietary habits place an increasing burden on healthcare systems. Eating a well-balanced diet, with adequate nutrients and appropriate calories, is a fundamental requirement for continued health. An appropriate diet contributes to healthy development, healthy ageing and greater resilience against disease [1]. Similarly, a poor or inappropriate diet places people at greater risk of infection and a range of chronic illnesses including cancer, type 2 diabetes and cardiovascular disease [2]. Unhealthy dietary practices, sedentary lifestyle and obesity have emerged as major risk factors for NCDs. All these risk factors are lifestyle related and are influenced by the change from a rural to an urban lifestyle. Even in rural areas, with modernization and the advent of mass media, there is a gradual shift to an urbanized lifestyle [2]. Men and women behave differently and have different consumption patterns. Food choice is a complex human behaviour and is controlled by many interrelated factors ranging from biological mechanisms and genetic profiles to social and cultural factors [3]. These influence food preferences and eating styles. For example, females consume fewer calories than males, which shows that females tend to eat in a more feminine style [4].
Women consume more fruit and vegetables, legumes, and whole foods, but they also consume more sweets and cakes. Men tend to have more fat and protein rich foods and to drink more wine, beer, spirits and sweet carbonated drinks. In general, they show dietary behaviors potentially favouring over weight and obesity [5]. Literatures indicate that urbanization is one of the major causes of nutrition transition, which is governed by various factors such as dietary intake, food consumption and sociocultural practices [6]. This transition is root cause for increasing overweight and obesity. In recent days not only men, women are also at great risk of non-communicable diseases. Individual's diet and healthy practices decides their risks for noncommunicable diseases. The study of healthy and unhealthy practices among men and women, directly relates to their health status and risks for non-communicable diseases. This study along rural-urban gradient will correlates, its relationship with extent of urbanization. Hence, this study, carried out with the objective to compare food consumption practices between men and women across rural-urban interface of Bangalore. MATERIALS AND METHODS Methodological steps followed to carryout present investigation are as follows: Selection of Localities Rural-urban interface of the Bangalore comprises two transects, (north and south transects), which are defined as a common space for interdisciplinary research. The northern transect (N-transect) is a rectangular strip of 5 km width and 50 km length, the lower part of this transect cuts into urban Bangalore, and the upper part contains rural villages. The Southern transect (Stransect) is a polygon covering a total area of 300 km 2 . Rural-Urban interface was further divided into three sub regions viz., Rural, Transition and Urban areas based on the simplified Survey Stratification Index (SSI) by following the logic of the Urban-Rural Index which considered distance to the city centre (Vidhana Soudha) and percentage of built-up area [7]. This classification of regions, formed basis for selection of 300 middle income households based on purposive random sampling, in the rural-urban interface of Bangalore. Among these households 50 men and 50 women were interviewed for healthy and unhealthy habits from each gradient (rural, transition and urban). Data Collection A questionnaire was developed and pretested among selected localities for standardization. Data from men and women was collected through personal interviews, on dietary diversity, healthy and unhealthy habits. Individual's response towards, foods, health and nutrition related aspects was also collected and compared between men and women. Dietary Diversity Dietary diversity is the sum of the number of different food groups consumed over a given reference period [8]. It is considered as a proxy to food security. Dietary Diversity Scores (DDS) were calculated by summing the number of food groups consumed by the household members over the 24-hour recall period and expressed in percentage. Healthy and Unhealthy Habits A structured questionnaire comprised of questions related to ten healthy and unhealthy habits, related to routine activities were included and scores were recorded and expressed interms of percentage based on individual's responses. 
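As a concrete illustration of the scoring just described, the short sketch below computes a dietary diversity score as the percentage of food groups reported over the 24-hour recall period. The food-group list, variable names and example response are hypothetical and serve only to make the arithmetic explicit; the exact instrument used in the survey is not reproduced here.

# Illustrative only: food groups and responses are hypothetical.
FOOD_GROUPS = [
    "cereals", "pulses", "green leafy vegetables", "other vegetables",
    "fruits", "milk and milk products", "meat/fish/chicken", "eggs",
    "oils and fats", "sugars",
]

def dietary_diversity_score(consumed_groups):
    """Return the DDS as the percentage of listed food groups consumed
    during the 24-hour recall period."""
    consumed = {g for g in consumed_groups if g in FOOD_GROUPS}
    return 100.0 * len(consumed) / len(FOOD_GROUPS)

# Example: a respondent reporting five of the ten groups scores 50%.
print(dietary_diversity_score(
    ["cereals", "pulses", "oils and fats", "sugars", "milk and milk products"]))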
Health and Nutrition Apart from healthy and unhealthy habits, men and women were interviewed for their responses towards questions related to health and nutrition aspects, which are indirectly related to life style and risk factors for non-communicable disease. Statistical Analysis The collected data was pooled and analyzed by application of student 't' test to draw inferences based on study objectives. RESULTS AND DISCUSSION Results are presented under following headings: Dietary Diversity Across the rural-urban gradient, least DDS was recorded in transition area among men (48.0%), however among women it was in both rural and transition (47.7%). Findings revealed that, DDS score of men and women in rural was, 48.5 and 47.7 per cent respectively. It was observed that majority of the men consumed pulses, meat/fish/chicken, milk and milk products whereas, more women consumed green leafy vegetables and fruits. Quite same findings were evident in transition, but in urban, DDS score for male was slightly high (49.0%) compared to female (48.3%). It was surprising to note that, both men and women dietary diversity was below 50 per cent. Irrespective of gender, people mainly consumed five food groups (such as) cereals, pulses, oils, fats and sugars. But only few of them consumed protective food groups, which are most essential to regulate body mechanisms as they are good sources of vitamins and minerals. A research study conducted in selected South African towns, reported that, the peri-urban populations had limited dietary intake and were more food insecure because of high levels of poverty, unemployment, and lack of land. Peri-urban dwellers are therefore more sensitive to changes in incomes and food prices because they lack safety nets to absorb income or price shocks as they purchase more, rather than growing their own food. This compromises dietary diversity as they have limited access to diverse foods [9]. In the present study quite similar observations were made, as the least DDS score was observed in transition area both among male and female. Two main reasons were identified for slight differences in DDS score of men and women. First one is, irrespective of gradients (rural, transition and urban areas) it was observed that influence of socio-cultural practices on food choices and consumption (Such as, fasting on religious belief, avoiding certain foods during menstruation, compromising when the quantity of nutritious food prepared is less, with other foods, serving first men the family etc.) was more among women compared to men. Second one is occupational status, most of the men were employed and tend to consume outside food, whereas majority of the women were housewives and restricted with household food. Healthy Habit Score In rural maximum difference for healthy habit score was existed between men (50.8%) and women (44.0%). In rural 68 per cent of men were consuming meals on time regularly, whereas this was only 48 per cent among women. Milk and milk products were consumed by 90 per cent of rural men, however it was 78 per cent among rural women. Both in rural and urban, response of men towards daily consumption of vegetables was more (rural=86%, urban=90%) compared to women (rural=74%, urban=74%). It was noticed that, foods with additional health benefits (fenugreek seeds, flax seeds, green tea etc.) was consumed by greater number of women (20%) in urban compared to men (8.0%). In transition almost same score for healthy habits was obtained for both men and women. 
Among urban men, the healthy habit score (50.0%) recorded was slightly higher compared to women (48.4%). However, these differences were statistically non-significant irrespective of study areas and between the genders. A study on gender differences in food choices reported that women were more likely than men to avoid high-fat foods, to eat fruit and fibre, and to limit salt (to a lesser extent) in almost all of the 23 countries. They were also more likely to be dieting and attached greater importance to healthy eating. Gender differences in food choices therefore appear to be partly attributed to women's greater weight control involvement and partly to their stronger beliefs in healthy eating [10]. These findings correlate with the present investigation (except in transition). Women preferred whole fruits over fruit juice, and responded positively towards consumption of foods with additional health benefits and towards involvement in exercise. These responses were found to be more common among women and increased towards urban areas. Indirectly, this indicates that an urbanized lifestyle and related health issues may increase health consciousness among women compared to men. A study on gender differences in eating behaviour reported that eating behaviour shows differences between men and women and is controlled by social, biological and familial factors. Healthy eating behaviour is very important for both men and women to avoid problems of obesity and overweight. Family members, friends, media and behavioural control of the individual are the main factors in developing healthy eating behaviour [6]. The present study revealed a similar healthy habit pattern among men and women across the rural-urban gradient.
Fig. 2. Dietary diversity score comparison between male and female across the rural-urban interface of Bangalore
Unhealthy Habit Score In urban areas, consumption of outside food (46.0%), very late dinner (54.0%) and non-food habits (38.0%) are the reasons noticed for the higher average unhealthy score among men, which was significantly more (F=5.46**) compared to transition and rural men. A statistically significant difference in average unhealthy habit score between men and women was noticed only in urban areas (t=1.96*). Gender differences are influenced by socio-demographic factors in different countries. These differences may be more consistent among less educated and rural subgroups because of traditional beliefs. On the other hand, the differences tend to be smaller in developed countries [11]. These statements may be true for gender differences with respect to unhealthy habit score in the present study. Certain unhealthy food consumption practices among women, in rural and transition areas, were influenced by sociocultural practices and traditional beliefs. However, a few unhealthy practices influenced by urbanized lifestyle were acquired more by urban men than women. Health and Nutrition Aspects Health and nutrition aspects indicated that, among the study regions, the majority of women knew about the type of foods to be consumed for good health. In support of this, most of the women had altered their regular food habits, especially in transition (32%) and urban (38%) areas. Fasting on religious belief was more practiced by transition women (56%), compared to rural (26%) and urban (28%) women, which was statistically significant (χ2=6.205*). Consumption of health supplements was significantly more (χ2=10.270**) among women, especially in urban areas (34%), compared to men. Rural women's preference for preparation of foods at home was least considered (54.5%). Morbidity in the past two months was more common among women, and the majority was observed in the transition area. The number of individuals stressed due to various reasons was higher among men in rural areas, whereas in transition areas it was women. An almost equal response was obtained from men and women in the urban area regarding their stress condition. Consumption of tea or coffee was more common among men along the rural-urban gradient. However, these observations were found to be non-significant across the rural-urban interface among men and women. These observations reveal that, even though women know about healthy foods, morbidity was higher among women and their food preferences were less considered compared to men. It was also noticed that the majority of women had altered their regular foods and were taking health supplements, which are generally practiced due to health conditions. Especially in the transition area, the majority of women responded that they were stressed and reported morbid conditions in the past two months. This may be correlated to their frequent fasting on religious beliefs and other unhealthy practices.
According to a study, noncommunicable diseases, that taken all together represent the first cause of death worldwide, are greatly influenced by individual behavior as regards, in particular, dietary habits and physical activity. These factors are both greatly influenced by gender, which is, consequently, a main determinant of human health [12]. Gender roles are socially constructed: the behaviors, activities, and attributes considered appropriate for men and women are specific to a given society. Answering the question of why women are more likely than men to be malnourished requires a gender analysis [13]. Present study reveals, food consumption practices of women is influenced by their socially constructed roles and cultural patterns. Nutritional status is affected when; socially constructed gender roles of men and women interact with their biological roles. Additionally, a recent study highlights that, gender has been recognized as an important factor that influences lifestyle habits and consequently, the onset and course of chronic diseases [14]. CONCLUSION Present study reveals that, food consumption practices of men and women differ only in few aspects across rural-urban interface of Bangalore. Unhealthy habits among men increases with urbanization. Unhealthy food habits are highest among rural women which is gradually decreased towards urban, indicating urbanized environment and related health problems are increasing health consciousness especially among women. Whereas, reverse scenario is evident among men, exhibited by increased unhealthy habits towards urban. Overall it can be concluded that, food consumption practices of men and women are poor with one or the other aspects considered. Further, comparison of dietary intake and physical activity and family history of NCDs of both men and women is of added value to future research.
Dosing and formulation of antenatal corticosteroids for fetal lung maturation and gene expression in rhesus macaques Antenatal corticosteroids (ANS) are the major intervention to decrease respiratory distress syndrome and mortality from premature birth and are standard of care. The use of ANS is expanding to include new indications and gestational ages, although the recommended dosing was never optimized. The most widely used treatment is two intramuscular doses of a 1:1 mixture of betamethasone-phosphate (Beta-P) and betamethasone-acetate (Beta-Ac) – the clinical drug. We tested in a primate model the efficacy of the slow release Beta-Ac alone for enhancing fetal lung maturation and to reduce fetal corticosteroid exposure and potential toxic effects. Pregnant rhesus macaques at 127 days of gestation (80% of term) were treated with either the clinical drug (0.25 mg/kg) or Beta-Ac (0.125 mg/kg). Beta-Ac alone increased lung compliance and surfactant concentration in the fetal lung equivalently to the clinical drug. By transcriptome analyses the early suppression of genes associated with immune responses and developmental pathways were less affected by Beta-Ac than the clinical drug. Promoter and regulatory analysis prediction identified differentially expressed genes targeted by the glucocorticoid receptor in the lung. At 5 days the clinical drug suppressed genes associated with neuronal development and differentiation in the fetal hippocampus compared to control, while low dose Beta-Ac alone did not. A low dose ANS treatment with Beta-Ac should be assessed for efficacy in human trials. ANS exposure in late preterm newborns (with gestational ages between 34 0 and 36 6 weeks) 3 . In experimental studies ANS caused hippocampal degeneration and HPA axis dysfunction in macaques [11][12][13] and decreased fetal and brain growth in sheep and rats [14][15][16] . These findings were consistent with observational studies showing that ANS impaired fetal growth 17 and decreased neuronal density in the hippocampus of newborns 18 . As with any medication, treatment strategies should maximize the benefits and minimize potential toxic effects. ANS are recommended by the World Health Organization to be given to the mother as intramuscular (IM) dexamethasone-phosphate (Dex-P), or a combination of equal parts of Beta-P and Beta-Ac (clinical drug) 19 . Phosphate and acetate formulations have distinct pharmacokinetic profiles and effects 20,21 . Phosphate preparations are rapidly dephosphorylated resulting in early high plasma concentrations and short half-lives in the mother and fetus. Beta-Ac is slowly deacetylated resulting in low peak plasma levels and long half-life 20 . In fetal sheep the clinical drug was superior to Dex-P or Beta-P alone for causing physiologic and biochemical maturation of the lung 22 . In contrast, a single weight-based dose of Beta-Ac improved maturation comparably to the clinical drug, with a decreased fetal exposure to Beta 23 . Any pharmacological intervention for treating a pregnant woman to benefit the fetus should be considered high risk. To translate a new ANS therapy from animal models to humans, a validation in primates is desirable. We evaluated a lower dosing strategy using Beta-Ac for fetal lung maturation and transcriptional effects on the fetal lung and brain in the Rhesus macaque. Results Beta-Ac enhances fetal lung maturation at 5 days. 
To compare Beta-Ac with the clinical drug for fetal lung maturation, pregnant Rhesus macaques were treated with a single IM injection of either Beta-Ac (0.125 mg/kg or 0.06 mg/kg), saline (control) or the clinical drug (0.25 mg/kg) and delivered 5 days after treatment (Table 1). ACS induce fetal lung maturation by a combination of increased surfactant production, structural changes, and improved water clearance resulting in increased lung compliance. Both the clinical drug and Beta-Ac (0.125 mg/kg) improved static lung compliance (Fig. 1A,B). Beta-Ac 0.06 mg/kg inconsistently increased lung compliance with 3 out of 5 treated animals having pressure-volume curves similar to control. The main lipid component of surfactant is saturated phosphatidylcholine (SatPC), which can be used as a marker of lung surfactant content and indicator of biochemical lung maturation 24 . Both Beta-Ac (0.125 mg/kg) and the clinical drug increased the SatPC in the bronchoalveolar lavage fluid compared to control, while the increase with Beta-Ac (0.06 mg/kg) was not significant (Fig. 1C). Confocal microscopy of immunofluorescent staining of the fetal lung for the epithelial cell marker TTF-1, smooth muscle actin (SMA), and type II alveolar cell markers pro-SPC, and ABCA3 showed no differences in the relative numbers of cells positive for these markers 5 days after treatment (Supplemental Fig. 1). Further, there were no differences in the proportion of cells expressing the cell cycle marker Ki-67. Transcriptomic effects of the clinical drug and Beta-Ac on the fetal lung. RNA-sequencing analyses were performed on whole lung tissue from fetuses delivered 4 hours and 5 days after treatment with the clinical drug and 6 hours and 5 days after Beta-Ac (Fig. 2). Principal component analysis of transcriptomic changes separated animals 4 h after treatment with the clinical drug, 6 h after Beta-Ac, and control animals, with the animals treated with the clinical drug being the most distant from the others ( Fig. 2A). In contrast, by 5 days the clinical drug and Beta-Ac overlapped (Fig. 2B). Further, the correlation heatmap showed the most distinct transcriptome profiles for animals treated with the clinical drug 4 h prior to delivery (Fig. 2C). Differential expression analysis was performed using the EdgeR package on R with differentially expressed genes determined using p-value < 0.05, q-value < 0.1 and fold-change > 1.5. At the times of high fetal plasma concentration the clinical drug caused differential expression of 1,779 genes while Beta-Ac caused the differential expression of only 393 genes, of which 372 were common to both treatments. The common differentially expressed genes had similar magnitude of fold changes, with a Pearson correlation value of 0.87 (Fig. 3A,B). In contrast, after 5 days the clinical drug and Beta-Ac differentially expressed a similar number of genes (318 and 419 genes, respectively), with 185 genes differentially expressed in common. These commonly regulated genes at 5 days also had a similar magnitude of differential expression and highly correlated log fold-change values (Fig. 3A,C). By transcription factor binding site prediction and protein regulation or interaction prediction, several of the common top regulated genes at both timepoints have either a glucocorticoid receptor enhancer motif or are predicted to be regulated or interact with the NR3C1 receptor. 
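The selection rule stated above (fold-change > 1.5, p-value < 0.05, q-value < 0.1, applied after excluding genes with fewer than 7 reads) can be written out explicitly as in the sketch below. This is only an illustrative Python version with a hypothetical results table and column names; the analysis reported here was carried out with the edgeR package in R.

import numpy as np
import pandas as pd

# Hypothetical edgeR-style results table, one row per gene, with columns
# mean_count, logFC (log2 fold-change), PValue and qvalue.
res = pd.read_csv("edger_results.csv", index_col=0)

MIN_READS = 7        # genes with fewer reads are excluded before testing
FC_CUTOFF = 1.5      # fold-change threshold, applied on the log2 scale
P_CUTOFF = 0.05
Q_CUTOFF = 0.10

expressed = res[res["mean_count"] >= MIN_READS]
is_de = (
    (expressed["logFC"].abs() >= np.log2(FC_CUTOFF))
    & (expressed["PValue"] < P_CUTOFF)
    & (expressed["qvalue"] < Q_CUTOFF)
)
de_genes = expressed[is_de]
print(f"{len(de_genes)} differentially expressed genes "
      f"({(de_genes['logFC'] > 0).sum()} induced, {(de_genes['logFC'] < 0).sum()} suppressed)")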
Several common regulated genes are also reported in the literature to be associated with "lung expression", "respiratory disease", and "lung cell line" (Fig. 3A). www.nature.com/scientificreports www.nature.com/scientificreports/ Genes differentially regulated by either of the ANS treatments were associated with several similar biological processes. At the early time point induced genes were associated with "cellular localization", "developmental process", "ion and protein transport" while suppressed genes were associated with "cellular morphogenesis", "chemotaxis", and "immune responses". Despite the similarities, there were large differences between biological processes and pathways that were differentially regulated in the lung by the clinical drug and Beta-Ac relative to control at the time of peak drug concentration in the fetal plasma (Fig. 4). The clinical drug had larger inhibitory effects on immune system processes and development, including suppression of Th1, Th2 and Th17 lymphocyte differentiation, lymphocyte proliferation and activation, and cytokine signaling. The clinical drug also modulated biological processes related to organ development and morphogenesis, which were only weakly associated with Beta-Ac treatment. Specifically, genes inhibited by the clinical drug were strongly enriched for angiogenesis and vascular development, endothelial cell proliferation, epithelial tube morphogenesis, and canonical Wnt signaling. Genes differentially expressed by the clinical drug at the early time point may not be contributing to lung maturational responses as the 2 treatments resulted in similar improvement in lung gas volume and similar RNA profiles at 5 days. Transcriptomic effects in the fetal hippocampus at 5 days. To evaluate potentially toxic effects of ANS on the fetal brain development we compared the transcriptome of the fetal hippocampus 5 days after ANS treatment. The principal component analysis and the correlation heatmap demonstrate that control animals cluster separately and distant from animals treated with the clinical drug (Fig. 5A,B). In contrast, animals treated with Beta-Ac 0.125 mg/kg are interspersed between the other groups but with wide variation among samples. By differential expression analysis 1612 genes were differentially regulated (788 induced, 824 suppressed) in the hippocampus by the clinical drug at 5 days compared to control (Fig. 5C). There were no statistical differences between gene expression levels in the Beta-Ac treated animals compared to control. Genes induced by the clinical drug were associated with the regulation of synapse maturation and semaphorin interactions, among others. Suppressed genes were associated with regulation of neurogenesis, neuron, projection development, cellular morphogenesis and nervous system development (Figs 6 and 7). Discussion Since 1972, use of ANS for women at risk of preterm labor at 24-34 weeks gestation decreased the incidence of respiratory distress syndrome and mortality in preterm newborns, but the formulation and dosing was never optimized by pharmacokinetic analyses or for safety 2,25 . In fact, the expanding use of ANS beyond the 24 to 34 week gestational has identified previously unrecognized risks such as hypoglycemia in late preterm infants and increased mortality in middle and low-resource countries 3,9 . 
We report that fetal lung maturation in the primate can be achieved with a single weight-based dose of Beta-Ac that avoided fetal exposure to higher Beta levels from the Beta-P component of the clinical drug. Beta-Ac minimized early transcriptional changes in the fetal lung associated with immunity and morphogenesis, and at 5 days in the fetal hippocampus associated with nervous system development despite similar pulmonary maturation. As even routine short-term use of corticosteroids is associated with risks, and complications are dose related 26 , the lowest fetal exposure sufficient to get the maturational benefit should be the goal. Different steroid formulations and dosing intervals are used around the world based on a total dose of 24 mg with minimal experimental or clinical data to support equivalency 25 . We previously showed with fetal sheep that a single dose of Beta-Ac that was 25% of the standard 2-dose treatment with the clinical drug yielded similar improvements in gas exchange and lung compliance 2 days after treatment 23 . This treatment strategy in sheep exposes the fetus to a peak drug concentration that is 10% of the peak drug concentration with the clinical drug, while providing continuous fetal exposure to betamethasone for 24 hours 23 . In the primate, treatments with Beta-Ac alone and the clinical drug resulted in similar mechanical induction of lung maturation as demonstrated by the pressure-volume curves and similar increase in the surfactant content, measured by the SatPC concentration in the BALF. Increased surfactant production by the premature lung decreases the severity and incidence of RDS in preterm newborns. Moreover, there were no differences between the lung transcriptomes 5 days after treatment. Interestingly, the proportions of cells expressing the epithelial marker TTF-1 or the type 2 alveolar cell markers, ABCA3 and SPC, were similar among treatment groups and control at 5 days, suggesting that ANS induced transient changes in expression of mRNA but not persistent cellular differentiation to increase the number of surfactant producing cells. This finding is consistent with the observation that the effects of ANS are transient and clinical trials have not consistently demonstrated benefit in decreasing RDS beyond 7 days after treatment 1,27 . The early transcriptomic changes in the lung 4 to 6 hours after ACS exposure offers new insight into the complex signaling mechanism of lung maturation induced by corticosteroids. While most studies have focused on the effects of ACS on the endpoints of increased surfactant production and improved lung structure here we show that ACS has large effects on developmental, vascularization, cell signaling, and cell cycle pathways. These multiple effects likely all contribute to the improved lung function seen in preterm newborns after ACS treatment. More interestingly, 5 days after ACs treatment, transcriptional changes were limited and were associated with ion transport and cytoskeletal organization. These changes could be associated with continued structural and tissue water balance maturation. However, most of the effects of ACS on gene expression had disappeared by 5 days. We provide new information that the high peak drug level from the clinical drug was associated with more suppression of lung immune responses, angiogenesis and developmental pathways based on transcriptome analyses. 
While the clinical drug regulated a larger number of genes and pathways than Beta-Ac, the genes that were commonly differentially expressed had similar magnitudes of change, indicating that the additional steroid exposure may not be contributing to more maturational signaling. Changes in expression of other genes may www.nature.com/scientificreports www.nature.com/scientificreports/ only have harmful harmful side effects. While in high resource environments ANS has not been associated with increased risk of maternal or neonatal infection or sepsis 28 , in the subgroup of patients with preterm rupture of membranes the risk of chorioamnionitis was increased with repeated treatments with ANS 29 . In middle and low-resource countries the increased infant mortality associated with ANS treatment may be caused by increased the risk of infection 10 . Due to cost and availability, the international cluster-randomized trial used Dex-P given as 4 intramuscular doses of 6 mg every 12 hours 9 . This dosing strategy, provides fetal exposure to ANS for greater than 48 hours but with 4 high peak fetal plasma levels that are not necessary in animal models 22,30 . Avoiding high fetal concentrations of corticosteroids should minimize the treatment effects on the maternal and fetal immune responses and the risk of perinatal infection. More concerning are the reports of direct effects of glucocorticoids on the fetal brain. Preterm newborns exposed to glucocorticoids had decreased number of neurons in the hippocampus 12 , consistent with a previous report of increased apoptosis of neuron in the hippocampus of macaques after treatment with glucocorticoids 31 . We found that fetal exposure to the clinical drug resulted in suppression of genes associated with neurogenesis and nervous system development 5 days after treatment. There were no differences in the hippocampus transcriptome between animals treated with Beta-Ac compared to control but we did observe a wide variability in the Beta-Ac group with some animals clustering with controls and others with the clinical drug. This variability could be due to individual variations regarding drug metabolism affecting the fetal exposure to the treatment or genetic variants affecting the molecular response to corticosteroids. In our limited sample size the sex of the animal did not seem to affect the response. The most recent meta-analysis of antenatal corticosteroids showed www.nature.com/scientificreports www.nature.com/scientificreports/ a trend towards reduced neurodevelopmental impairment after a single course of ANS in infants less than 34 weeks gestation in high resource countries 28 . There are no data on neurodevelopmental outcomes for late preterm infants where the clinical benefits of ANS are small and may not outweigh the risks. This benefit to risk ratio may be even less for elective C-sections. Even more problematic are reports of increased renal disease, obesity and metabolic syndrome at advanced ages in sheep and baboons exposed as fetuses to ANS [32][33][34] . These effects cannot be evaluated in human populations being treated with ANS today and may be at long-term risk of fetal effects on adult outcomes. Here we demonstrate that a clinically relevant dose of ANS used for fetal lung maturation caused profound and early changes in transcriptional networks that control lung development and immunity and persistent changes on brain development pathways. 
Many of the changes can be avoided by low-dose Beta-Ac while preserving the physiological maturational effects in a nonhuman primate model. This strategy should be considered for clinical trials to optimize ANS treatment in preterm infants and decrease potential toxic effects. Methods Animals. The Institutional Animal Care and Use Committee at the University of California Davis approved all animal procedures, which were performed at the California National Primate Research Center according to the approved protocol. Time-mated pregnant Rhesus macaques were given the clinical drug as intramuscular Celestone Soluspan ® 0.25 mg/kg (6 mg/ml containing 3 mg/mL betamethasone as Beta-P and 3 mg/mL of Beta-Ac; Merck Sharp & Dohme, Kenilworth, NJ), 0.125 mg/kg Beta-Ac (a gift from Merck Sharp & Dohme, Kenilworth, NJ), 0.06 mg/kg Beta-Ac or saline prior to preterm delivery at 132 ± 2 days gestational age (term is 165 days). To investigate the early transcriptional effects, fetuses were delivered at the time of peak fetal blood Beta levels based on measurements in fetal sheep 23 : 4 h after the clinical drug and 6 h after Beta-Ac (n = 3 animals/ group). Lung samples were frozen for RNA-sequencing. To assess the maturational effects of the interventions, other groups of fetuses were treated 5 days before delivery at 132 ± 2 days of gestation (n = 5-8 animals/group). After delivery, pressure-volume curves were measured with a syringe and pressure manometer by inflating the lungs to 40 cm H 2 O pressure and followed by deflation with measurements of lung volumes. The right upper lobe of the fetal lung was inflation fixed with formalin at 30 cm H 2 O pressure for histology; tissue samples from the right lower lobe of fetal lung and the hippocampus were snap frozen for RNA-sequencing Saturated phosphatidylcholine and cortisol measurements. Alveolar lavage fluid was recovered from the left lung and lipids were extracted with chloroform-methanol (2:1). Saturated phosphatidylcholine (SatPC) was isolated after exposure to osmium tetroxide and quantified by phosphorus assay as previously described 35 . Cord blood plasma cortisol levels were measured using an ELISA kit (EA65; Oxford Biomedical Research, Rochester Hills, MI). Immunofluorescence and confocal microscopy. Sections from paraffin-embedded tissues underwent heat-assisted antigen retrieval with citrate buffer (pH 6.0), followed by blocking with donkey or goat serum and incubation with primary antibodies overnight ( Table 2). The following day, sections were incubated with species-specific Alexa Fluor antibody (Life Technologies, Carlsbad, CA), followed by DAPI (Life technologies, Carlsbad, CA, dilution 1:2000). Sections were mounted with ProLong Gold (Life technologies, Carlsbad, CA). Stained slides were imaged by confocal microscopy for co-localization of fluorescent antibodies at 40x magnification, 1024 × 1024 pixels resolution on a Nikon Eclipse A1RSi inverted microscope (Nikon Instruments Inc., Melville, NY). Confocal images were analyzed using Nikon NIS Elements software (Nikon Instruments Inc., Melville, NY), for object count and colocalization. Statistical analyses. Statistical analyses of morphological and immunofluorescence data were performed with GraphPad Prism software (Carlsbad, CA). Values for continuous variables were compared by t-test or ANOVA followed by Holm-Sidak post-hoc analysis for multiple comparisons. Data are presented as bars with individual data points and standard deviation. RNA isolation and sequencing. 
Total RNA was extracted from frozen lung tissues using the RNeasy Universal Mini Kit (Qiagen, Valencia, CA) according to the manufacturer's instructions. RNA quality and integrity were verified using the Agilent 2100 Bioanalyzer (Agilent, Agilent Technologies, Santa Clara, CA). www.nature.com/scientificreports www.nature.com/scientificreports/ RNA-sequencing was performed by the Cincinnati Children's Hospital Medical Center DNA Sequencing and Genotyping Core with a read depth of 20-30 million reads per sample for 75 bp paired-end reads. The raw sequence reads in FASTQ format were aligned to the Rhesus (Macaca mulatta) genome build MMUL1.0 using Bowtie 2 36 . Reads were counted using featureCounts 37 . After checking data quality, raw read counts were filtered to exclude genes with low expression (<7 reads) and normalized using the trimmed mean of M values method 38 . Differential expression analyses comparing treatment groups to control and between each other were performed using EdgeR 39 followed by false discovery rate adjustment using Storey's method 40 . Genes were considered differentially expressed based on their fold-change relative to control (= or >1.5), p-value (<0.05) and q-value (<0.1). Antibody Cellular marker Species Dilution Functional enrichment and pathway analysis. Differentially expressed genes were used for functional enrichment analysis of Gene Ontology and pathway terms using the ToppCluster web server 41 . Only unique terms associated with either induced or suppressed genes and at least 2 genes are reported. Negative log p-values represent terms associated with suppressed gene expression and positive log p-values are associated with induced gene expression. Promoter GRE cis-element was scanned using the Msig-DB motif gene sets within 4 kb around their transcription starting sites (http://software.broadinstitute.org/gsea/msigdb). The evidence that NR3C1 regulates or interacts with genes in the top hits list was obtained via literature mining using Genomatix co-citation database (Genomatix Inc.) and IPA knowledge base (Ingenuity Pathway Analysis, QIAGEN). Annotation of genes expressed in the lung or associated with respiratory disease were collected from IPA knowledge base. Data Availability The gene expression data discussed in this publication have been deposited in NCBI's Gene Expression Omnibus and are accessible through GEO Series Accession Number GSE118438 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE118438).
Mechanistic insights into the conversion of flavin adenine dinucleotide (FAD) to 8-formyl FAD in formate oxidase: a combined experimental and in-silico study Formate oxidase (FOx), which contains 8-formyl flavin adenine dinucleotide (FAD), exhibits a distinct advantage in utilizing ambient oxygen molecules for the oxidation of formic acid compared to other glucose-methanol-choline (GMC) oxidoreductase enzymes that contain only the standard FAD cofactor. The FOx-mediated conversion of FAD to 8-formyl FAD results in an approximate 10-fold increase in formate oxidase activity. However, the mechanistic details underlying the autocatalytic formation of 8-formyl FAD are still not well understood, which impedes further utilization of FOx. In this study, we employ molecular dynamics simulation, QM/MM umbrella sampling simulation, enzyme activity assay, site-directed mutagenesis, and spectroscopic analysis to elucidate the oxidation mechanism of FAD to 8-formyl FAD. Our results reveal that a catalytic water molecule, rather than any catalytic amino acids, serves as a general base to deprotonate the C8 methyl group on FAD, thus facilitating the formation of a quinone-methide tautomer intermediate. An oxygen molecule subsequently oxidizes this intermediate, resulting in a C8 methyl hydroperoxide anion that is protonated and dissociated to form OHC-RP and OH−. During the oxidation of FAD to 8-formyl FAD, the energy barrier for the rate-limiting step is calculated to be 22.8 kcal/mol, which corresponds to the required 14-hour transformation time observed experimentally. Further, the elucidated oxidation mechanism reveals that the autocatalytic formation of 8-formyl FAD depends on the proximal arginine and serine residues, R87 and S94, respectively. Enzymatic activity assay validates that the mutation of R87 to lysine reduces the kcat value to 75% of the wild-type, while the mutation to histidine results in a complete loss of activity. Similarly, the mutant S94I also leads to the deactivation of enzyme. This dependency arises because the nucleophilic OH− group and the quinone-methide tautomer intermediate are stabilized through the noncovalent interaction provided by R87 and S94. These findings not only explain the mechanistic details of each reaction step but also clarify the functional role of R87 and S94 during the oxidative maturation of 8-formyl FAD, thereby providing crucial theoretical support for the development of novel flavoenzymes with enhanced redox properties. Supplementary Information The online version contains supplementary material available at 10.1186/s40643-024-00782-4. Introduction Formic acid, characterized by its high gravimetric energy density and substantial reduction equivalents, is a fundamental raw material in the chemical industry.Formate oxidase (FOx) from Aspergillus oryzae is a formate-specific flavoprotein oxidase that belongs to the glucosemethanol-choline (GMC) oxidoreductase superfamily (Doubayashi et al. 2011;Maeda et al. 2009a).FOx exhibits considerable biocatalytic potential by efficiently utilizing ambient oxygen to oxidize formic acid, concurrently producing hydrogen peroxide (H 2 O 2 ) in situ.This reaction is known to enhance the efficiency of H 2 O 2 utilization when coupled with monooxygenases or peroxidases (Robbins et al. 2018a).The high atom economy of FOxcatalyzed reactions and the straightforward separation of the byproduct, CO 2 , render it well suitable for H 2 O 2dependent biosynthetic reactions (Li et al. 
2022).Furthermore, during the catalytic cycle, the flavin cofactor in FOx is reduced to hydroquinone, a form recently shown to be a highly controllable and effective radical source for light-driven enzyme-catalyzed reversible addition-fragmentation chain transfer (RAFT) polymerization.The high turnover rate of FOx holds promise for its extensive application in industrial biocatalysis (Heath and Turner 2022;Lao et al. 2022;Tao et al. 2024). FOx exhibits distinct kinetic characteristics compared to its counterparts within the GMC superfamily (Heuts et al. 2009).Specifically, the turnover rate of FOx gradually increases over tens of hours following the purification (Doubayashi et al. 2019).Recent studies have reported that FOx contains a rare 8-formyl flavin adenine dinucleotide (8-formyl FAD) cofactor, instead of FAD found in other GMC superfamily members (Maeda et al. 2009b;Wongnate and Chaiyen 2013).This covalently modified cofactor is generated in situ from FAD through self-oxidation mechanism.Consequently, it is hypothesized that the oxidative maturation of the cofactor contributes to the progressive enhancement of the FOx turnover number (Robbins et al. 2017).Notably, FOx is not the only protein reported to contain 8-formyl FAD (Edmondson 1974).The heterodimeric human electron-transferring flavoprotein (hETF), isolated under pH 8.5, has shown a conversion of its cofactor from FAD to 8-formyl FAD, accompanied by 50% reduction in turnover number (Augustin et al. 2018).Additionally, site-directed mutagenesis studies of lactate oxidase (LOx) have demonstrated that the R286L-LOx variant contains 8-formyl flavin mononucleotide (8-formyl FMN), but this variant exhibits almost no activity (Yorita et al. 2000).Thus, among all known enzymes containing 8-formyl FAD, the 8-formylation modification appears to only enhance the catalytic ability of FOx.Redesigning the active sites of FAD-dependent enzymes to accommodate 8-formyl FAD presents an optimistic and potential strategy for enhancing the catalytic function.However, it is crucial to assess whether the microenvironment within enzyme's active pocket can facilitate the formation of 8-formyl FAD, and equally important to ensure that 8-formyl FAD possesses the appropriate charge distribution and conformation to realize its catalytic ability.Therefore, elucidating the mechanism behind the formation and functional engagement of 8-formyl FAD in FOx will undoubtedly facilitate the development of flavoprotein oxidases with enhanced oxidative capability. Extensive investigations using enzyme kinetics, sitedirected mutagenesis, and spectroscopic analysis have sought to elucidate the formation mechanism of 8-formyl FAD cofactor in FOx and its catalytic mode (Robbins et al. 2017(Robbins et al. , 2018b;;Willot et al. 
2020).Despite these efforts, the precise mechanistic details are still not fully understood.A central debate focuses on the potential roles of specific amino acids in the FAD autoxidation reaction and their contributions to this process.A hypothesis suggests that the deprotonation of methyl group at the C8 atom of FAD by a general base facilitates the formation of a quinone-methide tautomeric intermediate.This intermediate is then considered to be stabilized through a covalent linkage formed by residue S94 at the C8 atom of 8-formyl FAD.The process culminates in the generation of 8-formyl FAD through a series of proton and electron transfer, where water molecules and general bases facilitate the proton transfer, and oxygen molecules act as electron acceptors (Augustin et al. 2018;Heuts et al. 2009).However, two critical questions remain unresolved in this model.Firstly, the identity of the general base remains unclear-is it the residue R87 or a hydroxide ion in solution?Notably, crystallographic analysis has shown that three residues, S94, R87 and L97, are positioned within 10.0 Å of the C8 atom (Maeda et al. 2010).Secondly, the mechanism by which the deprotonated state of S94 is achieved requires further clarification, as this state is necessary for covalent binding to FAD (Blay and Pei 2019). An alternative reaction pathway for 8-formylation within the oxidation mechanism of riboflavin-5'-phosphate to 8-demethyl-8-aminoriboflavin-5'-phosphate (AFP) catalyzed by AFP synthase has been proposed (Konjik et al. 2017).In this model, the formylated flavin transiently appears as an intermediate, with hydroxide ions acting as general bases to deprotonate the methyl group at the C8 atom of flavin.Residues near the O4 atom are suggested to stabilize the quinone-methide tautomeric intermediate.Subsequent oxidation by oxygen molecules directly yields the 8-hydroperoxide anion of riboflavin-5'-phosphate, which ultimately converts into 8-formyl riboflavin-5'-phosphate in the presence of hydroxide ions and water at the active site.Although this alternative pathway offers a more streamlined reaction mode, it fails to address whether residues near the C8 atom are involved in the reaction process.Therefore, getting deeper insight into the 8-formyl FAD formation through a more rigorous analysis of the involved processes will improve the utilization of this cofactor and deepen our understanding of flavin cofactor modification. 
In this study, we conducted a comprehensive investigation into the mechanism of FAD oxidation to 8-formyl FAD in FOx by integrating QM/MM umbrella sampling (US) simulation, molecular dynamics (MD) simulation, and site-directed mutagenesis.Specifically, our analysis meticulously evaluated the reaction coordinates (RCs) associated with the residues R87 and S94, which are directly involved in the oxidation process, as well as RCs for reactions not involving these residues directly.Additionally, we investigated the functional roles of R87 and S94 using site-directed mutagenesis and MD simulation.Our comprehensive analysis elucidated all intermediates in the oxidative maturation of 8-formyl FAD, quantified the free energy barrier for each reaction step, and identified the critical rate-limiting step in the pathway.Using the QM/MM US method, the essential role of 8-formyl FAD was elucidated in the catalytic functionality of FOx.Overall, our findings provide new insights into the formylation modification of FAD and enhance our understanding of flavin modification.These results also offer theoretical guidance for the development of modified flavin-dependent oxidases, which are crucial for biomanufacturing applications. The central role of 8-formyl FAD in FOx-mediated conversion of formate to CO 2 To assess whether FAD bound to FOx could self-catalytically convert to 8-formyl FAD in vitro, we monitored the UV-visible absorption spectra of the freshly purified enzyme over time (Fig. 1A).A comparison of the absorption spectrum of wild-type (WT) FOx after 14 h of incubation with that of fresh purification revealed a maximal increase of absorbance at 512 nm, and peaks at shorter wavelengths exhibited a blue shift (Fig. 1B).These results indicated the conversion of FAD to 8-formyl FAD within the active site of FOx without the addition of exogenous compounds.The formate oxidizing activity of WT FOx, as shown by the k cat value, increased from 78 ± 5.2 to 102 ± 5.7 s − 1 over 14 h (Table S1).After approximately 24 h, the enzyme began to aggregate and precipitate, resulting in a marked decrease in formate oxidizing activity. To elucidate the differences in the efficiency of formate oxidation catalyzed by FOx bound to FAD and 8-formyl FAD, we subsequently performed QM/MM US simulation.We defined two reaction coordinates, RC1 and RC1', as the distances from the hydrogen atom of formate to the N5 atom of 8-formyl FAD and FAD cofactors, respectively (Table S2).The free energy profiles and the two-electron oxidation mechanism are depicted in Fig. 2. It was observed that the hydride transfer from formate to 8-formyl FAD surmounts an energy barrier of 15.1 kcal mol − 1 , which is lower than the 17.8 kcal mol − 1 barrier for the FAD cofactor.Applying Eyring's equation, and using the k cat value for FOx with 8-formyl FAD bound (102 ± 5.7 s − 1 ), we estimated an activation energy of Fig. 
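The conversion between a turnover number and an apparent activation free energy invoked above follows from the Eyring equation, ΔG‡ = RT ln(kB T / (h kcat)). The short calculation below assumes T = 298 K and a transmission coefficient of 1, so the resulting values (about 14.7 and 16.0 kcal/mol for kcat of 102 and 11 s-1) are indicative rather than exact reproductions of the reported estimates of roughly 15.0 and 16.5 kcal/mol, which depend on the temperature actually used.

import numpy as np

KB = 1.380649e-23      # Boltzmann constant, J/K
H  = 6.62607015e-34    # Planck constant, J*s
R  = 1.987204e-3       # gas constant, kcal/(mol*K)
T  = 298.0             # assumed temperature, K

def activation_free_energy(kcat):
    """Apparent Delta-G‡ in kcal/mol from a first-order rate constant in s^-1."""
    return R * T * np.log(KB * T / (H * kcat))

print(activation_free_energy(102.0))  # ~14.7 kcal/mol for 8-formyl FAD-bound FOx
print(activation_free_energy(11.0))   # ~16.0 kcal/mol for FAD-bound FOx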
Mechanism of self-catalytic conversion of FAD to 8-formyl FAD

Structural data and mutagenesis experiments on AFP synthase (RosB) have elucidated the roseoflavin biosynthesis pathway and proposed a corresponding oxidative reaction mechanism (Konjik et al. 2017). This mechanism reveals that the C8 methyl proton of riboflavin-5'-phosphate (RP) does not interact directly with any catalytic amino acids. Instead, it is transferred directly to a deprotonated water molecule within an anion hole. This catalytic water molecule, acting as a general base, deprotonates the C8 methyl group on the isoalloxazine ring of RP, leading to the formation of a quinone-methide tautomer intermediate. The intermediate is stabilized by hydrogen bond interactions between N1 and O2 of the isoalloxazine ring and the positively charged residues of RosB. An O2 molecule subsequently oxidizes the quinone-methide intermediate, resulting in a C8 methyl hydroperoxide anion that is protonated by a nearby H2O molecule. This C8 methyl hydroperoxide then dissociates to form OHC-RP and OH⁻. Building on the proposed oxidative reaction mechanism for RosB, we investigated the mechanistic pathways of the self-catalytic conversion of FAD to 8-formyl FAD within the noncovalent FOx:FAD complex through DFT-based QM/MM US simulation. The corresponding reaction coordinates are depicted in Table S3. Initially, US simulation analyzed the deprotonation of the C8 methyl group on the isoalloxazine ring of FAD. In this FAD → INT1 simulation, the catalytic OH⁻ in the anion hole acts as a general base, abstracting the C8 methyl proton to form the reduced flavin species, overcoming an energy barrier of 21.5 kcal mol⁻¹ (Fig. 3A, D, and E). Following the formation of this intermediate (INT1), a nearby O2 molecule readily attacks the quinone-methide tautomer (INT1 → INT2) (Fig. 3B, D, and F). The resulting C8 methyl hydroperoxide anion is then protonated by a nearby water molecule, leading to the dissociation into OHC-FAD and OH⁻ across reaction coordinates RC3 and RC4 (INT2 → 8-fFAD). These two sub-steps were characterized by a feasible energy barrier of 22.8 kcal mol⁻¹ (Fig. 3C, D, G, and H).
Role of R87 in the mechanism of FAD conversion to 8-formyl FAD Analysis of the FOx catalytic site revealed that the arginine residue, R87, is located in van der Waals contact with the C8 methyl group on the isoalloxazine ring of FAD.Thus, we explored an alternative mechanism where the deprotonation of C8 methyl group occurs by transferring a proton to the deprotonated R87, facilitated by a nearby catalytic OH − along reaction coordinates RC1' and RC2' (Fig. 4).In these coordinated sub-steps, the transition state (TS) is characterized by an energy of 30.6 kcal mol − 1 at distances of approximately 1.0 and 1.2 Å for the reaction coordinates of R87 and C8 methyl deprotonations, respectively.Notably, this TS energy is 9.0 kcal mol − 1 higher than that observed in the FAD → INT1 simulation, where R87 is not directly involved, suggesting that a hydroxide ion in the anion hole is more favorable as a general base to initiate the reaction. The autocatalytic formation of 8-formyl FAD is dependent on R87, as evidenced by the R87A variant of FOx, which contained only FAD as indicated by a single species eluting at 4 min in the LC-MS chromatogram (Robbins et al. 2017).Consequently, further analysis was conducted to clarify R87's role in the formation of 8-formyl FAD.The UV-visible absorption spectra of freshly purified R87H and R87K variant of FOx were monitored over time, revealing the presence of both FAD and 8-formyl FAD in the R87K variant, while only FAD was detected in the R87H variant (Fig. S1).Thus, the R87H mutation impairs the capacity for self-catalytic conversion of FAD to 8-formyl FAD.To elucidate the underlying molecular mechanism, molecular dynamics simulation was performed on the WT, R87K and R87H FOx systems.The MD-equilibrated structure of the WT system revealed a well-organized OH − chain stabilized between the side chain of R87 and the C8 methyl group of FAD, as demonstrated by the two interaction distances R87:HN • • • OH − and OH − • • • FAD: H C8 methyl , each less than ∼ 4.0 Å (Fig. 5A, D, and E).In the R87K system simulation, the distance of K87:HN • • • OH − increased by about 1.0 Å compared to the WT system, thereby reducing the likelihood of OH − attacking the C8 methyl group (Fig. 5B, D, and E).Consequently, the efficiency of self-catalytic conversion of FAD to 8-formyl FAD in the R87K FOx is significantly slower than the autoxidation observed in WT FOx (Fig. S1).This decreased efficiency is also reflected in the reduced k cat value for R87K FOx (76 ± 0.1 s − 1 at 14 h) compared to that of WT FOx (102 ± 5.7 s −1 at 14 h), as depicted in Table S4.The mutant R87H results in a shorter side chain of H87, which fails to properly position the OH − group near the FAD: H C8 methyl atom.This misalignment leads to a dramatic decrease in the concentration of hydroxyl radicals within the catalytic center, substantially inhibiting the deprotonation of C8 methyl group of FAD (Fig. 5C-E).Consequently, the formate oxidizing activity is nearly lost in the R87H FOx (Table S5).These findings suggest that R87 does not act as an active site base for proton deprotonation in the formation of the quinone-methide tautomer intermediate.Instead, R87 likely plays a critical role in the stabilization of general base OH − group in the catalytic center during the formation of 8-formyl FAD. 
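The distance analysis described above can be reproduced with standard trajectory tools. The following minimal sketch (not the authors' actual CPPTRAJ/VMD workflow) uses MDAnalysis to track the two distances reported for the WT and mutant systems over a trajectory; the file names, residue numbers, and atom names are illustrative assumptions and would need to match the actual topology.

import MDAnalysis as mda
import numpy as np

# Hypothetical file names; in practice these would be the Amber topology and
# the 100-ns production trajectory of the WT (or R87K/R87H) system.
u = mda.Universe("fox_wt.prmtop", "fox_wt_prod.nc")

# Illustrative atom selections: a guanidinium N-H hydrogen of R87, the
# hydroxide oxygen (assumed residue name OH), and one C8-methyl hydrogen of FAD.
r87_hn = u.select_atoms("resid 87 and name HH11")
oh_ox = u.select_atoms("resname OH and name O")
fad_h = u.select_atoms("resname FAD and name HM81")

d_arg_oh, d_oh_fad = [], []
for ts in u.trajectory:
    d_arg_oh.append(np.linalg.norm(r87_hn.positions[0] - oh_ox.positions[0]))
    d_oh_fad.append(np.linalg.norm(oh_ox.positions[0] - fad_h.positions[0]))

d_arg_oh = np.asarray(d_arg_oh)
d_oh_fad = np.asarray(d_oh_fad)
print("R87:HN...OH-    mean %.2f A, sd %.2f A" % (d_arg_oh.mean(), d_arg_oh.std()))
print("OH-...FAD:H(C8) mean %.2f A, sd %.2f A" % (d_oh_fad.mean(), d_oh_fad.std()))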
Role of S94 in the mechanism of FAD conversion to 8-formyl FAD Prior to the full reduction of the FAD intermediate, the quinone-methide tautomer may be stabilized by forming a covalent linkage with catalytic residues in FOx (Robbins et al. 2017).In this mechanism, a serine residue (S94) adjacent to the C8 region of the isoalloxazine ring is proposed to perform a nucleophilic attack on this quinonemethide tautomer.Then, we investigated the formation of the covalent intermediate by exploring INT1' → INT2' process along the reaction coordinates RC3', RC4', and RC5'.The computed free energy barrier exceeded 30.0 kcal mol − 1 , indicating that the stabilization through the covalent linkage between S94 and FAD is impossible (Fig. 4).Another mechanism stabilizing the intermediate involves a glutamine (Q556) near the N(1)C(2) = O(2) region of the isoalloxazine ring, which could stabilize the negative charge on the pyrimidine moiety of the intermediate.Additionally, a hydrogen bonding network between S94 and two phosphoryl groups of FAD might be essential for the stable FAD binding.To further investigate the role of S94 in 8-formyl FAD formation, we mutated S94 to isoleucine (I94), which has a similarly sized side chain, and purified the S94I variant under the same condition as WT FOx.The purified S94I variant protein, obtained from the elution with 200 mM imidazole, exhibited a colorless appearance.Characteristic absorption at 370 and 450 nm associated with the cofactor FAD was absent in this fraction (Fig. S2).Furthermore, the formate oxidase activity of the S94I variant was significantly reduced compared to WT FOx (Table S6).These observations suggest that the reduced FAD binding capability of S94I variant may be caused by the absence of a hydroxyl group at I94.We then mutated S94 to threonine (T94), remaining the hydroxyl group on the side chain, and found that the S94T variant does bind flavins, albeit partially.The purified protein solution was yellow in color, with characteristic UV-visible absorption peaks at 370 and 450 nm (Fig. S3).To further study the corresponding mechanism, molecular dynamics simulation was performed for the WT, S94T and S94I FOx systems.In the simulation of WT and S94T FOx systems, the distances of S94: HG • • • FAD: O1A and T94: HG1 • • • FAD: O1A were centered at ∼ 1.8 Å, indicating the stable binding of FAD to the catalytic center through hydrogen-bonding interaction between the hydroxyl group of S94/T94 and the oxygen atom of the phosphoryl moiety on FAD (Fig. 6). In comparison to the WT and S94T systems, a higher degree of distance fluctuation was detected between I94:HG12 and FAD: O1A atoms in the S94I system (Fig. 6).These experimental and computational results demonstrated that the residues with hydroxyl side chains, such as S94 and T94, facilitate the binding of flavins into the catalytic center. 
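A simple way to compare how tightly the S94, T94, and I94 side chains anchor the FAD phosphate, as discussed above, is to summarize the per-frame donor-acceptor distances and the fraction of frames below a hydrogen-bond-like cutoff. This is a generic post-processing sketch; the cutoff and the synthetic input series are assumptions for illustration, not values from the paper.

import numpy as np

def hbond_summary(distances_angstrom, cutoff=2.5):
    """Mean, standard deviation, and fractional occupancy below a distance cutoff."""
    d = np.asarray(distances_angstrom, dtype=float)
    return d.mean(), d.std(), float((d < cutoff).mean())

# Synthetic placeholder series standing in for per-frame S94:HG...FAD:O1A
# (or T94:HG1 / I94:HG12 ... FAD:O1A) distances; real values would come from
# the trajectory analysis sketched above.
rng = np.random.default_rng(0)
series = {
    "WT":   rng.normal(1.8, 0.15, 10000),
    "S94T": rng.normal(1.8, 0.20, 10000),
    "S94I": rng.normal(3.5, 0.80, 10000),
}
for label, d in series.items():
    mean, sd, occ = hbond_summary(d)
    print(f"{label}: mean {mean:.2f} A, sd {sd:.2f} A, occupancy(<2.5 A) {occ:.2f}")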
Compared to the high activity of WT FOx (kcat of 102 ± 5.7 s⁻¹), the S94T variant exhibited a significantly reduced kcat value of 4 ± 0.1 s⁻¹ at atmospheric oxygen concentration after 14 h (Tables S1 and S7), indicating a substantial decrease in formate oxidase activity. To further elucidate the critical role of the S94 residue in maintaining formate oxidase activity, we monitored the orientation of the nucleophilic OH⁻ group during MD simulation. Notably, the attacking distance between the catalytic OH⁻ and the C8 methyl hydrogen atom of FAD in S94T FOx was significantly longer than that in WT FOx, resulting in decreased efficiency of proton transfer via the nucleophilic attack on the C8 methyl group of FAD (Fig. 7A). Additionally, we observed that the R87 residue, which plays a crucial role in stabilizing the general base OH⁻ group, exhibited minimal differences in hydrogen bonding with OH⁻ between the WT and S94T FOx systems (Fig. 7B). Interestingly, the distance between the C8 methyl group of FAD and the CB atom of T94 was 1.5 Å longer than the corresponding distance in the WT FOx system (Fig. 7C). The additional methyl side chain in T94 introduces steric hindrance in the active center, causing the isoalloxazine ring of FAD to deviate slightly from the stably bound catalytic OH⁻ group. This deviation slows down the initial step of proton transfer in the formation of 8-formyl FAD by increasing the distance required for the nucleophilic attack. Overall, we conclude that the hydroxyl group of the S94/T94 residues facilitates FAD binding in the active center; however, the methyl side chain of T94 spatially impedes the proton transfer from the C8 methyl group of FAD to a general base, thus affecting the catalytic efficiency.

Discussion

Flavoprotein enzymes are extensively utilized in enzyme-catalyzed chemical synthesis due to the critical role of flavin cofactors in electron and proton transfer (Linke et al. 2023; Sun et al. 2022). The cofactor in FOx differs from those in other members of the GMC superfamily. In the presence of oxygen, the FAD cofactor of FOx spontaneously oxidizes to 8-formyl FAD (Doubayashi et al. 2011). Although this oxidation process is slow, it results in a notable increase in the enzyme's kcat value during the gradual oxidation of FAD to 8-formyl FAD. FOx is distinctive not only because it contains the rare 8-formyl FAD but also because it is the only enzyme identified so far in which 8-formyl FAD enhances the catalytic activity (Robbins et al. 2017). For enzymes, numerous factors can enhance catalytic efficiency, including alteration of the charge or pH of the microenvironment, reduction of the transition state energy, and improved induced fit between the active site and substrates. However, how 8-formyl FAD enhances the catalytic efficiency of FOx has not been sufficiently elucidated. In our study, we explored why 8-formyl FAD, rather than FAD, executes the catalytic function of FOx. Using QM/MM US simulation, we calculated the energy barrier for the deprotonation of the formate substrate and found that 8-formyl FAD lowers this barrier by 2.7 kcal mol⁻¹ compared to FAD. This finding suggests that 8-formyl FAD within the active site of FOx exhibits a stronger oxidizing capability and more readily accepts electrons from formate.
However, when flavin cofactors are free in solution, it is difficult to produce such functional modifications. This highlights the importance of the enzyme's active site environment for the maturation and functional expression of flavin modifications (Leys and Scrutton 2016). Within this environment, critical non-covalent interactions occur between the cofactor and the side chains or backbones of amino acids, as well as interactions with molecular oxygen. In previous research on formate oxidase, it was hypothesized that a general base could deprotonate the C8 methyl group of FAD during the formation of 8-formyl FAD. Nevertheless, the specific identity of this base remained unresolved. It was also suggested that the active site might interact covalently with the deprotonated methyl group, thereby stabilizing the intermediate, although this residue likewise remained unidentified. Clearly, these investigations did not comprehensively reveal the formation mechanism of 8-formyl FAD in FOx. Our study therefore elucidated the mechanism of each reaction step within the active site of FOx, specifically identifying the general base responsible for deprotonating the C8 methyl group as a catalytic hydroxide ion (OH⁻) situated near the active center. Subsequently, an adjacent O2 molecule attacks the quinone-methide tautomer, further stabilizing the deprotonated FAD intermediate.

Moreover, through detailed examination of the crystal structure, we identified two polar amino acid residues, R87 and S94, located within 5 Å of the C8 methyl group of 8-formyl FAD. Such polar residues generally play crucial roles in the catalytic function of enzymes, with arginine involved in acid-base catalysis and serine in proton transfer. We hypothesized that these residues played either a direct or an indirect role in the catalytic cycle. Our results indicated that R87 stabilizes the basic environment around the isoalloxazine ring rather than directly participating in proton transfer. S94 is essential for the hydrogen-bonding network that stabilizes the cofactor, and mutation of S94 to a non-hydroxyl residue prevented cofactor binding. Additionally, we found that the oxidative maturation of the cofactor is facilitated by Q556, which stabilizes the negative charge of the intermediates.

Fig. 6 (A) Hydrogen-bonding interaction between the hydroxyl group of S94 and the oxygen atom of the phosphoryl moiety on FAD. (B) Changes in the distances between S94:HG and FAD:O1A, T94:HG1 and FAD:O1A, as well as I94:HG12 and FAD:O1A, observed over the 100-ns MD simulation.

The enhanced catalytic properties of 8-formyl FAD could lead to the development of new biocatalysts with superior redox characteristics. For application in other flavoenzymes, it is essential to replace residues around the C8 methyl group with suitable basic amino acids and to ensure the presence of amino acids that can stabilize negatively charged intermediates near the isoalloxazine ring. Furthermore, 8-formyl FAD exhibits absorption peaks at longer wavelengths, indicating that it requires a lower excitation energy than FAD. When flavoenzymes dependent on 8-formyl FAD are used instead of those relying on FAD in photoenzymatic systems, they can be activated with longer-wavelength, lower-energy light. This approach not only reduces photodamage to proteins but also enhances the long-term viability of photoenzymatic systems. Consequently, substituting natural FAD with 8-formyl FAD in photoenzymatic reactions is a viable strategy.
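The statement that the red-shifted absorption of 8-formyl FAD corresponds to a lower excitation energy can be made concrete with the photon-energy relation E = hc/λ. The wavelengths below are simply the absorption features mentioned in this work, and the calculation is illustrative rather than part of the original study.

# Photon energy E = h*c/lambda, expressed in electronvolts.
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C_NM_S = 2.99792458e17     # speed of light, nm/s

def photon_energy_ev(wavelength_nm):
    return H_EV_S * C_NM_S / wavelength_nm

print(photon_energy_ev(450.0))  # ~2.76 eV, near the FAD absorption band
print(photon_energy_ev(512.0))  # ~2.42 eV, near the red-shifted 8-formyl FAD band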
Conclusions In this study, we systematically explored the specific reaction mechanism involved in the self-conversion of FAD to 8-formyl FAD.This process involves several steps: the deprotonation of the C8 methyl group on the isoalloxazine ring of FAD (FAD → INT1), the formation of the C8 methyl hydroperoxide anion (INT1 → INT2), and the dissociation into OHC-FAD and OH − groups (INT2 → 8-fFAD).In the FAD → INT1 reaction, a catalytic water molecule serves as a general base to deprotonate the C8 methyl group of FAD, forming a quinone-methide tautomer intermediate.The proximal R87 residue stabilizes the OH − group, thereby facilitating the necessary proton transfer.Subsequently, the intermediate is further stabilized by a hydrogen bonding network involving S94 and two phosphoryl groups of FAD.Following this stabilization, a nearby O 2 molecule quickly attacks the quinone-methide tautomer (INT1 → INT2).The result- ing C8 methyl hydroperoxide anion is then protonated by a nearby water molecule, leading to its dissociation into OHC-FAD and OH − groups (INT2 → 8-fFAD).Our results reveal that although residues R87 and S94 are not directly involved in the chemical reactions, the autocatalytic formation of 8-formyl FAD depends on these residues due to the noncovalent interactions they facilitate, which stabilize the nucleophilic OH − group and the quinone-methide tautomer intermediate. Materials E. coli Trans5α and E. coli BL21(DE3) strains, along with ProteinRuler® IV, were purchased from TransGen Biotech (Beijing, China).The QuickMutation Site-Directed Mutagenesis Kit was purchased from Beyotime Biotechnology (Shanghai, China).The Plasmid Mini Kit was purchased from Omega (Guangzhou, China).The BCA Protein Quantification Kit was purchased from Sparkjade Biotechnology (Shandong, China).The One-Step PAGE Gel Fast Preparation Kit was purchased from Vazyme Biotech (Nanjing, China).The Ni-Sepharose 6FF column was purchased from Solarbio Science & Technology (Beijing, China).Sodium formate and sodium acetate were purchased from Yuanye Bio-Technology (Shanghai, China). Cloning, expression and purification of FOx The DNA sequence encoding FOx was synthesized by Hanbio Biotechnology (Shanghai, China) and subsequently cloned into the NdeI-HindIII restriction sites of pET28a(+) expression vector.The recombinant plasmid was transformed into E. coli Trans5α for sequence verification.After confirming the insertion of recombinant FOx gene, pET28a(+)-FOx was transformed into E. coli BL21(DE3) expression host.The E. coli BL21(DE3) cells were cultured at 37 ℃ with shaking at 180 rpm in LB medium supplemented with 50 µg/mL kanamycin.The expression was induced with 0.1 mM IPTG at an OD 600 of 0.6-0.8, and the culture was continued at 20 ℃ for 7 h under the same shaking conditions.The cells were harvested by centrifugation and frozen at -20 ℃.For the purification of FOx, the cells were resuspended in 50 mM phosphate-buffered saline (PBS, pH 7.5) and disrupted by sonication.The cell lysate was clarified by centrifugation, and the supernatant was applied to a Ni-Sepharose 6FF column pre-equilibrated with 50 mM PBS (pH 7.5).Bound proteins were eluted with the same buffer containing 200 mM imidazole, and the protein concentration was determined using BCA Protein Quantification Kit.All the procedures were performed under light-protected conditions, either at 4 ℃ or on ice. 
Construction of variant proteins

The R87H, R87K, S94I and S94T FOx variants were engineered using the QuickMutation Site-Directed Mutagenesis Kit, with primers listed in Table S8. All mutated plasmids were confirmed by DNA sequencing analysis performed by Comate Bioscience Co. (Changchun, China), and those with the correct sequences were transformed into E. coli BL21(DE3) competent cells and stored at -80 ℃. Each variant was purified following the protocol described in the section "Cloning, expression and purification of FOx".

Spectrum measurement

The UV-visible absorption spectra of purified FOx wild type and its variants were recorded at 4 ℃ after incubation for 0, 2, 4, 14, 22, and 36 h using a Shimadzu UV-2700 spectrophotometer (Shanghai, China). Measurements were conducted in a 1-cm quartz cuvette at a low scanning speed.

FOx activity assay

The activity of FOx was determined using the horseradish peroxidase-based ABTS method, with sodium formate and ABTS as substrates. The reaction mixture, containing ABTS (5 mM), horseradish peroxidase (10 U) and varying concentrations of sodium formate (0-400 mM), was prepared in 50 mM sodium acetate buffer (pH 5.0). A total volume of 1 mL of the mixture was used for each assay. Reactions were performed at 25 ℃, and changes in the absorbance at 420 nm were measured over the first 30 s following the initiation of the reaction at 0, 14, 24 and 36 h. The molar extinction coefficient for ABTS at 420 nm is 36,000 M⁻¹ cm⁻¹. The reaction rates and kinetic parameters for FOx under different concentrations of sodium formate were calculated using the Michaelis-Menten equation and the turnover number equation (Srinivasan, 2022), V0 = Vmax[S]/(Km + [S]) and kcat = Vmax/[E], where V0 is the initial reaction rate, Km is the Michaelis constant, Vmax is the maximum reaction rate, [S] is the substrate concentration, kcat is the turnover number, and [E] is the concentration of FOx enzyme.
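To illustrate how Vmax, Km and kcat can be extracted from initial-rate data of the kind described above, a minimal fitting sketch is given below. The absorbance-to-rate conversion uses the stated ABTS extinction coefficient (36,000 M⁻¹ cm⁻¹, 1-cm path); the example rate values and the enzyme concentration are placeholders, not measured data, and any ABTS:H2O2 stoichiometric factor of the coupled assay is omitted for simplicity.

import numpy as np
from scipy.optimize import curve_fit

EPS_ABTS = 36000.0   # M^-1 cm^-1 at 420 nm
PATH_CM = 1.0        # quartz cuvette path length

def slope_to_rate(dA420_per_s):
    """Convert an absorbance slope (1/s) into an ABTS oxidation rate (M/s)."""
    return dA420_per_s / (EPS_ABTS * PATH_CM)

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Placeholder initial rates at illustrative formate concentrations (mM).
s_mM = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)
v0 = slope_to_rate(np.array([0.05, 0.09, 0.18, 0.27, 0.36, 0.43, 0.47]))  # dA/s, illustrative

popt, _ = curve_fit(michaelis_menten, s_mM, v0, p0=[v0.max(), 50.0])
vmax, km = popt
enzyme_conc = 1.0e-7   # mol/L, assumed active-site concentration from the protein assay
print(f"Vmax = {vmax:.2e} M/s, Km = {km:.1f} mM, kcat = {vmax / enzyme_conc:.1f} 1/s")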
Calculation details of MD simulation

To elucidate the role of specific residues in the catalytic mechanism of FOx, a series of model systems were utilized, including WT FOx and its mutants R87K, R87H, S94I and S94T (Table S9). The initial geometries for the simulation were derived by modifying the crystal structure of the FOx monomer (PDB ID: 3Q9T) (Doubayashi et al. 2011). All water molecules in the original crystal structure were removed, and hydrogen atoms were added to the 8-formyl FAD cofactor. The variants were generated using the Mutagenesis tool in PyMOL software.

The MD simulation of the FOx systems was conducted using the GPU-accelerated version of AMBER 2020 (Götz et al. 2012). The force field parameters for FAD, 8-formyl FAD, formate, oxygen, and hydroxide ions were generated using the Generalized Amber Force Field (GAFF) (Özpınar et al. 2010). Atomic charges for these molecules were obtained through RESP charge fitting, based on electrostatic potentials calculated at the B3LYP/6-311G(d,p) level of theory (Scott and Radom 1996). Standard residues were modeled using the Amber ff19SB force field (Tian et al. 2020). Hydrogens were added to the model structures using the LEaP module of AMBER, and each complex was solvated in a periodic boundary box filled with TIP3P water and neutralized with Na+ counterions (Mark and Nilsson 2001). The minimum distance between the enzyme surface and the edge of the water box was set to 10.0 Å. All bonds involving hydrogen atoms were constrained using the SHAKE algorithm (Kräutler et al. 2001). Long-range electrostatics were managed with the particle mesh Ewald (PME) algorithm (Darden et al. 1993), with Lennard-Jones and electrostatic interaction cutoffs set at 9.0 Å. Energy minimization with the steepest descent and conjugate gradient algorithms was performed to eliminate atomic clashes in the initial structure. Subsequently, the systems were gradually heated from 0 K to 303 K in an NVT ensemble using Berendsen temperature coupling (Berendsen et al. 1984), and then equilibrated in an NPT ensemble employing Parrinello-Rahman pressure coupling (Berendsen et al. 1984). Production MD simulations were conducted for 100 ns under these conditions. The MD simulation trajectories were visualized and analyzed using the CPPTRAJ module (Roe and Cheatham 2013) and VMD software (Humphrey et al. 1996).

Calculation details of QM/MM US simulation

To compare the catalytic activities of the 8-formyl FAD and FAD cofactors, two additional systems were derived from the original WT FOx model. In the first system, formate was docked into the active site of WT FOx bound to 8-formyl FAD. In the second system, the cofactor was converted to FAD while retaining the configuration established in the first system. Additionally, to evaluate the feasibility of the two proposed mechanistic schemes for the formation of 8-formyl FAD, corresponding models were developed based on the WT FOx system. For mechanistic Scheme I, as illustrated in Fig. 3, the cofactor was configured as FAD. To support the multi-step proton transfer proposed in Scheme I, four hydroxide ions were placed proximal to the C8 atom of FAD, alongside two oxygen molecules and four water molecules near the N5 atom. For mechanistic Scheme II, depicted in Fig. 4, a model was constructed with a water molecule, an oxygen molecule, and a hydroxide ion situated near the C8 atom.

The QM/MM US simulation (Kästner et al. 2006) was performed using the sander module implemented in Amber 20 (Götz et al. 2012). In evaluating the differential catalytic capabilities of 8-formyl FAD versus FAD, the quantum mechanics (QM) component of each system included the formate substrate, the side chains of residues Y99, E394, F405, V421, F512, H513, R556 and the cofactor FAD/8-formyl FAD, excluding the phosphate group. For the analysis of mechanistic Scheme I, which relates to the formation mechanism of 8-formyl FAD, the QM component included the side chains of residues S94, L97, N98, Y99, T101, R556, N558, along with a water molecule, an oxygen molecule, a hydroxide ion, and the cofactor FAD, also excluding the phosphate group. Correspondingly, in the calculation for mechanistic Scheme II, the QM component comprised the side chains of residues R87, S94, L97, N98, Y99, T101, R556, N558, four water molecules, two oxygen molecules, four hydroxide ions, and the cofactor FAD without the phosphate group. Each QM/MM US simulation employed the second-order density functional tight binding (DFTB2) level of theory (Seabra et al. 2007; Elstner et al. 1998; Niehaus et al. 2001) to describe the QM component. The interface between the QM and molecular mechanics (MM) components was treated with link hydrogen atoms to ensure smooth boundary conditions. The remainder of the FOx structure was assigned to the MM component and described using the Amber ff19SB force field (Tian et al. 2020). Long-range electrostatic interactions between the QM and MM parts, as well as within the QM part, were computed using the particle mesh Ewald (PME) algorithm (Darden et al. 1993).

The QM/MM US simulation was employed to explore the postulated reaction pathways and calculate the reaction free energy profile (FEP) (De Vivo et al.
2008;Rosta et al. 2011).The initial structure for each QM/MM US simulation was derived from a frame captured during MD simulation.The sampling windows along the reaction coordinate (RC) path were evenly spaced at 0.1 Å intervals, and each window was subjected to a harmonic potential of 200.0 kcal mol − 1 .The first structure of each RC represented the structure in the last window of the preceding RC.Results from the US simulation were analyzed using the weighted histogram analysis method (WHAM) to obtain unbiased free energy profiles (Kumar et al. 1992).Additionally, minimum energy paths were investigated using the minimum energy path surface analysis (MEPSA) program (Marcos-Alcalde et al. 2015).Distances between atoms and representative snapshots from these analyses were visualized using the VMD (Humphrey et al. 1996) and PyMOL software. Fig. 3 Fig. 3 Free energy profiles for the deprotonation of the C8 methyl group on the isoalloxazine ring of FAD catalyzed by the catalytic OH − group (A), the formation of the C8 methyl hydroperoxide anion (B), and the protonation of the resulting hydroperoxide anion to generate OHC-FAD and OH − groups (C).(D) Mechanistic Scheme I for the self-catalytic conversion of FAD to 8-formyl FAD.(E-H) The structures of FAD, INT1, INT2 and 8-fFAD present the formation mechanism of 8-formyl FAD Fig. 4 Fig. 4 (A) Mechanistic Scheme II for the self-catalytic conversion of FAD to 8-formyl FAD.Free energy profiles for the deprotonation of C8 methyl group of FAD catalyzed by the catalytic R87 residue (B), and for the formation of the covalent intermediate resulting from the attack by S94 on the quinonemethide tautomer (C) Fig. 5 Fig. 5 Crucial interactions facilitating the attack by OH − in the anion hole on the C8 methyl group for WT FOx (A) and its mutants R87K (B) and R87H (C).The changes in the distances between R87/K87/H87:HN and OH − (D), as well as between OH − and FAD: H C8 methyl (E) throughout the 100-ns MD simulation Fig. 7 Fig. 7 (A) Changes in the attacking distance between the catalytic OH − and the H C8 methyl group atom of FAD in both WT and S94T FOx.(B) Changes in the hydrogen bonding distance between R87 and the catalytic OH − in both WT and S94T FOx.(C) Changes in the distance between the C8 methyl group of FAD and the CB atom of S94/T94 in both WT and S94T FOx.(D) Visualization of key interaction in both WT and S94T FOx
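To make the umbrella-sampling post-processing described in the Methods above more concrete, the following is a minimal, self-contained sketch of 1D WHAM (the iterative equations behind the weighted histogram analysis method). It is a generic illustration, not the code used in this work; the synthetic samples correspond to a flat underlying free energy surface, and kT, the 0.1 Å window spacing and the 200 kcal/mol force constant are chosen only to mimic the reported setup.

import numpy as np

KT = 0.602          # kcal/mol at ~303 K
K_BIAS = 200.0      # harmonic force constant, kcal/(mol*A^2)
CENTERS = np.arange(1.0, 2.05, 0.1)   # window centers spaced at 0.1 A (illustrative range)

def wham_1d(samples, centers, k_bias, edges, n_iter=5000, tol=1e-8):
    """Unbiased PMF from umbrella-sampling windows via the WHAM equations."""
    mids = 0.5 * (edges[:-1] + edges[1:])
    counts = np.array([np.histogram(s, bins=edges)[0] for s in samples], dtype=float)
    n_i = counts.sum(axis=1)                                   # samples per window
    bias = 0.5 * k_bias * (mids[None, :] - centers[:, None]) ** 2
    f = np.zeros(len(samples))                                 # window free energies
    for _ in range(n_iter):
        denom = (n_i[:, None] * np.exp((f[:, None] - bias) / KT)).sum(axis=0)
        p = counts.sum(axis=0) / denom                         # unbiased probability
        f_new = -KT * np.log((p[None, :] * np.exp(-bias / KT)).sum(axis=1))
        f_new -= f_new[0]
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    pmf = -KT * np.log(np.where(p > 0, p, np.nan))
    return mids, pmf - np.nanmin(pmf)

# Synthetic samples: each window drawn from its biased distribution assuming a
# flat underlying surface, so the recovered PMF should be approximately flat.
rng = np.random.default_rng(0)
sigma = np.sqrt(KT / K_BIAS)
samples = [rng.normal(c, sigma, 5000) for c in CENTERS]
edges = np.linspace(0.95, 2.05, 45)
xs, pmf = wham_1d(samples, CENTERS, K_BIAS, edges)
print(np.nanmax(pmf))   # close to 0 kcal/mol, within statistical noise, for the flat test case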
Ambiguities regarding the relationship between office lighting and subjective alertness_ An exploratory field study in a Dutch office landscape The current field study investigated the ambiguities regarding the relationship between office lighting and subjective alertness. In laboratory studies, light-induced effects were demonstrated. Field studies are essential to prove the validity of these results and the potential recommendations for lighting in future buildings. Therefore, lighting measurements and subjective health data were gathered in a Dutch office environment. Health data was collected by questionnaires and includes data on functional health, wellbeing and alertness. Multiple general, environmental, and personal variables were identified as confounders for the relationship between light and alertness. For six out of the total 46 participants a statistically significant correlation was found between horizontal illuminance (Ehor) and subjective alertness. Further research needs to incorporate a larger sample size and more potential confounders for the relationship between Ehor and alertness. Further research including these recommendations may explain why certain people respond to light while others do not. Introduction Light entering the eyes reaches the rods and cones on the retina which then stimulate vision. In addition to these two photoreceptors (i.e., rods and cones), a third photoreceptor was discovered approximately fifteen years ago [1], the so called intrinsically photosensitive retinal ganglion cell (ipRGc). These ganglion cells capture light (i.e., effective irradiances) which has entered through the eyes and these cells initiate processes in both the Image-Forming (IF) and Non-Image-Forming (NIF) centres of the brain. Previous studies indicated effects of light on human's health and well-being [2][3][4][5]. These effects can be acute (short-term) or circadian (long-term) effects. Acute effects are, for example, alerting effects or distraction due to glare or flicker. Circadian effects are caused due to exposure to a lighting condition for a certain period of time and are, for example, the regulation of hormones or the organisation of the biological clock. The production of the hormone melatonin is one example of a hormone which is influenced by light exposure. Zeitzer et al. [6] developed dose-response curves in order to determine a relation between light and melatonin. A mismatch between light exposure and individuals day/night rhythm can lead to a disrupted circadian system [3]. This disruption is associated with poor health and a lower work performance [3]. In addition, office lighting is often demonstrated to directly affect work performance [7][8][9]. Demonstrated direct and indirect effects of (office) lighting on health and work performance highlight the importance of the most appropriate light exposure at the right moment of time. An individual's daily light exposure consists of contributions from daylight and electric light sources. One of the current challenges is to determine the individual's need for light to enhance their health. Since individuals differ in experiences, sensitivity, and preferences, each individual has different responses to light exposure [10]. Therefore, it is recommended to investigate the relationship between light and health based on personal lighting conditions [11,12]. The relationship between (either general or personal) light exposure and occupational health is investigated in multiple studies [7,[13][14][15][16]. 
The experiments took place in laboratories, in simulated office rooms, or in realistic office buildings. The majority of the experimental studies included in the review of van Duijnhoven et al. [17], was performed under laboratory conditions whereas employees may react and behave differently in a real work environment. The actual effects of office light exposure on an employee's health need to be investigated and validated in real office environments. In order to investigate the relationship between office lighting and any outcome measure (e.g., occupational health, subjective alertness), the lighting environment needs to be identified. Identifying a lighting environment comprises multiple lighting measurements. Illuminances and correlated colour temperatures are the most common measures to map a certain lighting situation [17]. Besides these two light parameters, the CIE proposed a protocol for describing lighting in an indoor environment including people, context, lighting systems and components, room surface light levels and distribution, task details, task area light distribution, high-luminance areas, modelling, colour appearance, and dynamic effects [18]. In addition, light measurements can be performed continuously (once per set interval, e.g. 1 s or 1 min) or at specific moments during the day. In addition, measurements can be performed person-bound or location-bound [11]. Furthermore, light measurements can be performed inside or outside. To the authors' knowledge and based on the literature review [17], this is the first field study which investigates the relationship between personal lighting conditions lighting and subjective alertness (SA), both measured at the same timestamp. No intervention to the lighting system was introduced in this study. All participants were exposed to their regular lighting environment. The study described in this research paper included continuous location-bound measurements to identify the indoor lighting environment and questionnaires to gather information about the health outcome measures (e.g., SA). The study was conducted as part of a larger research project investigating the potential impact of office lighting on occupational health in office landscapes. The aim of this experiment was to investigate the ambiguities regarding the relationship between office lighting and SA. It was expected that the investigation of this relationship in a field study would be challenging due to multiple potential confounders. Another aim of this study was to search for aspects which potentially explain the relationship between horizontal illuminance (E hor ) and SA in order to be taken into account for future (field) studies. All considered variables in this study were categorized into general, environmental, and personal variables. General variables consisted of day and time of the day, environmental variables were light, temperature and relative humidity, and the personal variables were user characteristics, self-reported sleep quality and health scores. It was expected that SA was related to all three types of variables. In addition, since individuals respond differently to changes in lighting conditions, it was expected that the correlation between SA and E hor differed between the participants (i.e., that the correlation was significant for a percentage of the participant sample size). Finally, it was expected that differences in correlations (i.e., between SA and E hor ) between the participants could be explained through the personal variables. 
Methods The field experiment was performed during one 5-day work week in May 2016 in a two-floor office building in the Netherlands. The weather conditions varied from an overcast sky on Monday, Tuesday, and Wednesday towards a clear sky on Thursday and Friday. The dawn and dusk times were around the local times 5:30 and 21:45 respectively. The local times related to the daylight saving time in the Netherlands (March, 27 th till October 30 th , 2016). The office hours of all the participants fell in this daylight period. Office environment The study location was a two-floor office building in the West of the Netherlands (Hendrik-Ido-Ambacht) (see Fig. 1). This building was renovated in 2015 and transformed from a closed structure to an open structure with office landscapes. This office transformation is part of the new Flexible Working Arrangements (FWA) [19]. Companies increasingly support this working practice in order to improve employee's productivity at work. The office building of the current study consists of two floors, each consisting one large office landscape. On the first floor there is one separate office landscape on the North side and there are four office spaces enclosed with glass throughout the whole office building. The first floor contains 52 desks and the ground floor contains 31 desks. Office lighting The west façade on the ground floor contained daylight openings without sun shading devices. In contrast, on the first floor, the building façade was more open and this façade consisted of sun shading devices (see Fig. 2 and Fig. 3). It was not recorded when the shading devices were open or closed. In addition to the presence of daylight, electric lights were installed. The office landscapes were lit by dimmable suspended luminaires (Prolicht, Glorius, Ø1400 7x14//24 W DALI, see Fig. 4) and dimmable LED spots (Quadro LED reflector 31 W 2100 l m 3000 K or Quadro LED Reflector 53 W 2400 l m, see Fig. 5). The electric lighting in the office landscapes was on during office hours and dimmed based on the amount of daylight. The dimming levels (0-100%) were logged in the lighting system. There were no desk lights available at the desks. Most lighting recommendations for Dutch office buildings are horizontally focused [20]. In earlier times, when most offices were paperbased, it was important to focus on the horizontal light levels. Recently, the vertical lighting conditions (e.g., vertical illuminance) are more important due to the digital world the office workers are currently working in. However, due to practical reasons, only E hor at desk level were measured in this study. In order to gather continuously measured E hor at all work places throughout the office building, the non-obtrusive method (Location-Bound Estimations, i.e. LBE) developed by van Duijnhoven et al. [11,12] was applied. This method consists of reference locations at which continuous measurements are performed and predictive models between the reference locations and all other workplaces (i.e., outcome locations 11 ) inside the office in order to estimate the lighting conditions at all workplaces. Between two and four relation measurements (between reference and outcome locations) [11] were performed per outcome location to create the predictive models. During the relation measurements, an overcast sky prevented direct sunlight entering the office building. The average fit (i.e., R 2 ) between the relation measurements and the developed predictive models was 0.98 with the best at 0.99 and the worst at 0.89. 
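The location-bound estimation step can be illustrated with a simple regression sketch: a handful of paired relation measurements link a reference photometer to a workplace, and the fitted model is then applied to the continuous reference time series. This is only a schematic illustration of the published LBE method (the actual models and inter/extrapolation procedure are described in van Duijnhoven et al. [11,12]); the numbers below are invented for demonstration.

import numpy as np

# Paired relation measurements (lx): reference location vs. one workplace.
e_ref_rel = np.array([250.0, 600.0, 1100.0, 1900.0])
e_desk_rel = np.array([205.0, 530.0, 980.0, 1720.0])

# Least-squares linear predictive model E_desk ~ a*E_ref + b.
a, b = np.polyfit(e_ref_rel, e_desk_rel, 1)
pred = a * e_ref_rel + b
ss_res = np.sum((e_desk_rel - pred) ** 2)
ss_tot = np.sum((e_desk_rel - e_desk_rel.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"E_desk = {a:.3f} * E_ref + {b:.1f} lx  (R^2 = {r2:.3f})")

# Apply the model to a minute-resolution reference series to estimate the
# horizontal illuminance at the workplace for every minute of the work week.
e_ref_continuous = np.array([300.0, 450.0, 900.0, 1500.0, 2100.0])  # placeholder series
e_desk_estimated = a * e_ref_continuous + b
print(e_desk_estimated.round(0))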
The predictive models were applied using inter-and extrapolation of the relation measurement data points (more information regarding the LBE method can be found in other papers of van Duijnhoven et al. 11,12 ). The continuous estimated lighting conditions at all workplaces were used for the analysis of the relationship between light and subjective alertness in this study. In this study, E hor was continuously (once per minute) measured at three reference locations throughout the office building. Fig. 1 shows the floor plans of the office building in which the red dots indicate the three measurement locations. Two measurement locations (0.1 and 0.2, see Fig. 1) were situated on the ground floor, respectively at a distance of 6 m and 2 m from the facade, one (1.1) was located on the first floor, at a distance of 6.5 m from the facade. The three locations were spread throughout the office building and chosen based on a prior observation before the start of the study regarding the occupancy of the desks during the experiment period. E hor at desk level was measured using Hagner SD2 photometers. The estimated E hor at all desks used by participants during the study, varied between 219 lx and 4831 lx throughout the work week. Two days (Monday and Wednesday) the maximum E hor was around 2200 lx whereas the maximum E hor on the other three days reached over 4000 lx. On the first floor, the sun shading devices caused fast decreases in E hor whereas a more closed façade on the ground floor led to a lower variation in E hor compared to the measurements on the first floor ( Fig. 6). Subjective alertness The lighting measurements were accompanied by questionnaires completed by employees. Participants received a unique participant number after signing the informed consent form, in order to analyse all data anonymously. Four questionnaires were distributed during the day via participants' work email addresses. The participant number and desk number were asked at the beginning of each questionnaire. Desk numbers were asked because of a flexible workplaces policy in the office building. In reality, there were only limited changes of workplaces. The 46 participants worked at 49 different desks throughout the experiment period. Within the questionnaires, the Karolinska Sleepiness Scale [21] (KSS) was applied to measure SA. The KSS measures on a scale from 1 to 10 providing 1 = extremely alert and 10 = extremely sleepy [21]. The KSS questionnaire refers to the sleepiness level the last 5 min before completing the questionnaire and is a non-obtrusive way to investigate office workers' alertness. The four questionnaires were distributed at 9 a.m., 11:15 a.m., 2 p.m., and 4:15 p.m. Additional measures Besides the E hor and SA, additional aspects were objectively and subjectively measured in order to obtain more information about the work environment and the participant's conditions. Objective measures In addition to the objective lighting measurements, temperature and relative humidity were continuously measured at the three reference locations. Rense HT-732 transmitters were used for measuring. Throughout the experiment period (i.e., five consecutive days between 8:30 h and 17:30 h), the temperature measured at the three reference locations varied between 21.8°C and 24.8°C (X ± s = 2.7 ± 0.402°C). The temperatures on Tuesday varied the most (s = 0.439°C). 
The mean temperature measured at the first floor was slightly higher (X = 22.91°C) compared to the two reference locations at the ground floor (X = 22.85°C and 22.32°C, respectively). Relative humidity measured at the three reference locations varied between 32.9% and 57.5% (X ± s = 46.7 ± 5.52%). The variation in relative humidity was the highest on Friday (s = 4.83%). Measurements on the first floor showed a lower mean relative humidity (X = 42.7%) compared to the two measurements on the ground floor (X = 44.4% and 52.9%, respectively). The standard deviation for the three measurement locations was approximately 3%. Subjective measures The subjective KSS data was also extended with more survey results. The Short-Form 36 items (SF-36) [22] is a set of easily administered quality-of-life measures and was used to measure functional health and wellbeing from the individual's perspective [22,23]. This questionnaire was distributed only once, at the beginning of the study period. The health of employees is described by the World Health Organisation in the definition of occupational health: a combined term which includes all aspects of health and safety in the workplace, ranging from prevention of hazards to working conditions [24]. The health data from the SF-36 health questionnaire resulted into eight aspects: physical functioning (PF), role physical (RP), bodily pain (BP), general health (GH), vitality (VT), social functioning (SF), role emotional (RE), and mental health (MH). All aspects were assessed using a 0-100 score, 100 indicating the healthiest. An extra question concerning sleep quality was added to every questionnaire at 9am. The statement 'I slept well last night' with a 5-point scale answer possibility (completely agree agreeneither agree nor disagreedisagreecompletely disagree) was added to the questionnaire to include self-reported sleep quality. Participant sample Participants were recruited after providing general information about the study. 54 out of 70 employees (i.e., response rate = 77%) agreed to participate and signed the informed consent form. Participation was voluntary and anonymous. In total, 570 completed questionnaires were collected. 46 (22 male and 24 female) participants filled in at least three questionnaires. The average number of completed questionnaires was 12 with a maximum of 20 (four questionnaires per day for 5 consecutive days). The median age was "35-44 years", approximately 65% of the participants reported to have a 5-day work week and the average working hours regarding all participants were 7.7 hours per work day. The majority (56.5%) of the participants used corrective lenses (glasses or contacts) and that was most of the time (57.7%) due to myopia. Nearly all (i.e., 93.5%) participants rated their general health as good, very good, or excellent. Data analysis The objective and subjective data were analysed using MATLAB R2015a and SPSS Statistics 22. The data analysis consisted of four steps. First, Kendall's tau correlation coefficients were calculated between SA and other variables potentially being a confounder in the relationship between E hor and SA. All subjective alertness scores from the data set were included in these correlation analyses. These non-parametric correlations were used because the majority of the data was not normally distributed and because the SA values in the analysis were ordinal variables. All the tests were two-sided using a significance (p) level of 0.05 to indicate statistical significance. 
Secondly, the relationship between E hor and SA was investigated. The estimated E hor at the specific desks (i.e., where the participants were working) for the same time of the day as filling in the questionnaires were selected to perform the statistical analysis. All E hor together with the KSS data were tested on significant correlations. Individual differences for filling in the KSS required a within-subject statistical analysis. Thirdly, partial correlations were calculated for the relationship between E hor and SA including all variables identified as confounder in the first step of the data analysis. The last step was to investigate the differences between two groups: the participants with a significant correlation between E hor and SA and the participants without this significant correlation. The non-parametric Mann-Whitney test was applied to test whether these differences between both groups were significant. Exact significance levels were used due to relatively small sample sizes. Results This section provides results regarding aspects correlating with SA (section 3.1), the relationship between E hor and SA with and without confounders (section 3.2), and the differences between participants with a significant correlation between E hor and SA and the participants where this correlation was not significant (section 3.3). Aspects correlating with SA In this paper, the tested variables which potentially predict SA were categorized into general, environmental, and personal variables (see Fig. 7 and Fig. 8). Day of the week and time of the day were the included general variables. Light (i.e., E hor ), temperature, and relative humidity were the environmental variables. User characteristics, selfreported sleep quality, and health scores (i.e., determined with the SF-36) were the personal variables. All general, environmental, and personal variables which correlate significantly with SA were included as confounding variables when investigating the relationship between E hor and SA. General variables A significant correlation between KSS and day of the week indicated a slightly higher sleepiness in the beginning of the week (τ = −.13, p < .001) compared to the end of the week. In addition, a significant correlation between KSS and time of the day demonstrated higher subjective sleepiness towards the end of the day compared to the beginning of the day (τ = .11, p = .002). Although the correlations were low to medium, they were significant and both day and time should be included as potential confounders for SA. Environmental variables The correlations between E hor and SA (τ = .02, p = .526) and between temperature and SA (τ = .02, p = .526) were not significant. The correlation between relative humidity and SA, however, was significant (τ = .07, p = .027). Again, although the correlation was weak, relative humidity should also be considered as a potential confounder for SA. Personal variables Personal variables were subdivided into user characteristics, selfreported sleep quality, and health scores obtained from the SF-36 questionnaire. 3.1.3.1. User characteristics. In this paragraph, the relationships between multiple user characteristics and SA were determined based on correlations. A negative significant correlation (τ = −.14 p < .001) between gender (1 = male and 2 = female) and SA indicated that the female participants reported to be slightly more alert compared to the male participants. In addition, the use of corrective lenses correlated significantly with SA (τ = .08 p = .027). 
The correlation between age category and SA was not significant (τ = .04 p = .256). The participants were all working for the same company and performed similar work tasks. However, the number of work days a week and work hours a day differed between participants. SA did not correlate significantly with the number of work days (τ = −.03 p = .432). However, the number of work hours during a work day correlated significantly with SA (τ = −.17 p < .001). 3.1.3.2. Self-reported sleep quality. Self-reported sleep quality was obtained via one question in the morning questionnaire (9am). A significant correlation between this statement and SA indicated that self-reported sleep quality was a potential predictor for SA (τ = .17 p < .001). The positive correlation suggests that individuals who reported to disagree with the statement (i.e., "I slept well last night") reported to feel sleepier in the morning (KSS evaluation 9:00). Overview confounders All variables which showed a significant correlation with SA were included as potential confounders in the analysis of the relationship between E hor and SA (see Fig. 9). The general variables caused medium (τ = .30) effects on the explanation of the total variance in SA. The effect of the environmental aspect relative humidity on SA was small (τ = .10). The correlations between personal variables and SA were the strongest compared to the general and environmental variables. However, the correlations between personal variables and SA were still small to medium. Office lighting and subjective alertness In section 3.1.2, a non-significant correlation was described between E hor and SA (τ = .02, p = .526). Whereas this correlation was based on the data of all participants together, calculating correlations for each individual participant resulted in a group of six participants out of the total 46 for whom a significant correlation was found between E hor and SA (see Fig. 10). Five of the six correlations were negative correlations indicating office workers being more alert when exposed to a higher E hor . For one participant, a significant positive correlation was found. All negative correlations had a medium to large effect explaining the variance of SA and the effect regarding the positive correlation was medium (i.e., τ = −.52, τ = −.38, τ = −.72, τ = −.59, τ = −.48, and τ = .42). For the calculation of these correlations the number of data points varied between 7 and 17 per participant. Although Fig. 10 showed significant initial correlations between E hor and SA, the correlations needed to be calculated including the confounding variables identified in section 3.1.4. Partial correlations were calculated by inserting the confounding variables for all participants together and for the six specific participants for whom a significant correlation was found between E hor and SA excluding confounding variables. For all participants together, the correlation between E hor and SA remained non-significant when all confounders were included in the analysis. For the determination of the partial correlations for the individual participants, the majority of personal variables identified as potential confounders were not used as confounder as they did not vary within subjects. The only personal confounder included in the calculation of the partial correlations was the self-reported sleep quality as this could have varied throughout the five experiment days. Table 1 shows the correlations when including none, one, or more groups of confounders. 3.3. 
Differences between groups with and without significant initial correlation between E hor and SA Differences were analysed between the two groups: group 1: the group of 40 participants in which no significant correlation was found between E hor and SA and group 2: the group of 6 participants where this correlation (excluding confounders) was significant. SA in group 1 did not differ significantly from group 2 (U = 18274, z = −1.749, p = .08). The median SA was 3 in both groups; however, the mean in group 1 was slightly lower (3.46) compared to group 2 (3.89), which may suggest a slightly higher sleepiness in group 2. E hor , by contrast, differed significantly between the two groups (U = 16166, z = −3.199, p = .001). The mean E hor in group 1 was 981 lx whereas the mean E hor in group 2 was 862 lx. In addition to E hor , the environmental parameter temperature differed significantly between both groups (U = 15274, z = −3.835, p < .001). The mean temperature for group 1 was 22.85°C whereas the mean temperature for group 2 was slightly lower (22.69°C). The third environmental parameter, relative humidity, did not differ significantly between both groups (U = 18222, z = −1.733, p = .083). The mean relative humidity for group 1 was 45.46% and for group 2 46.31%. Group 2 included four male and two female participants, all aged between 25 and 44 years. They were working throughout the entire office building and their most performed task was 'using the computer'. Four participants of these six used corrective lenses, mostly because of myopia. None suffered from colour vision problems, but one participant indicated an unspecified medical eye problem. No large differences were noticed between the groups for these categorical variables location of the office worker inside the office landscape, gender, age category, most performed task, job type, the use and reason of corrective lenses, colour vision problems, and medical eye problems. The number The self-reported sleep quality in group 1 (mean = 2.41) did not significantly differ from group 2 (mean = 2.64). Regarding the eight health category scores, no significant differences were reported between both groups for PF, RP, SF, and MH. Bodily Pain (U = 36600, z = −4.506, p < .001), General Health (U = 34200, z = −5.106, p < .001), Vitality (U = 38000, z = −3.715, p < .001), and Role Emotional (U = 39600, z = −4.954, p < .001) differed significantly between both groups. Table 2 provides the health category means and standard deviations for both groups. Discussion The current study investigated the ambiguities regarding the relationship between E hor and SA, based on findings from a Dutch field study. The first step in the analysis was to identify aspects significantly correlating with SA in order to include these later as potential confounders while investigating the relationship between E hor and SA. Both investigated general variables (i.e., time and day), one environmental variable (i.e., relative humidity), and eight personal variables (i.e., gender, corrective lenses, working hours, self-reported sleep quality, VT, GH, SF, and MH) were found to significantly correlate with SA. The clothing value (clo) of the participants was not included and this may explain the absence of significance for the relation between air temperature and SA. All of the significant correlations were of small to medium size. 
This was in accordance with the hypothesis that all three types of variables (i.e., general, environmental, and personal) would influence SA and need to be included as confounders. The second step was to investigate the relationship between E_hor and SA. Initial correlations (excluding confounders) were calculated, and this showed a significant correlation between E_hor and SA for six participants out of the total 46. However, including the confounders identified in the first step removed all the significance for the relationship between E_hor and SA. Including the general or environmental confounders led to some differences in significance levels, whereas including the personal confounders led to no significant correlations at all anymore. This may indicate that personal variables had more influence on SA than E_hor had. These results are in contradiction to multiple lab studies demonstrating beneficial effects of light on SA [7,16,25]. This discrepancy may be explained by the amount, duration or timing of the light exposure or the absence of confounders. In the current study, the estimated E_hor varied throughout the entire office building at minimum between 232 lx and 2157 lx over a day and at maximum between 219 lx and 4831 lx throughout a week. Although some lab studies [7,16,25] used vertical illuminances, this E_hor range falls within their applied (corresponding) ranges of vertical illuminances (i.e., in Smolders et al. [7,25] 1000 lx versus 200 lx and in Maierova et al. [16] 1000 lx versus 5 lx). E_hor in the current study changed gradually over the day, whereas in the mentioned lab studies the contrast between the bright and dim light condition was more noticeable. The International Commission on Illumination (CIE) highlighted that the dose-response relationship between light exposure and daytime effects on alertness is essential information to determine whether or not illuminance recommendations during the day are adequate to support NIF functions (e.g., human health) [22]. In this study, six out of the total 46 participants had a significant initial correlation between E_hor and SA, excluding the confounders. However, when the confounders were included in the statistical analysis, the correlations for those six participants were no longer significant. It is of high importance to include all potential confounders while investigating the relationship between light and health. Multiple laboratory studies demonstrated effects of different lighting conditions on SA and human health [7,25,26]. The advantage of performing a lab experiment is that the researchers are able to control potential confounders (e.g., temperature, relative humidity, time and duration of the light exposure) and to change only the independent variable to be investigated. The benefit of performing a field study is that the results of tests in controlled environments (laboratory studies) can be validated in a real office environment, and this leads to realistic results. The major challenge of field studies is to investigate a specific relationship in a constantly varying office environment. This field study showed a significant correlation between E_hor and SA for six of the 46 participants, a 'response-to-light percentage' of approximately 13%. Similar results were found in another pilot field study performed in the Netherlands, i.e., for one out of the eleven participants (approximately 9%) a significant correlation (excluding confounders) was found between light and alertness level [27].
These percentages may indicate that not all individuals are equally sensitive to changes in the lit environment. The last step was to explore differences between the groups with and without a significant initial correlation between E_hor and SA. Remarkable was the significantly lower E_hor in the group where a significant relation between E_hor and SA was found (i.e., group 2) compared to the other group (i.e., group 1). In addition, group 2 reported significantly fewer work days but more work hours per day compared to group 1.
Table 1. Kendall's tau correlations between E_hor and SA including multiple control variables. Significance levels: * = p = .05, ** = p = .01, *** = p = .001, ns = not significant.
More working hours per day may cause higher sleepiness (not a significant difference, but group 2 showed a slightly higher sleepiness), and this may have increased the probability of responding to light. Regarding the personal health scores, there were significant differences between the two groups for BP, VT, GH, and RE. Notably, the BP and VT scores were significantly lower in group 2 compared to group 1, whereas the GH and RE scores were significantly higher in group 2 compared to group 1.
Limitations of the study
The relationship between light and health is often, as also done in this study, determined by measuring illuminance levels or correlated colour temperatures [17]. Illuminance levels are often reported in the form of horizontally measured values at desk level or vertically measured values at eye level. Lighting designs typically aim for recommended values for E_hor as this parameter is included in standards [28]. In contrast, the amount and type of light entering human eyes is relevant since this light causes the light-related health effects. This amount is often expressed as the vertical illuminance measured at eye height. Khademagha et al. proposed a theoretical framework to integrate the non-visual effects of light (e.g., human health) into lighting designs [29]. They identified three luminous (spectrum, quantity, directionality) and three temporal (timing, duration, history) light factors to be relevant for triggering NIF effects. A limitation of the current study is that E_hor, as applied in this study, only covers the quantity light factor. In addition, the non-obtrusive method (LBE) [11,12] was applied to estimate E_hor at every participant's workplace. This method consists of location-bound measurements (at workplaces) and does not include location changes and the corresponding light exposures (duration of light exposure and light history) for each office worker. Rea et al. [30] mentioned that duration of the light exposure is one of the aspects of lighting conditions which support the circadian system functions (e.g., human health) in addition to the visual system functions. In order to measure the light exposure per participant, the exact location and viewing direction of each participant is required in addition to continuous measurements throughout the entire office building (not only at the workplaces). Another method to measure an individual's light exposure is by using person-bound measurement devices. These devices, however, bring along practical and comfort issues [31] as well as certain measurement inaccuracies [32]. In order to be as unobtrusive as possible for the participants, the LBE method was applied in this study. Finally, all health-related variables were subjectively measured.
Individuals' sleep quality, functional health and wellbeing (SF-36), and alertness were all self-reported and may therefore deviate from objective health measures. Alertness, for example, was subjectively measured by including the KSS in the distributed questionnaires. The KSS was validated by a study of Kaida et al. [33] including sixteen female participants. The number of participants as well as the user characteristics may be questioned for correct validation. It is debatable how many participants are required to eliminate the potential disinterest of participants completing the questionnaire. In addition, it is uncertain how large a difference on the KSS needs to be in order to be relevant, for example, for human health or employees' work performance. The potential relationship between lighting conditions and subjective alertness may be influenced by the circadian rhythm of subjective alertness as well. Regardless of varying lighting conditions, subjective alertness was already proven to be influenced by time of the day [23]. This diurnal variation of subjective alertness was not included in this research.
Recommendations for further research
Based on the limitations of this study and the implications for theory and practice, several recommendations for further research were determined. The differences between the groups with and without a significant initial correlation between E_hor and SA may be questionable because of the limited sample size (n = 40 in group 1 and n = 6 in group 2). Further research needs to include more participants. A limitation potentially caused by this limited sample size is the absence of normality in the data. Therefore, the data analysis in this study was mostly performed based on correlation coefficients. The drawback of a correlation coefficient is that the direction of the relationship is uncertain. In this study, it is uncertain whether the health scores influenced participants' SA or SA influenced the health scores. Changes in lighting conditions may (indirectly via SA) have impacted human health. Aries et al. [34] also mentioned that physical conditions at work influence home life. The small differences between the two groups may also be explained by the included (and excluded) variables (both environmental and personal). Further research should include light-dependent user characteristics such as light sensitivity, sensitivity to seasonal depression, chronotype, sleep-wake rhythms, and activity patterns. Maierova et al. [28] found, for example, significant differences in SA between morning chronotypes and evening chronotypes. Although both chronotypes were more alert in the bright light condition compared to the dim light condition, these significant differences in SA may be of relevance while investigating the relationship between E_hor and SA. In addition, the environmental physical aspects light, air temperature, and relative humidity were included in this study. Al Horr et al. discuss eight physical factors (i.e., indoor air quality and ventilation, thermal comfort, lighting and daylighting, noise and acoustics, office layout, biophilia and views, look and feel, and location and amenities) which affect occupant satisfaction and productivity in an office environment [35]. It is recommended to include these personal and environmental factors in further research investigating the relation between light and alertness.
Adding the two above-mentioned recommendations to further research may explain why certain individuals respond to light and why others do not.
Conclusions
This study investigated ambiguities regarding the relationship between E_hor and SA based on findings from a Dutch field study. Multiple confounders (general, environmental, and personal variables) were identified, suggesting they should be taken into account when investigating the relationship between office lighting and human health. In addition, an initial relationship (excluding confounders) between E_hor and SA was established for six participants out of the total 46. Differences between the groups with and without the significant initial correlation between E_hor and SA did not explain why certain individuals respond to changes in the lit environment and others do not. The current study demonstrated discrepancies between this field study and previously executed laboratory studies. The benefit of performing a field study is that the results of tests in controlled environments (laboratory studies) can be validated in a real office environment, and this leads to realistic results. This study highlights the importance of validating laboratory study results in field studies. Further research should incorporate a larger sample size and additional potential confounders for the relationship between E_hor and SA. Further research including these recommendations may explain individual variability in the response to light.
recognized for participation in this study. Dr. Kelly A Mulder is recognized for her contributions in the language check.
Analysis of two-phase air-water annular flow in U-bends
This paper presents an experimental and numerical study of gas-liquid annular flow in horizontal 180° U-bends. The paper aims to study the effect of bend curvature radius and superficial gas velocity on the liquid film's behavior and annular flow characteristics. The study is divided into three sections. The first section corresponds to the experimental methodology and results. The second section comprises the validation of the computational fluid dynamics (CFD) model with the experimental results. Finally, the last section presents the CFD estimation of additional variables that cannot be acquired with the existing experimental setup. The experimental results provide an initial understanding of the multiphase mixture obtained using optical techniques (i.e., High-Speed Filming (HSF) analysis). The comparison between the experiments and the numerical simulations is presented, and a reasonable agreement is observed between both approaches. Finally, additional results such as film distribution and rotation before and after the bend are extracted from the CFD simulations.
Introduction
Two-phase gas-liquid flows in pipelines are commonly present in several industries such as oil and gas, nuclear, and chemical. Return bends or U-bends are found in several applications, and their unique configuration affects the behavior of the multiphase mixture. Gravitational, interfacial, and centrifugal forces affect the gas-liquid mixture's behavior as it passes through the bend. The bend's effect on the flow depends on the flow pattern, phase velocities, flow direction (i.e., upward, downward, horizontal), and the bend configuration (i.e., vertical, horizontal, inclined, and curvature radius). Parameters such as liquid film distribution, pressure loss, wall shear stress, and liquid holdup are critical in the study of gas-liquid annular flow. Forcing annular flow through a 180° bend may lead to operational issues such as secondary flow, flow separation, pressure pulsation, and pipe drying. In turn, these can lead to pipeline integrity problems such as burnout, corrosion, or tube failure. In horizontal pipes, gravity causes asymmetry in the flow pattern distribution, especially in annular flow, where the liquid film accumulates at the bottom of the tube. This asymmetry becomes more relevant when this flow pattern occurs in a bend, due to centrifugal forces. Therefore, this asymmetry and the curvature's influence are essential to describe the annular flow pattern correctly [1]. There are four mechanisms present in annular flow: (1) dispersion of liquid drops in the gas core by surface tension force, (2) dispersion of the liquid film by secondary gas flow, (3) dispersion of the film by the liquid waves, and (4) liquid drop entrainment in the gas core [2,3]. The effect of the bend on these mechanisms is of particular interest [4]. An extensive review of the available literature on the subject reveals that, despite the extensive research conducted on annular flow through U-bends, existing studies do not consider the combined effect of superficial gas velocity and bend curvature radius, especially the effect that it may have on the liquid film distribution up- and downstream of the bend. Studies such as Abdulkadir et al. [1], Bandyopadhyay et al. [5], or Ribeiro et al. [6], among others, studied annular flow in bends from an experimental approach.
The studies focus mainly on the development of the flow downstream of the separator, the pressure drop through the bend, and the analysis of liquid entrainment in the gas core. Some other studies, such as Abdulkadir [7] or Ghosh et al. [8], performed numerical studies, where parameters such as erosion due to solid transport, pressure drop, or liquid velocity profiles were analyzed and studied. A summary of some of the relevant papers reviewed involving annular flow in U-bends is presented in Table 1. Studies comparing experimental and numerical simulations of annular two-phase flow in U-bends are limited, as observed in the table. Furthermore, only a handful explore the effect that the curvature has on the configuration of annular flow. Despite the comprehensive studies on gas-liquid annular flow in bends presented in the table above, there are no studies where the bend's effect on the liquid film is studied. Therefore, this work aims to study a horizontal U-bend's effect on the liquid film's behavior in annular gas-liquid two-phase flow. Specifically, it analyzes the effect of bend curvature radius and gas velocity on the liquid film's behavior before, after, and at the bend. The experimental results provide an initial approximation and serve as a validation technique for the CFD model. Then, once validated, additional variables of interest are extracted and analyzed from the CFD simulations.
Methodology
This section describes the experimental and numerical methodology followed in this study.
Experimental setup
The Universidad de Los Andes low-pressure flow-in-bends facility was designed, constructed, and used in this study. The facility is divided into three sections: fluids delivery system, test section, and return and separation section. A schematic of the facility is shown in Figure 1. The measurements were performed using optical techniques. The facility comprises four zones: intake, mixing, test section, and separation and return (Figure 1). City of Bogotá tap water (density and viscosity of 997.56 kg·m⁻³ and 8.88×10⁻⁴ kg·m⁻¹·s⁻¹, respectively) and compressed filtered air (density of 1.18 kg·m⁻³ and viscosity of 1.8×10⁻⁵ kg·m⁻¹·s⁻¹) were used as testing fluids. The fluids delivery system corresponds to the preparation and metering of the water and air. A 690-kPa (100-psi) air supply line was used for the air. The flow is filtered to avoid solid particles or humidity in the stream. The air supply can reach flows of up to 1.66×10⁻³ m³/s (100 LPM), with a precision of 8.33×10⁻⁶ m³/s (0.5 SLPM). The air is metered and controlled using a standard Dwyer® rotameter. Following the flow meter, a check valve prevents reverse flow of the liquid into the rotameter. A centrifugal 0.5-HP pump was used to send the water from a 0.2-m³ storage tank into the mixing and test section. For safety reasons, a check valve was located between the storage tank and the pump. Once the fluid exits the pump, it passes through a gate valve, allowing manual control of the flow, and is then sent to a Hedland® flowmeter where the maximum flow rate that can be achieved is 2.5×10⁻⁴ m³/s (15 LPM), with a precision of 8.33×10⁻⁶ m³/s (0.5 SLPM). The mixing of the fluids is achieved by a static T-shape mixer, which generates the multiphase mixture and diverts it to the test section. The test section is divided into three parts: developing region, bend, and return region. The developing region consists of a straight horizontal 2-m long pipe made of transparent polymethyl methacrylate with a 14-mm inner diameter.
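For orientation, the small sketch below converts the metered volumetric flows quoted above into superficial velocities for the 14-mm test section; the conversion v_s = Q/A is standard, and the example uses the facility maxima rather than the actual test matrix of Table 2.

```python
# Rough orientation only: superficial velocities from the facility's maximum
# metered flow rates and the 14-mm bore; these are not the tested conditions.
import math

D = 0.014                       # pipe inner diameter, m
A = math.pi * D**2 / 4.0        # cross-sectional area, m^2

Q_gas_max = 1.66e-3             # m^3/s (about 100 LPM of air)
Q_liq_max = 2.5e-4              # m^3/s (about 15 LPM of water)

v_sg = Q_gas_max / A            # maximum superficial gas velocity, m/s
v_sl = Q_liq_max / A            # maximum superficial liquid velocity, m/s
print(f"v_sg max = {v_sg:.1f} m/s, v_sl max = {v_sl:.2f} m/s")
```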
An observation zone is located at 1.7 m from the inlet. This observation zone consists of a polymethyl methacrylate box that surrounds the pipe and is filled with water to avoid light diffraction when the high-speed videos are recorded. Additionally, a high-intensity LED light is placed in the background to provide enough illumination in the zone. The high-speed videos are obtained in the visualization zone. The dimensions of the developing region ensure the development of the flow as required [13]. The bend section consists of a 14-mm inner diameter glass U-bend, attached to the developing and return regions by polypropylene couplings (these couplings allow the attachment of different bend configurations). The return section is an additional 1.4-m polymethyl methacrylate pipe after the U-bend, also fitted with an observation box. The second box is located at 0.45 m from the bend. The return pipe takes the multiphase mixture to the last section of the facility (i.e., the separation and liquid return). A degasification atmospheric tank is used to separate the gas from the liquid and recirculate the water to the feed tank using a 0.25-HP electro-submersible pump that connects both tanks. As previously mentioned, different U-bend geometries can be attached to the test loop. For this study, three curvature radii of the bend were selected. The R/D ratios (curvature radius to pipe diameter) of the chosen curvatures were 2, 3.5, and 5. The facility allows identifying the flow pattern and quantifying the holdup in a straight section of the pipe (i.e., before and after the bend). This is performed by optical methods, namely high-speed filming (HSF) analysis. A Photron® FASTCAM 100KC camera was used with a Vivitar® 28-210 macro lens to record videos of up to 10900 frames per second. The videos were collected at 2000 frames per second, and 1.5 s of physical time was captured using a video with 3000 frames. These videos were processed and analyzed using the methodology proposed by López et al. [14], where a four-step process is performed. The videos are trimmed to obtain square footage in which only the pipe is captured (the remainder of the visualization box must be trimmed). This process is needed to perform the analysis just of the two-phase mixture and avoid using computational resources analyzing other sections of the visualization box that do not correspond to the multiphase mixture (i.e., pipe walls and outside the pipe). Nevertheless, a maximum resolution of the image is required to differentiate the phases and track the interphase; thus, it is crucial not to reduce the image resolution in the trimming process. The second step is to apply a threshold technique to the video. This step converts each frame into black and white. This method allows the differentiation of the bubbles' contours and tracks the interphase between the phases per frame. With the contours already identified, the next step is to fill the shapes. This process fills the white water shapes and leaves the gas shapes in black, generating a final image in black and white. A graphical description of the four steps is presented in Figure 2. For a further description of the process, refer to López et al. [14]. The plane where the videos are recorded corresponds to the center plane of the pipeline. Therefore, the percentage of white pixels obtained during the video processing corresponds to the liquid film's height at the pipeline's center plane.
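A minimal sketch of this per-frame processing is given below: the frame is binarised with a threshold, the white-pixel fraction is taken as the film height, and a single circular-segment relation converts the lower film height into a holdup. The threshold value, the array shapes and the single-segment conversion are simplifying assumptions; the study itself uses the double-circle approximation of Chen et al. described next.

```python
# Illustrative per-frame analysis; not the authors' exact implementation.
import numpy as np

def film_height_fraction(frame, threshold=128):
    """Fraction of 'liquid' (white) pixels per image column of a grayscale frame."""
    binary = frame >= threshold            # True where the pixel is classified as liquid
    return binary.mean(axis=0)             # column-wise white-pixel fraction

def holdup_from_lower_film(h_f, D):
    """Liquid holdup for a flat lower film of height h_f in a pipe of diameter D."""
    R = D / 2.0
    theta = 2.0 * np.arccos(1.0 - h_f / R)       # wetted central angle of the segment
    return (theta - np.sin(theta)) / (2.0 * np.pi)

# Hypothetical 8-bit frame: a thin bright (liquid) band along the pipe bottom.
frame = np.zeros((140, 400), dtype=np.uint8)
frame[-20:, :] = 255                              # lower film of roughly 2 mm in a 14-mm bore

h_frac = film_height_fraction(frame).mean()       # mean film fraction over the frame
h_f = h_frac * 0.014                              # film height in metres for the 14-mm pipe
print(f"film height = {h_f*1000:.1f} mm, holdup = {holdup_from_lower_film(h_f, 0.014):.3f}")
```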
To convert this value into liquid holdup, an approximation similar to the one proposed by Chen et al. [15] was followed. The methodology is based on a double-circle approximation to estimate the wetted wall fraction and liquid holdup. A schematic of the proposed approximation is presented in Figure 3. The values for the height of the lower liquid film (h′_f) and the upper section of the film (h″_f) are obtained from the video analysis in pixels, which are then converted to meters. The liquid holdup (H_L) can then be calculated straightforwardly from geometrical relationships. Table 2 presents the matrix of experiments tested. A single superficial liquid velocity and two superficial gas velocities were tested. This combination was tested for three different curvature ratios. The combination of studied parameters allows a sensitivity analysis of the phenomenon and the determination of the bend's effect on the annular flow. It also presents the terminology of each case, which is used in the presentation of the results.
Numerical model
STAR-CCM+® v13.04 (Siemens) was used for the computational fluid dynamics simulations, following the procedures of López et al. [15], Tkaczyk and Morvan [16], and Pineda-Pérez et al. [17]. Within the simulations, three main steps were followed: pre-processing, processing, and post-processing. Pre-processing consists of defining the geometry, mesh, physics, boundary conditions, and initial conditions for each case. The Computer-Aided Design (CAD) modeler available in STAR-CCM+ was used to create a physical description of the facility. Figure 4a shows the dimensions of the geometry simulated in the facility. The same dimensions as the experimental facility were used in the simulations. This was to ensure that the two-phase mixture was fully developed before the bend and the visualization box. A structured mesh was generated, based on the CAD model, as shown in Figure 4b. This mesh allows a constant cell number in the inlet, outlet, and transversal section of the pipe, resulting in a constant density of cells in the entire domain and, therefore, direct control over cell size and distribution [18]. This orthogonal mesh type has been recommended for the simulation of two-phase flow in pipes [19]. Since this study focuses on annular flow behavior, a refinement near the wall was carried out to correctly resolve the boundary layer and allow a proper wall treatment by the turbulence model [20]. The refinement was achieved by applying a hyperbolic progression from the wall into the pipe's core, as seen in the previous figure. Finally, the transversal mesh is extruded in the longitudinal direction to complete the meshing process. It is worth mentioning that the complete test section of the experimental facility was modeled in the software's CAD tool and correspondingly meshed. As for the physics models used, the Volume of Fluid (VOF) multiphase model with a High-Resolution Interface Capturing (HRIC) method was selected to simulate the liquid-gas mixture. This model allows the gas-liquid interface to be tracked using the same transport model for each phase and includes an additional transport equation for each phase's volume fraction. The mixture's properties are calculated from each phase's contribution, taking into account each volume fraction [7,20].
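As a brief illustration of this volume-fraction weighting, the sketch below blends the phase properties quoted for the facility using an assumed gas fraction per cell; the cell values are placeholders.

```python
# Minimal VOF-style mixture-property weighting; alpha is the local gas volume fraction.
import numpy as np

rho_gas, rho_liq = 1.18, 997.56          # kg/m^3, as quoted for the test fluids
mu_gas, mu_liq = 1.8e-5, 8.88e-4         # Pa.s

alpha = np.array([1.0, 0.8, 0.2, 0.0])   # assumed gas fraction in four example cells

rho_mix = alpha * rho_gas + (1.0 - alpha) * rho_liq   # volume-weighted mixture density
mu_mix = alpha * mu_gas + (1.0 - alpha) * mu_liq      # volume-weighted mixture viscosity
print(rho_mix, mu_mix)
```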
The HRIC method is designed to mimic the convective transport of immiscible fluids (e.g., water and air), resulting in a suitable scheme for tracking sharp interfaces under significant spatial variations of phase volume fractions. The VOF with the HRIC method uses a second-order downwind scheme that accurately captures interfaces that are perpendicular to the flow direction. However, if the two-phase interface is parallel to the flow direction (as in this case), the HRIC scheme tends to wrinkle it and align it with the mesh lines [21]. This promotes convergence by keeping mesh cells filled with only one phase and placing the interface at the cell boundaries. To avoid the misalignment of the interface, a correction can be made to the scheme. This correction is called the interface angle factor. This angle correction is based on the angle formed between the interface and the cell-face surface vector. If the angle factor tends to zero, no correction is performed. Since the free surface is not smooth and does not follow the grid lines, a larger value of 0.2 was used [22]. The gas phase was selected as the primary phase and the liquid as the secondary phase; this is necessary to compute the phase interaction driven by tension forces. A parameter of importance in the VOF modeling is the sharpening factor, which reduces the numerical diffusion in the domain by adding a term in the transport equation [18,22]. This factor varies from 0.0 to 1.0, where 0.0 means no reduction of the numerical diffusion. Although the 0.0 value could obtain excellent results using the standard discretization scheme, a sharper interface is obtained with higher values of the sharpening factor. Pineda-Pérez et al. [17] commented that a sharpening factor of 1 allows a better definition of the flow patterns since there is a better definition of the disperse phase (the liquid film). Additionally, since an annular flow experiences a sharp interface between the gas core and the liquid film, a value of 1 was selected for the simulations. A segregated model was applied to solve the flow equations, using the second-order upwind convection scheme. Since the two-phase phenomenon is time-dependent, an implicit unsteady study was performed. The liquid was modeled with constant density, and the gas was modeled as a real gas. The remaining fluid properties (e.g., viscosities and surface tension) were kept constant. The κ-ω SST (shear stress transport) model was selected to model the turbulence in the system. This is a two-equation model that solves for the turbulent kinetic energy and its dissipation. One of the reported advantages of the κ-ω model over other turbulence models is its improved performance for boundary layers under adverse pressure gradients. However, possibly the most significant advantage is that it may be applied throughout the boundary layer, including the viscous-dominated region, without further modification [23]. Nevertheless, the model's disadvantage is that boundary layer computations are sensitive to the free-stream values. This means an extreme sensitivity to inlet boundary conditions for internal flows. Simulating the complete developing-region pipe and using the Shear Stress Transport correction can address this issue. The transformed equation looks similar to the standard model but adds a non-conservative cross-diffusion term. This additional term potentially makes this model give results identical to the κ-ε model.
Then, Menter [24] suggested using a blending function that includes the cross-diffusion term far from walls (the κ-ε model in the far field) but not near the wall (κ-ω in the boundary layer), and thus takes advantage of both methods where they perform the best. The inlet of the phases was set up as a velocity inlet where the phases are segregated, as observed in Figure 5. The phase velocities are set to match the volumetric flow rates of each experimental condition. This was performed to match the experimental conditions. Since the whole developing region was simulated, enough pipe length is available to ensure flow development. The outlet boundary was modeled as a pressure outlet, emulating the degasification tank in the facility. The Convective Courant Number was used to calculate the time step of each of the simulations. It is defined as CFL = u·Δt/Δx. This parameter determines the simulation time step (Δt) based on the cell size of the mesh (Δx) and the phase velocity (u). If the Courant number is 1, a particle of fluid will move one cell per time step. If it is larger, the fluid will move more than one cell in a single time step, and therefore an assumption of values must be made in the intermediate cells. Consequently, a Courant number below 1 is required to model the phenomenon correctly. It is suggested to use a low CFL number when the HRIC model is used (e.g., below 0.35) [18]. This is to allow enough time for the interface within each grid cell to be adequately resolved. Thus, the maximum value of the CFL for all the simulations was fixed at 0.25. The time step for each simulation was calculated using twice the maximum velocity between the phases. Since the gas velocity was always higher than the liquid velocity, the time step was calculated using twice the superficial gas velocity and the axial spacing between cells of the mesh (2 mm). The resulting time step varied between 1.53 μs and 2.29 μs. Finally, five iterations per time step were defined. The processing phase of the model is the fundamental computational part of the methodology. The previously selected equations are solved using the finite volume method. The initial condition for each simulation is a pipe full of non-moving air; this is because the primary phase is the air, and therefore the computational cost of the initial steps focuses on the pipe inlet. The stopping criterion is the time it takes the slower fluid (water) to pass through the bend twice. This provided flow development and avoided divergence errors. Finally, post-processing is the procedure in which the desired results are extracted for each simulation. For this study, variables such as flow pattern, liquid holdup at different locations, liquid film distribution in the bend, and velocity profiles were obtained.
Results and discussion
This section presents and discusses the experimental and CFD results. First, the experimental results are presented. Next, the CFD results are presented, where a grid independence study is initially performed, and then the validation of the CFD results with the experimental ones is presented. Finally, additional CFD results are presented; these results cannot be obtained with the current experimental setup. Table 3 presents the liquid holdup results obtained by the video analysis methodology. The average holdup and the standard deviation of the trend in time are reported.
Experimental results and discussion
A notable result extracted from the experiments is the effect of the bend on the liquid holdup. As the curvature radius increases, the liquid holdup decreases. This reduction in the mean value suggests that the gas travels at higher velocities at a lower curvature radius. Therefore, the gas goes through the bend faster than the liquid and causes an accumulation of the liquid before the bend. This leads to higher liquid holdup. As the curvature increases, the liquid holdup decreases as the liquid passes through the bend more rapidly and with more ease. These results can also be compared with the prediction of annular flow in fully developed pipe flow. Two additional experimental points were generated where no bend was included in the experimental setup. This means that the bend and return region of the facility's test section were removed, and the two-phase mixture was diverted directly into the degasification tank and liquid return. This was performed to obtain experimental cases without a bend and validate the measuring technique (i.e., HSF analysis) against available gas-liquid annular flow models. The validation is performed with the Zhang et al. [25] unified model for gas-liquid mixtures in pipelines. The model is based on the hydrodynamics of slug flow and was developed to predict flow pattern transitions, pressure gradient, liquid holdup, and slug characteristics in gas-liquid pipe flow at all inclination angles from −90° to 90° from horizontal. Since the model has been widely validated in the literature, it is used to validate the developed measuring technique and calculation approach. The comparison between the experimental results and the Zhang et al. [25] model predictions is presented in Figure 6. An initial observation of the figure is that the experimental data disagree with the Zhang et al. model's predictions. This discrepancy is attributed to the inclusion of the bend. The model slightly underpredicts the liquid holdup for the experimental cases with no bend (shown as purple markers). However, the measured holdup lies within a 10% error of the predicted value. This means that the presented experimental technique correctly predicts the behavior of liquid holdup in pipes. Once the bend is included in the facility, the experimental measurement seems to overpredict the liquid holdup. This difference is attributed to the effect that the bend has on the flow upstream of it. It is worth mentioning that the holdup value is measured upstream of the bend in the developing region, regardless of the curvature tested. Then, the increase in the measured liquid holdup means an accumulation of liquid upstream of the bend. It can also be observed that the smaller the bend, the higher the value of holdup, meaning the higher the accumulation of liquid before the bend. The experimental results are also presented as normalized Probability Density Functions (PDFs). Observing the liquid holdup results as a PDF removes the noise present when observing the results as a time trend. In this way, an explicit comparison of the liquid holdup's dynamic behavior can be obtained between the cases. The PDFs are presented in Figure 7. The bend and gas flow rate effects can also be explained by analyzing centrifugal forces in the bend. At a tighter curvature radius, a higher liquid accumulation before the bend was observed. Therefore, more gas mass flux is in the bend. This led the gas to experience higher centrifugal forces.
This accelerates the gas through the bend, causing an accumulation of liquid before it. As the curvature increases, the effect of centrifugal forces on the gas phase decreases; thus, less accumulation of the liquid film is observed. A modified gas Froude number that relates the inertial and centrifugal forces [7] can be applied to quantify the effect of the bend. When Fr_g > 1, the centrifugal force of the gas phase is greater than the inertial force. In other words, the gas phase will go faster through the bend, and therefore a higher liquid accumulation before the bend will be observed. Table 4 presents the results of the modified Froude number for the studied cases. The modified gas Froude number confirms that the tighter the bend, the higher the centrifugal force experienced by the gas. This force balance can also be used to analyze the distribution of the liquid film in the bend. When Fr_g > 1, the gas phase will be on the inside of the bend since the centrifugal force is higher than the inertial one. The fast-moving gas will push the liquid film to the outside of the bend. When Fr_g < 1, the gas's inertial force is greater than the centrifugal force; hence, the gas will attempt to stay on its current trajectory and thus remains on the outside of the bend. The gas will push the liquid film onto the inner section of the bend. Finally, if Fr_g = 1, a balance of the forces is present in the system. For a constant curvature radius, a higher superficial gas velocity will reduce the Froude number since the inertial force will increase. This is observed in the Froude numbers of the experimental results. This also means that, for a given curvature radius, there is a critical superficial gas velocity that will move the liquid film from the outside to the inside of the bend. Likewise, for a given superficial gas velocity, there is a curvature radius that will move the liquid film from the outside to the inside of the bend. Table 5 presents the critical superficial gas velocity for each of the tested bends. Due to facility limitations, the critical gas velocities could not be reached. These conditions can be addressed in future works. Kalpakli [26] claimed that one of the uses of U-bends is as a flow conditioner, as the bend seems to influence the mixture's behavior, especially the behavior of the liquid film. The obtained results confirm the conclusions presented by that author.
Grid sensitivity study
To select the mesh size to be used in the CFD simulations, the effects of the discretization in the cross-sectional direction and in the axial direction were studied in terms of the available experimental variables. Simulations were carried out changing the number of divisions in both directions while maintaining the cell size ratio. Figure 8 shows the effect of the longitudinal divisions on the average liquid holdup and the total simulation time. Once the number of elements in the longitudinal direction was chosen, four cross-section divisions were tested (i.e., 185, 448, 806, and 1060) to assess the effect of the cross-sectional discretization. Figure 9 shows the cross-sectional divisions' impact on the average liquid holdup and the total simulation time. It can be observed in the figure that there is no clear trend in the results. However, a medium-size mesh (448 divisions) was selected, since a larger number of divisions produces a considerably longer computational time.
CFD results and comparison
The experimental results were compared with the simulations' outputs to validate the developed CFD model. The CFD data was post-processed to match the experimental output in order to perform a valid comparison.
The liquid holdup value was obtained for all simulations on the pipe's cross-sectional plane at the coordinates where the visualization box is located in the experimental facility. A value of liquid holdup is obtained at every time step of the simulation, resulting in a time-profile data set. The data corresponding to the last 1.5 s of the simulations was selected for the comparison. This was performed to ensure flow development within the simulation and to match the recording time of the experiments. Figure 10 and Table 6 present the comparison between the experimental results and the CFD results. Figure 10a presents the comparison of the liquid holdup between the two techniques. Figure 10b presents the simulation error as a function of the curvature radius for each superficial gas velocity, and Figure 10c shows the simulation error as a function of the superficial gas velocity for each curvature radius. The model seems to underpredict the liquid holdup at tighter bends and overpredict it as the bend gets larger. This systematic error can be attributed to the HSF analysis, as some elements are missing when the annular flow pattern is reduced from a 3D to a 2D representation (i.e., dispersed droplets in the gas flow and a smooth interface between the liquid film and the gas flow). It is also observed that the error at the higher gas velocity is lower than at the lower gas velocity. The presented comparison between the two techniques reveals that the CFD can accurately predict the average liquid holdup despite discrepancies in the results. Overall, the performance of the CFD code can be estimated by calculating the average percentage relative error (ε1), the average absolute percentage relative error (ε2), and the standard deviation of the relative error (ε3), where N is the number of simulations and e_ri is the relative error of each case. The overall performance of the CFD code is 3.11, 9.59, and 13.02 for ε1, ε2, and ε3, respectively. The low values of the percentage relative error and the absolute percentage relative error confirm that the CFD model correctly predicts the annular flow behavior. As with the model's overall performance, the error analysis can be performed for each variable changed in the test conditions (i.e., superficial gas velocity and curvature radius). The results are presented in Table 7. The error distribution shows that the error in the prediction increases as the curvature radius increases (also observed in Figure 10a). The standard deviation of the error appears to be high for all cases due to the small number of experiments at each condition. Additionally, as with the experimental findings, the holdup results are shown as a PDF of the data. Figure 11 presents the comparison between the experimental and the numerical results. In all cases, it can be observed that the CFD cases present a higher dispersion of the data, and therefore the normal distributions seem broader than for the experimental cases. This difference is due to the way the values of holdup are obtained in both techniques. In CFD, the liquid holdup was obtained on a cross-sectional plane of the pipe. Consequently, the phenomenon's unsteady behavior is captured as it passes through that transversal section, meaning that an area approximation of the holdup is used. For the HSF cases, the analysis is performed on a volumetric section of the pipe. The volumetric approximation reduces the signal's noise, as the immediacy of the phenomenon captured by the area approximation (waves, ripples, etc.)
is offset by the rest of the volume analyzed. For example, the CFD noise generated by a wave is captured instantaneously and stored as an output. The same wave observed in HSF does not have the same effect on the output signal, since the total liquid holdup is calculated over a broader section of the pipe, where the wave's impact has not yet arrived. Despite this, a fair agreement between the two techniques is observed for the cases studied, even with these differences (Figure 11: experimental vs. CFD liquid holdup PDF results). Finally, a visual comparison of the distribution of the liquid film in the bend is performed. In the experimental facility, high-quality images of the mixture passing through the bend were captured using a Canon Rebel T3i EOS 600D camera with a Canon EF-S 18-135 mm lens. The camera was placed below the bend facing up. The direction of the flow observed in the pictures is presented in Figure 12. In CFD, the same view as the one obtained in the facility was replicated. Figure 13 presents the comparison between the liquid film distributions obtained experimentally and numerically. As seen in the figure, regardless of the gas velocity or the bend's curvature radius, the liquid film's precise distribution cannot be determined by experimental observation. It is only possible to see that the liquid film tends to be located on the outside of the bend for all the cases. This tendency is validated by the CFD results where, for all cases, the liquid film is located on the outside of the bend. Comparing the experimental results and the numerical results demonstrates that the CFD code correctly describes annular flow behavior through U-bends under the conditions presented experimentally. Based on this, additional results were obtained from the simulations. Therefore, different variables of interest can be extracted from the numerical code, namely the liquid film distribution before and after the bend and the liquid film's velocity in the bend. As mentioned before, an accumulation of the liquid is observed before the bend. To further validate the observation, the liquid film's behavior upstream and downstream of the bend can be analyzed. Table 8 presents the CFD average liquid holdup (in the final 2.0 s of the simulation) before the bend and the percentage change observed after the bend. If the change is positive, a higher value of holdup was observed after the bend; if it is negative, the liquid holdup decreased after the bend. The data is extracted 4D and 2D away from the bend (both upstream and downstream) and at the bend's inlet and outlet. Figure 14 shows a visual description of this behavior. The first phenomenon observed in the table is the mixture's unsteady behavior: the values of holdup change as the mixture gets closer to the bend, and the values do not match the mean values of liquid in the visualization box (located further upstream of the bend). It also explains the accumulation of liquid observed before the bend. For all cases, the holdup after the bend (i.e., at the bend outlet and two and four diameters after the bend) presents a lower value than the holdup values before the bend (i.e., at the bend inlet and two and four diameters before the bend). This is presented as a negative change in the holdup percentage. It is also observed that the reduction in holdup can be up to 50% for the smaller bends. This evidences the significant effect of the bend on the mixture.
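A small sketch of how such percentage changes can be formed from plane-averaged holdup time series is given below; the sampling window and the synthetic series are placeholders, not the simulation data.

```python
# Illustrative calculation of the holdup change across the bend from two
# plane-extracted time series; negative values mean the holdup drops downstream.
import numpy as np

def percent_change(h_upstream, h_downstream):
    """Percentage change of the time-averaged holdup across the bend."""
    h_up, h_down = np.mean(h_upstream), np.mean(h_downstream)
    return 100.0 * (h_down - h_up) / h_up

# Hypothetical holdup time series at planes 2D before and 2D after the bend.
rng = np.random.default_rng(1)
h_before = 0.12 + 0.02 * rng.standard_normal(500)
h_after = 0.07 + 0.02 * rng.standard_normal(500)
print(f"change across the bend = {percent_change(h_before, h_after):.0f}%")
```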
Figure 14 presents the water fraction on the outside of the pipe both before and after the bend. This visual representation of the liquid film does not allow the reduction of the liquid film after the bend to be seen; there appears to be more liquid in the pipe's outlet section than in the inlet pipe, although the liquid holdup values evidence the opposite. This is due to an apparent rotation of the liquid film in and after the bend. The rotation is caused by the secondary gas flow and the effect of the bend's centrifugal force. To confirm this, an approximation to the liquid velocity was obtained. Since in VOF both phases share the velocity field, the liquid velocity can be approximated by dividing the fluid domain into two separate sections: gas only (where the holdup is above 0.5) and liquid only (holdup below 0.5). This is done to validate the observed rotation of the liquid film. In the gas-only section, the velocity is defined as 0, and in the liquid-only section, the velocity profile is left unmodified. This approximation allows obtaining the velocity of the liquid in any section of the pipe or curve. An analysis of the tangential component of the velocity was made in different tube sections to validate the liquid film's rotation. Figure 15 presents the tangential velocity profiles of the cases shown in Figure 14. It is worth mentioning that the color-map behavior of the liquid film is similar for the remaining four cases; thus, the validation of the rotation for those two cases can be extrapolated to the remaining ones. Figure 15 presents the tangential velocity contours of the liquid in 7 different sections of the pipe, 5 of them in the bend (from the bend inlet, every π/4), and at 2 and 4 diameters after the bend, in a normal view aligned with the flow direction. If the velocity profile is brown, the film is rotating counterclockwise; if the velocity profile is blue, the film is rotating clockwise; and if it is transparent or nearly transparent, the tangential velocity is near zero. As the figure intends to provide evidence of the liquid film's rotation, the magnitude of the velocity is not a critical parameter and, as such, was not considered. At the bend entrance, the liquid film is located at the lower section of the pipe, and its core is not rotating, as it appears nearly transparent. The interface between gas and liquid seems to be rotating due to the secondary gas flow. However, it is interesting to observe that once the film enters the bend (i.e., π/4 and beyond), the film moves to the outside section of the bend and rotates by climbing the outer wall of the pipe. Additionally, it is observed that once the film gets to the upper section of the pipe, it falls due to gravity and starts to rotate in the opposite direction. Once the mixture leaves the bend, the film's rotation is still observed, even four diameters after the bend. For the small curvature radius (Figure 15a), at four diameters after the bend, it seems that the film performed a full rotation around the pipe, as some liquid is observed on the inner section of the pipe and it still has a positive tangential velocity. This shows the effect of the secondary flow and centrifugal force on the behavior of the annular flow. Future studies must be performed to examine the threshold for detecting the bend curvature's effect on the mixture; this could be accomplished by analyzing additional pipe sections downstream of the bend.
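The phase-splitting approximation described above can be sketched as a simple post-processing mask, as below; the field names, array shapes and data are placeholders, and only the 0.5 cut-off follows the text.

```python
# Rough sketch of the phase-splitting post-processing: zero the shared VOF
# velocity field wherever the cell is gas-dominated, so only the liquid-film
# velocity (e.g., its tangential component) remains for visualisation.
import numpy as np

def liquid_velocity(velocity, liquid_fraction, cutoff=0.5):
    """Velocity field with cells below the liquid-fraction cutoff set to zero."""
    mask = liquid_fraction >= cutoff
    return np.where(mask[..., None], velocity, 0.0)

# Hypothetical cross-section: 3-component velocity and liquid fraction per cell.
rng = np.random.default_rng(2)
vel = rng.normal(size=(64, 64, 3))            # m/s, (u, v, w) per cell
alpha_liq = rng.uniform(0.0, 1.0, (64, 64))   # liquid volume fraction per cell

vel_liq = liquid_velocity(vel, alpha_liq)
tangential = vel_liq[..., 1]                  # in-plane component chosen for the contours
print(tangential.shape, float(np.abs(tangential).max()))
```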
Conclusions
U-bends are standard and common accessories in gas processing and gas transportation facilities and in industrial applications. This makes their study highly relevant, especially in annular flow cases, to understand the behavior of this intricate flow pattern and the consequences of the return bend on it. This study contemplates an experimental and computational approach to the topic and aimed to validate and complement all the experiments with CFD simulations. The optical measurement technique used in this study allowed quantifying and studying a time trend of liquid holdup in straight sections of pipes, allowing researchers to identify and analyze the effect of downstream accessories in the tube, such as bends. These experimental measurements were used as a validation for the CFD results. The experimental measurements are based on a volumetric approximation of the liquid holdup, while the CFD results are based on an area approximation; part of the observed discrepancy between the two approaches is attributed to this difference. It was possible to develop a CFD methodology that correctly predicts annular flow behavior in straight pipes and bends under the studied conditions. The method can capture, quantify, and identify crucial multiphase factors such as the liquid film distribution in the bend and the straight pipe, and the film's behavior and rotation induced by the bend. Useful outputs, such as the velocity profiles in bends and the characterization of accessories, were also studied. It was possible to demonstrate the bend's effect on the mixture, especially on the liquid film before the bend. An accumulation of liquid was observed upstream of the bend, especially at tighter curvatures. The behavior of the film after the bend was also analyzed. At a lower curvature radius, a more significant reduction in the holdup was observed after the bend, with a 50% reduction in the holdup value in some cases. Hence, it can be concluded that the bend can work as a flow conditioner.
Declarations
Author contribution statement
J. López: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
N. Ratkovich & E. Pereyra: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Diagnosis of Customer Expectations and Perceptions in Restaurants in the State of Tlaxcala
The restaurant industry is one of the main productive sectors in Mexico and generates an important economic benefit for the country; Mexican food in particular is recognized throughout the world for its great value and historical contribution. Given its importance, it is essential to offer quality customer service. However, improving the Mexican food restaurant sector must start by addressing all the problems that commonly occur in establishments on a day-to-day basis. With the support of a company with home delivery service in the state of Tlaxcala, a significant increase in customer complaints was detected, such as: not delivering as promised, slow service, poor attention from employees, and even some complaints about food hygiene. The objective of the research is to analyze the perception and expectations of customers and identify the factors that impact the quality of the service received in Mexican food restaurants in the central area of the state of Tlaxcala, taking the Servqual model as a reference. The adapted Servqual instrument was applied to 45 clients to determine their expectations and perceptions; the research was carried out in 3 Mexican food restaurants located in Apizaco, Chiautempan, and Tlaxcala Capital. The results showed that customers are not satisfied with the service they receive from the restaurants, because in the gap analysis all variables are negative, although the difference between expectations and perceptions is not very large, so in general they agree with the service received. The most critical variables, or those with the greatest impact, are responsiveness and empathy.
I. INTRODUCTION
The quality of service is the habit developed and practiced by an organization to interpret the needs and expectations of its clients and to offer, consequently, an accessible, adequate, agile, flexible, appreciable, useful, timely, safe and reliable service, even under unforeseen situations or errors, in such a way that the client feels understood, cared for and attended to personally, with dedication and efficiency, and surprised with greater value than expected, consequently providing higher income and lower costs for the organization (Pizzo, 2013). As Edwards Deming (2013) said, "what is not measured cannot be improved". No company or organization can engage in continuous improvement if it does not have complete knowledge of its areas of opportunity. Mexican food is recognized throughout the world and was named by UNESCO in 2010 as a cultural heritage of humanity; this recognition carries an obligation of conservation and is undoubtedly part of a whole series of initiatives that must be launched now that Mexican cuisine is in great demand throughout the country and enjoys global recognition, which must be preserved and maintained, so strategies are required to increase and/or improve the quality of services. On the other hand, the number of establishments that open each year exceeds the growth in demand for this type of service. Therefore, being a sector in constant growth that generates a significant economic contribution in the country, it must always consider offering a quality service. If the Mexican food restaurant industry is to be improved, one has to start by addressing all the problems that often occur in establishments on a daily basis.
It is therefore necessary to know the level of customer satisfaction, for which the SERVQUAL method is proposed. Tlaxcala's tourism sector is still below the state's potential. In 2006, temporary lodging, food and beverage preparation services, as well as recreational services, accounted for 2.3% of the state's GDP. In 2015, these tourism-related services represented 1.10% of state GDP and registered an increase of 14% compared to the previous year, according to data from INEGI (2014). In 2020, restaurants suffered very low sales, reporting a 50% drop due to the contingency situation, with sales below expectations; the restaurant industry is one of the most affected sectors according to Canirac (National Chamber of the Restaurant and Seasoned Food Industry). Given this situation, customers become more demanding every day in terms of service quality, so everyday complaints to establishments increase. In an interview with the call-center staff of the Food Express mobile application (an app dedicated to home food delivery in the state of Tlaxcala), based on a sample of 167 comments from the application, they report that approximately 4 out of 10 customers per day complain of poor service in restaurants, related to: not complying with what was promised, not having the food listed on the menu, outdated prices, food hygiene, and not respecting the promised hours, among other aspects that generate customer dissatisfaction. They also comment that the complaints concern small and, in some cases, medium restaurants, mostly in the categories of fast food, Mexican food, seafood and Japanese food located in the municipalities of Apizaco and Tlaxcala (Food Express, 2021). For this reason, this research proposes to analyze quality by identifying the expectations and perceptions of customers in the restaurant sector of the state of Tlaxcala, one of the main tourism sectors in Mexico, and thus to obtain the level of customer satisfaction. A prior review of the literature on the subject showed that researchers, for the most part, obtained negative results, that is, clients are not satisfied with the service received. Given the problem addressed here, it is very important to carry out this research to detect areas of opportunity in the restaurant sector of Tlaxcala. The studies that focus on the restaurant industry usually contemplate only one company in the sector. It is also observed that in some cases adaptations were made to the model, or another model was used alongside it, and valuable information was still obtained, which confirms that combining two epistemological approaches is useful. Despite obtaining mostly negative results, the authors refrain from proposing a model to improve quality and take advantage of the areas of opportunity detected, which are considered of utmost importance since they could add more value to entrepreneurs and to the sector in general. The proposed research considers two points that could add more depth of knowledge: the study of a broader restaurant sector and a model of improvements for the sector.

A. Quality

Quality represents a process of continuous improvement, in which all areas of the company seek to satisfy the client's needs or anticipate them, actively participating in the development of products or in the provision of services (Álvarez, 2006).
B. Service

A service is a means of delivering value to clients, facilitating the results that clients want to achieve without them assuming specific costs or risks (Bon, 2008).

C. Quality in the Service

Hoffman and Bateson (2011) define service quality as an attitude formed through the long-term general evaluation of a company's performance, whereas they see customer satisfaction as a specific, short-term measure of a company's operations. The perception of quality is the level of service that the client rates subjectively according to the experience received. The client perceives the service based on what quality means for him or her and to what extent he or she is satisfied. Customer expectations, on the other hand, are the level of service the client expects to receive; this level of expectation is different for each client (Sánchez & Sanchez, 2016). Finally, customer satisfaction results from comparing expectations with perceptions of the actual service encounter (Hoffman and Bateson, 2011).

D. Importance of Quality in Service

The quality of customer service is one of the main points that must be met within every company; regardless of size, structure and the nature of its operations, a company must demonstrate its capacity to perform in this area, since it is the first image given to clients and helps to maintain their preference, and if it deteriorates it can become a threat. However, on many occasions it is handled incorrectly by organizations, affecting both their development and growth; therefore the importance of customer service must first be defined, in order to properly structure the most suitable way to deliver it (Parra, 2013).

E. Models to Measure Service Quality

Quantifying the quality that the customer perceives in a service is not easy; an instrument is needed that helps organizations understand the meaning of value for the client and diagnose whether the activities carried out are aligned with the fulfillment of their needs. Faced with this need, various techniques and methodologies have arisen to measure customer satisfaction (Sánchez & Sanchez, 2016). Table I briefly describes the models that have been used to measure service quality within various organizations over the past 30 years.

F. Objective

To analyze the perceptions and expectations of customers and identify the factors that affect the quality of the service received, taking the Servqual model as a reference, and in this way propose a model to improve the quality of service in Mexican food restaurants in the central area of the state of Tlaxcala.

III. METHODOLOGY

Data collection for the research is carried out through a questionnaire based on the Servqual model proposed and designed by Zeithaml, Parasuraman and Berry (1988), whose purpose is to improve the quality of the service offered by an organization. Servqual consists of 5 dimensions and a total of 22 elements.
However, the model was adapted: two extra variables, Hygiene and Advertising, were added to the instrument, and items were added to the safety variable related to the hygiene measures the restaurant has in place regarding Covid-19. This is due to the changes brought about by the pandemic being experienced around the world, which have become important factors that clients consider when going out to eat at restaurants. The advertising variable is also included because of the technological changes of recent years, especially the shift toward digital marketing, which is also of great importance to clients when acquiring, and of course evaluating, a service. The model used is shown in Table II. The Servqual model was chosen because, according to Vera & Trujillo (2017), it is one of the most widely used and reliable instruments to measure service quality. Furthermore, compared with other models such as SERVPERF, the Grönroos service quality model or the three-component model, Servqual is one of the main sources of information, as it detects quality dimensions in a timely manner, which is why Servqual is one of the most accurate models for service companies to know the level of customer satisfaction, locate areas of opportunity and propose and/or implement improvements to obtain satisfied customers (Monrroy & Urcádiz, 2018). The instrument was adapted to the research needs, and some changes were also made in response to the current needs of the sector.

A. Instrument Validation

The data collection instrument was validated using Cronbach's alpha by applying 35 test surveys in three restaurants in the municipalities of Apizaco, Tlaxcala and Chiautempan. The reliability analysis was performed with SPSS version 21 software and the results are shown in Table III. Cronbach's alpha corresponds to 0.917, which according to Table IV can be interpreted as excellent reliability.

IV. RESULTS

The data collection instrument was applied to 45 people, each of whom answered it twice, once before entering the restaurant (expectations) and once after receiving the service (perceptions), for a total of 90 surveys in 3 different Mexican food restaurants in the state of Tlaxcala. Of the respondents, 36.5% were men and 64.4% were women, aged between 14 and 54 years and from the municipality of Tlaxcala. The Mexican food restaurants under study are shown in Table V. A data matrix was built for each item; the items were then grouped by the variable to be measured. Once the grouping of items by variable was obtained, the data exploration for expectations and perceptions was carried out through the calculation of averages, following Hernandez, Fernandez, & Baptista (2006). The mode and the range of expectations and perceptions are then reported in Tables VI and VII.

A. GAP Analysis

Once the information for each variable from the surveys completed by 45 clients in 3 Mexican food restaurants in the state of Tlaxcala had been analyzed and interpreted, considering home delivery and reservations, the following results were obtained, showing the global averages by variable, as observed in Fig. 1 and Table VIII.
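The calculations described above (grouping items by variable, averaging expectations and perceptions, computing the gap per variable, and checking reliability with Cronbach's alpha) are straightforward to reproduce outside SPSS. The following Python sketch illustrates them on synthetic data; the dimension names follow the adapted instrument, but the item counts and responses are invented for demonstration and are not the study's data.

```python
# Illustrative sketch only (not the study's SPSS workflow): Cronbach's alpha and
# SERVQUAL-style gap scores on made-up Likert responses. Item counts per
# dimension and the synthetic data are assumptions for demonstration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)
n_clients = 45
dimensions = {                      # hypothetical item counts per adapted dimension
    "tangibles": 4, "reliability": 5, "responsiveness": 4, "assurance": 4,
    "empathy": 5, "hygiene": 3, "advertising": 2,
}
columns = [f"{dim}_{i}" for dim, k in dimensions.items() for i in range(k)]

# Each client answers every item twice: expectations (before) and perceptions (after)
expectations = pd.DataFrame(rng.integers(4, 8, (n_clients, len(columns))), columns=columns)
perceptions = pd.DataFrame(rng.integers(3, 8, (n_clients, len(columns))), columns=columns)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

print(f"Cronbach's alpha (perceptions): {cronbach_alpha(perceptions):.3f}")

# GAP analysis: mean perception minus mean expectation per dimension (negative = shortfall)
for dim in dimensions:
    cols = [c for c in columns if c.startswith(dim)]
    gap = perceptions[cols].stack().mean() - expectations[cols].stack().mean()
    print(f"{dim:15s} gap = {gap:+.3f}")
```

With real survey data the same loop yields the global averages and negative gaps reported in Table VIII; only the input matrices change.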
V. CONCLUSION

Finally, it can be concluded that, according to the data analyzed, customers generally agree with the service offered by Mexican food restaurants in the state of Tlaxcala; however, there are gaps with negative global averages that, although not large, are important to address. The hygiene and advertising variables are the least critical, as their gaps are not very significant. The most critical variables are responsiveness and empathy, with differences of -1.1377 and -0.6888, respectively. The improvement model must include strategies that consider the incorporation of restaurants into new technologies such as home delivery applications and social networks, since the instrument indicates that customers consider this of utmost importance. Attention must be paid to improving the security, responsiveness and empathy variables since, based on the gap analysis, they are the most critical, with slightly larger differences, so customers are still not completely satisfied with the service they receive. The level of satisfaction perceived by customers in Mexican food restaurants in the central area of the state of Tlaxcala falls short, since according to the gap analysis the customer does not receive the service they expect. The quality level of the restaurants is also considered somewhat low, because the restaurants do not adequately meet customer expectations; however, the differences and gaps are not very significant, so the quality cannot be considered entirely poor, but it does represent an opportunity to work on the areas of opportunity found in the research.
2022-02-15T16:01:13.780Z
2022-02-10T00:00:00.000
{ "year": 2022, "sha1": "702a78c51b1ebb6e7de455c7742a830e7e6dd3e1", "oa_license": null, "oa_url": "https://ejbmr.org/index.php/ejbmr/article/download/1308/675", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4ed99e9f4bb9db3e1b2b28ed6d3e261075bddb32", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
250130120
pes2o/s2orc
v3-fos-license
Editorial: Enhancing Quality of Life in Ambient Spaces

INTRODUCTION

The focus of this Research Topic was on ways in which digital technologies can be used within ambient spaces to enhance the lives of the people living, working, traveling, or in other ways experiencing those places. This is clearly a very broad Research Topic, inspired by technological developments, including sensor-based devices and ambient displays (e.g., Wisneski et al., 1998), often combined with AI techniques for machine learning about user behaviors (e.g., Jafarinaimi et al., 2005), which are proliferating in a wide variety of physical and social settings (Streitz et al., 2003; Kim et al., 2010). Ambiently available technologies provide numerous possibilities for the design and creation of adaptive spaces in buildings, homes, vehicles, urban spaces, and other forms of human-building interaction (Alavi et al., 2019). These have the potential to make life easier, safer, and more enjoyable for their users, and may be specifically designed for therapeutic or assistive purposes. Both Quality of Life and Ambient Spaces were interpreted broadly by the authors in this Research Topic. Ambient Spaces constitute a wide variety of possible situations in which a person's surrounding environment senses them and responds to their presence and/or state. There is considerable scope for design innovations that bring together components of these two Research Topic areas in the creation of humane places and spaces in improving quality of life through design. There are also a range of ethical and security issues that challenge and constrain this emerging design area.

ACCEPTED PAPERS

Four full-length papers have been included in this Research Topic after peer review. The paper by Margariti et al. aimed at understanding the experiences of remote workers during the application of COVID-19 travel restrictions.
It addresses the lack of a detailed understanding of domestic workplaces in terms of their experiential dimensions and associated challenges to wellbeing. A small-scale study was conducted over a period of 4 weeks of continuous home working. The study reported quantitative and qualitative analyses of occupant experiences, using sensor wristbands, self-reported mood changes, and recording of environmental aspects. Based on their results, the authors discuss the impact of feedback mechanisms in the domestic workplace on the wellbeing and behavior of remote workers. They also outline a design agenda for future use of ambient technologies to support the wellbeing of remote workers. This kind of research is important because remote collaboration is likely to be a significant aspect of future working life, even in times when working remotely is not required as a means of infection control. Adopting a Virtual Reality (VR) approach, Gamberini, Bettelli et al. address another Research Topic of widespread and growing importance, that of the safety and wellbeing of the public at large in the face of natural disasters, focusing particularly on flood situations. Although VR has frequently been used for training and education about how to behave in emergency situations, this study examines the question of how to design VR to provide relevant and effective social and psychological cues to citizens at such times. Multiple stakeholders contributed to the creation of convincing virtual scenarios, leading to the derivation of design requirements and strategies to improve quality of life and psychological wellbeing in risk-exposed citizens. This work is highly relevant as river floods are highly threatening climate events which are likely to become ever more frequent due to climate change. Furthermore, similar techniques should be equally relevant for other disaster situations. The third paper, also by Gamberini, Pluchino et al., looks at how the Internet of Things (IoT) can be deployed to provide non-pharmaceutical interventions for the safety of living environments in a time of social (physical) isolation. The paper presents plans for the SAFE PLACE project, which will exploit cutting-edge IoT systems and Artificial Intelligence to support healthy and safe living environments tailored to living requirements during the COVID-19 epidemic. The outcomes of the project are expected to provide detailed information about how to exploit advanced IoT technologies and Artificial Intelligence to deal with this and future health crises, thereby reducing the impact of those crises on healthcare systems. In the fourth and final paper, Bacchin et al. address the important Research Topic of how living spaces can be designed better to meet the needs, and improve the quality of life, of the millions of people with motor and cognitive disabilities who face hardships in daily life due to the limited accessibility and inclusiveness of their current living spaces. As described by Bacchin et al., the DOMHO project aims to improve this situation by creating a smart home equipped with a series of home automated and intelligent technologies. The paper focuses on an analysis of test participant interactions with the system control application to be deployed in the project, based on video data, interviews, and a questionnaire. The authors anticipate that this preliminary work may lay the foundations for user-centered design of IoT systems that provide supportive living spaces for people with disabilities. 
This important work is expected to break down architectural barriers and improve the social lives of disabled people, helping to reduce social isolation and feelings of loneliness and helplessness. CONCLUSION Potential impacts of designed ambient spaces are large and growing and we hope that this Research Topic will stimulate more research in this important field in future. Human wellbeing is constrained, to a considerable extent, by the buildings and environments in which we live and work, and as technology progresses there are increasing opportunities to design ambient spaces that promote, rather than restrict, our quality of life. AUTHOR CONTRIBUTIONS All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
2022-06-30T15:08:30.316Z
2022-06-28T00:00:00.000
{ "year": 2022, "sha1": "50d8b832cda9fc5ddd7315180dfc65be24382e49", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcomp.2022.947056/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "37cb82d7304b651c814f58aae597154977d15359", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
291818
pes2o/s2orc
v3-fos-license
Cellular and Molecular Mechanisms of Chronic Inflammation-Associated Organ Fibrosis Organ fibrosis is a pathological condition associated with chronic inflammatory diseases. In fibrosis, excessive deposition of extracellular matrix (ECM) severely impairs tissue architecture and function, eventually resulting in organ failure. This process is mediated primarily by the induction of myofibroblasts, which produce large amounts of collagen I, the main component of the ECM. Accordingly, the origin, developmental pathways, and mechanisms of myofibroblast regulation are attracting increasing attention as potential therapeutic targets. The fibrotic cascade, from initial epithelial damage to eventual myofibroblast induction, is mediated by complex biological processes such as macrophage infiltration, a shift from Th1 to Th2 phenotype, and by inflammatory mediators such as transforming growth factor-β. Here, we review the current understanding of the cellular and molecular mechanisms underlying organ fibrosis. INTRODUCTION Organ fibrosis is an intractable, progressive condition that arises in multi-factorial chronic inflammatory diseases in which excessive deposition of extracellular matrix (ECM), mainly composed of collagen I (Col I), severely impairs tissue architecture and function, eventually resulting in organ failure (Kis et al., 2011). Fibrosis affects various organs following tissue injury, including the lungs, liver, and kidneys, and has become a major cause of death in the developed world. Lung fibrosis occurs mainly in idiopathic interstitial pneumonia (IIPs), a general term describing multi-factorial conditions such as idiopathic pulmonary fibrosis (IPF), non-specific interstitial pneumonia (NSIP), and cryptogenic organizing pneumonia (COP). IPF is a chronic and progressive disease with an estimated prevalence of 20 cases per 100,000. The prognosis for patients with IPF is poor, and 50% die within 3 years of diagnosis. Hepatic fibrosis (fibrosis of the liver) can be triggered by the hepatitis virus or alcohol. There are an estimated 350 million and 180 million carriers of the Hepatitis B (HBV) and C (HCV) viruses worldwide, respectively. In Japan, deaths from hepatic cirrhosis total around 15,000 per year (HCV, 50%; HBV, 12%; non B/non C, 4%; alcoholic hepatitis, 13%). In addition, hepatic cirrhosis is associated with hepatic cancer, which causes over 30,000 deaths annually. The prevalence of non-alcoholic steatohepatitis (NASH) ranges from 9 to 37% of the population depending on the country, and a subset of NASH patients eventually develops hepatitis and hepatic cancer. Kidney fibrosis commonly occurs in glomerulonephritis and diabetic nephropathy. While the number of patients requiring dialysis due to chronic glomerulonephritis has decreased in recent years, the number of those with diabetic nephropathy continues to increase year by year. The cost of dialysis represents a considerable medical expense in advanced countries. In addition, organ fibrosis is associated with autoimmune diseases. About 15-30% of rheumatoid arthritis patients develop IPF, and about 30% of IIP cases are associated with autoimmune diseases. Given the prevalence and severity of diseases involving tissue fibrosis, the prevention, and treatment of this condition remains a major medical challenge. This review focuses on the cellular and molecular bases for the accumulation of Col I producing fibroblasts and myofibroblasts, which are responsible for the excessive deposition of ECM during the fibrotic process. 
THE ORIGIN OF Col I PRODUCING FIBROBLASTS AND MYOFIBROBLASTS

Fibroblasts are non-hematopoietic, non-epithelial, non-endothelial cells that widely distribute throughout the mesenchyme where they synthesize ECM proteins that form a structural framework to support tissue architecture and function in steady-state conditions. Fibroblasts also play an important role in tissue repair following multi-factorial tissue damage by forming a provisional ECM, a process preceding re-epithelialization in successful repair. Unfortunately, dysregulated activation, proliferation, and survival of fibroblasts often results in the excessive deposition of ECM proteins and the inhibition of re-epithelialization, leading to tissue fibrosis (Gabbiani, 2003). Therefore, control of the activation, proliferation, and survival of fibroblasts is critical for the prevention and treatment of tissue fibrosis. Fibroblasts form clusters within fibrotic tissues that are known as fibrotic foci (Visscher and Myers, 2006). These fibroblasts include α-smooth muscle actin (αSMA) expressing myofibroblasts that have the potential to produce large amounts of Col I, which has resulted in this cell population being widely considered to be the key effector cells in organ fibrosis (Gabbiani et al., 1971; Gabbiani, 2003; Sandbo and Dulin, 2011). Results in some models of organ fibrosis have suggested that there may be therapeutic benefit in targeting myofibroblasts, although the experimental approaches in these models leave questions remaining about the selectivity of the interventions for myofibroblasts (Douglass et al., 2008). As mentioned above, fibroblasts are immunophenotypically identified as cells negative for hematopoietic, epithelial, and endothelial markers. The lack of specific markers for fibroblasts or possible subpopulations, including myofibroblasts, complicates the cellular and molecular understanding of these cells. Thus, the establishment of specific markers to identify fibroblasts and myofibroblasts remains a major challenge in this field. Myofibroblasts have classically been considered to differentiate from tissue-resident fibroblasts. However, recent studies have suggested alternative sources of myofibroblasts (Hinz et al., 2007). Bone marrow-derived fibrocytes express both hematopoietic markers (CD45, CD11b, and HLA-DR) and ECM proteins (Col I and vimentin). These cells have been shown to be recruited from the circulation to inflamed tissues via chemokine receptors CXCR4 and CCR1, 2, 5, and 7, after which they differentiate into myofibroblasts (Phillips et al., 2004; Keeley et al., 2011). Epithelial cells are reported to trans-differentiate into myofibroblasts via chronic inflammation-induced epithelial-mesenchymal transition (EMT) in several fibrosis models (Kalluri and Neilson, 2003). In addition, blood vessel wall smooth muscle cells have been proposed as myofibroblast progenitors. Meanwhile, stellate cells (Ito cells), a type of hepatic pericyte, have attracted interest as a major precursor of Col I producing fibroblasts and myofibroblasts in the liver (Atzori et al., 2009). Despite these studies, overall understanding of the origin and differentiation pathways of Col I producing fibroblasts and myofibroblasts remains poor. Identification of the major developmental pathway of these cells will be an essential step toward the development of therapeutic interventions for organ fibrosis.
CHALLENGING THE EMT HYPOTHESIS Epithelial-mesenchymal transition is a process that was originally characterized in the context of embryonic development, in which epithelial cells lose their original phenotypic and functional features, including cell-cell adhesion and cell polarity, while acquiring migratory and invasive properties (Thiery et al., 2009). In vitro cell culture studies have shown clearly and reproducibly that transforming growth factor-β (TGFβ) treatment of epithelial cells induces expression of mesenchymal markers and morphology with a concomitant loss of epithelial markers (Qi et al., 2005;Venkov et al., 2007). Over the past 15 years, numerous studies have proposed that EMT also contributes to the activated fibroblast pool in various regenerative and pathogenic processes. For example, transition from epithelial tumor cells to mesenchymal cells occurs at the invasive front of many tumors, driving tumor progression and metastasis. In addition, inflammation-induced epithelial cell damage in parenchymal organs such as the liver, lungs, and kidneys recapitulates part of the EMT process in that epithelial cells acquire mesenchymal cell-like properties and migrate beyond the basal membrane to the interstitium, where they differentiate into Col I producing fibroblasts and myofibroblasts. However, the inflammation-associated EMT hypothesis has been challenged by an increasing number of studies, and lacks convincing evidence (Wells, 2010;Kriz et al., 2011). For example, the EMT hypothesis for kidney fibrosis was first reported by Strutz et al. (1995), when the authors used FSP-1 (fibroblast specific protein-1/S100A4) as a marker of mesenchymal lineage. However, subsequent characterization revealed that FSP-1 is not a mesenchymal cell specific marker, and is expressed on leukocytes and endothelial cells as well. Similarly, expression of vimentin, another marker commonly used in EMT studies, is not enough on its own to identify mesenchymal cells, because a subset of epithelial cells express vimentin in both resting and inflammatory-states (Grone et al., 1987;Witzgall et al., 1994). Moreover, recent extensive and well designed cell-fate tracing studies have not provided any evidence for inflammation-associated EMT (Humphreys et al., 2010;Scholten et al., 2010). Unless the inflammation-induced conversion of epithelial cells into Col I producing fibroblasts and myofibroblasts in vivo can be demonstrated more convincingly, the role of EMT in organ fibrosis should be reconsidered. FIBROCYTES MAKE ONLY A MINIMAL CONTRIBUTION TO ORGAN FIBROSIS The existence of bone marrow-derived fibrocytes was originally reported by Bucala et al. (1994). Later, Strieter and colleagues reported that fibrocytes express several chemokine receptors and are recruited to inflamed tissues in a CXCR4 dependent manner, where they contribute to the Col I producing myofibroblast pool after bleomycin-induced epithelial injury in the lungs (Phillips et al., 2004). We have also demonstrated that blocking chemokine receptors CCR1, 2, 5, and 7 in mouse lung or kidney fibrosis models reduces the number of myofibroblasts detected and ameliorates organ fibrosis (Sakai et al., 2006;Ishida et al., 2007). 
However, it remains unclear whether the cognate chemokines regulate organ fibrosis through the recruitment of fibrocytes to the inflamed tissues, by influencing the activation or differentiation of fibroblasts, or through the recruitment of inflammatory cells such as macrophages and neutrophils that subsequently influence the tissue microenvironment. While many studies have confirmed the presence of fibrocytes in fibrotic disease, accumulating experimental evidence suggests that the contribution of bone marrow-derived cells to the Col I producing fibroblast/myofibroblast pool is limited (Higashiyama et al., 2009, 2011).

ORIGIN OF CAPILLARY PERICYTES AND THEIR SIMILARITY WITH TISSUE FIBROBLASTS

Recently, a novel role for pericytes as precursors of pro-fibrotic Col I producing cells has been described. Studies using Col 1α2-GFP transgenic mice have demonstrated that CD73+ PDGFRβ+ pericytes/fibroblasts migrate from capillaries to the interstitial space and differentiate to Col 1 producing myofibroblasts in kidney and liver fibrosis models (Lin et al., 2008; Higashiyama et al., 2009). In addition, Goritz et al. (2011) recently demonstrated that a specific pericyte subtype gives rise to scar-forming stromal cells in the injured spinal cord. However, because fibroblasts in the interstitial space not only provide a scaffold for micro-tissue architecture such as nephrons and renal tubules (in the case of the kidneys), but also come into direct contact with microvessels, it is often difficult to distinguish between pericytes and tissue fibroblasts under steady-state conditions (Kriz et al., 2011). The similarities, differences, and lineage relationship between pericytes and tissue fibroblasts remain to be elucidated.

THE ROLE OF INFLAMMATORY CELLS IN FIBROTIC TISSUE

Macrophage infiltration into inflamed tissues has been implicated in chronic inflammation-induced organ fibrosis (Wynn and Barron, 2010). Inflamed tissue-infiltrating macrophages are derived from CCR2+ inflammatory monocytes or CX3CR1hi resident monocytes (Ricardo et al., 2008). The phenotype of these macrophages is generally reported to match that of alternatively activated cells (M2) rather than classically activated cells (M1). M2 macrophages express immunosuppressive molecules such as IL-10 and arginase I, which suppress the induction of Th1 cells that produce the anti-fibrotic cytokine IFNγ. On the other hand, M1 macrophages express IL-1, IL-12, IL-23, and induce Th1 cell infiltration and activation. However, it remains to be established whether a particular macrophage subset with M2-type properties preferentially infiltrates into fibrotic tissues, or whether it is the pro-fibrotic microenvironment that drives macrophage polarization toward an M2 phenotype. In addition to their roles in immune regulation, macrophages play a pivotal role in matrix regression during the recovery phase of fibrosis (Duffield et al., 2005) and in the regulation of stellate cell proliferation (Olaso et al., 2011). In the future, conditional and lineage specific depletion or gene targeting approaches may help to reveal the specific function and overall role of each macrophage subset in tissue fibrosis. The contribution of T lymphocytes to organ fibrosis seems to be context dependent. While a number of studies suggest an exacerbating role of T cells in fibrosis, T cells also appear to be dispensable because T cell-deficient mice develop fibrosis in some models (Luzina et al., 2008).
The general concept is that prolonged inflammation induces a shift from a Th1 to Th2 phenotype, and the resulting production of Th2 cytokines induces the infiltration of pro-fibrotic eosinophils via cognate chemokine (e.g., eotaxin) production. On the other hand, a role for recently identified functional T cell subsets such as Th17 and regulatory T cells in tissue fibrosis has also begun to emerge. For example, adoptive transfer of CD4 T cells restored bacterial-induced lung inflammatory and fibrotic responses in TCRβ deficient mice with an accompanying increase in lung IL-17A protein levels, and IL-17 receptor α deficient mice develop less severe inflammation and fibrosis than wild type counterparts (Simonian et al., 2009). Recently, platelet-derived growth factor (PDGF)-producing CD4 + Foxp3 + Tregs have been shown to promote lung fibrosis by activating fibroblasts (Lo Re et al., 2011). A better understanding of the roles that inflammatory cells play in the fibrotic process may reveal new points of therapeutic intervention, which may be able to induce a shift from a pro-fibrotic microenvironment to an anti-fibrotic microenvironment. REGULATION OF FIBROSIS BY INFLAMMATORY MEDIATORS The fibrotic signaling cascade that occurs during chronic inflammation, which is initiated by epithelial injury and results in irreversible organ damage, is regulated by various inflammatory mediators. The pro-fibrotic roles of plasma components, plateletderived soluble factors, and cytokines produced by activated tissue cells and infiltrating leukocytes, have been demonstrated in animal models. These mediators include factors induced as a part of an inflammatory cascade, regulatory molecules that provide feedback during the inflammatory response, and factors constitutively expressed in the body. Transforming growth factor-β plays a central role in fibroblast activation and fibroblast-to-myofibroblast differentiation, and induces the expression of genes for ECM components including Col 1. However, despite its great potential as a therapeutic target for fibrosis, inhibition of TGFβ signaling has unacceptable side effects due to the critical role of this cytokine in the maintenance of homeostasis (Leask, 2010). Bone morphogenic proteins (BMPs) belong to the TGFβ family and regulate proliferation and differentiation of both mesenchymal cells and epithelial cells (Rider and Mulloy, 2010). Recent studies have revealed that BMP7 prevents fibrosis by promoting epithelial regeneration, while BMP antagonists such as gremlin and ectodin drive organ fibrosis by inhibiting BMP7 signaling. Interestingly, there is a direct Smad-dependent counteraction of the TGFβ pathway by BMP7 signaling, and vice versa (Zeisberg et al., 2003). G-protein coupled receptor ligands also regulate chronic inflammation and the fibrotic cascade. Angiotensin II (Ang II) induces the expression of pro-fibrotic factors such as connective tissue growth factor (CTGF; Ruperez et al., 2003;Esteban et al., 2004), and recent studies have revealed that there is an intracellular cross-talk between Ang II signaling and TGFβ signaling that cooperatively promotes fibrosis (Campbell and Katwa, 1997;Schultz Jel et al., 2002;Gao et al., 2009). Leukotrienes (LTs) not only induce fibroblast migration, proliferation, and matrix protein synthesis, but also promote fibrosis through the stimulation and activation of TGFβ (Shim et al., 2006). 
On the contrary, prostaglandin E2 (PGE2), which has well established anti-inflammatory activities, may suppress fibrosis by inhibiting the proliferation, migration, and differentiation of myofibroblasts (Kohyama et al., 2001; Lama et al., 2002; Thomas et al., 2007). Recent studies have demonstrated that PGF2α receptor deficient mice are resistant against bleomycin-induced lung fibrosis (Oga et al., 2009), and that LTB4 receptor inhibitors and LPA1 inhibitors suppress bleomycin-induced lung fibrosis (Tager et al., 2008). Lysophosphatidic acid (LPA) and sphingosine-1-phosphate (S1P) are liberated from stored lipid precursors through enzymatic activation and provide migration, proliferation, and differentiation signals to a variety of cells through the LPA receptors (LPA1-8) and S1P receptors (S1P1-5), respectively (Pattanaik and Postlethwaite, 2010). LPA1 deficient mice are protected from bleomycin-induced lung fibrosis and unilateral ureteral ligation-induced renal fibrosis (Tager et al., 2008). The pro-fibrotic role of LPA is reportedly mediated in part by the induction of fibroblast-to-myofibroblast differentiation (Yin et al., 2008). S1P plays a critical role in the circulation of lymphocytes, and accordingly, inhibition of the S1P-S1P1 axis results in strong immunosuppressive effects. In addition, S1P also regulates the migration and activation of fibroblasts, and recent studies have revealed cross-talk between the S1P3 and TGFβ-Smad signaling pathways that promote cardiac fibrosis (Takuwa et al., 2010). Plasma coagulation cascade proteases are also involved in fibrosis (Chambers and Laurent, 2002); thrombin, factor VII, and factor Xa activate protease-activated receptor-1 (PAR-1) on fibroblasts and induce their proliferation. In addition, these proteases promote fibrosis through the induction of pro-fibrotic molecules such as platelet-derived growth factors and CTGF. CTGF mediates mesenchymal stem cell (MSC)-to-fibroblast differentiation as well as fibroblast activation (Ponticos et al., 2009; Lee et al., 2010), while PDGFs induce the proliferation and activation of fibroblasts leading to vascular diseases and fibrosis. Ijichi et al. (2011) have demonstrated that CXC chemokines induce CTGF expression in fibroblasts, and that the inhibition of CXCR2 in tumor-bearing mice impairs tumor progression. Matrix metalloproteinases (MMPs) and their inhibitors, tissue inhibitors of MMPs (TIMPs), play an important role in the regulation of ECM turnover in fibrotic tissues. While the degradation of pathological fibrillar collagen by MMPs is a key event in the resolution of fibrosis, the degradation of normal ECM components in the early stages of fibrosis promotes deposition of newly synthesized collagen (Hemmann et al., 2007). ATP released from damaged epithelial cells serves as a danger signal to alert the immune system of tissue damage, and may also trigger a fibrotic cascade (Mortaz et al., 2010). Activation of the Wnt/β-catenin signaling pathway, which regulates epithelial and mesenchymal proliferation and activation, has been demonstrated in lung epithelial cells of IPF patients. Overall, this activation drives fibrosis rather than epithelial repair, possibly due to cross-talk with other pro-fibrotic factors such as TGFβ and CTGF (Konigshoff and Eickelberg, 2010). Furthermore, inhibition of Wnt signaling (Henderson et al., 2010) and the BMP binding protein ectodin ameliorates renal fibrosis.
A better understanding of the role of each inflammatory mediator in the fibrotic cascade is likely to reveal novel molecular targets for the early diagnosis, prevention, and treatment of fibrotic disease.

CONCLUSION AND FUTURE PERSPECTIVES

In recent years, confusion has surrounded the major source of myofibroblasts in fibrosis, with attention centering on tissue-resident fibroblasts and pericytes (Figure 1). However, the relative importance of the various developmental pathways of Col I producing fibroblasts and myofibroblasts needs to be re-examined by lineage tracing approaches, utilizing cell-type specific promoters, and inducible systems in a range of fibrosis models. It will also be important to further elucidate the mechanisms underlying the maintenance of myofibroblasts during chronic inflammation. It is possible that precursor cells provide a continuous supply of myofibroblasts, that myofibroblasts have proliferative potential, or that the myofibroblast lifespan is relatively long. A deeper understanding of the population dynamics of myofibroblasts and their precursors may reveal new points of therapeutic intervention with the potential to halt myofibroblast accumulation in fibrotic tissue. Although removal of the cause of chronic inflammation is essential and effective for the prevention and treatment of tissue fibrosis (for example, virus clearance by interferon effectively prevents viral hepatitis-associated fibrosis), this can be challenging as the precise cause of the inflammation is often unclear. Given that in most cases steroids are largely ineffective against fibrosis, currently there is no effective drug available for patients with clinically significant organ fibrosis. Further elucidation of the molecular and cellular bases for chronic inflammation-associated organ fibrosis is imperative for the development of effective anti-fibrotic therapies.

FIGURE 1 | Molecular and cellular mechanisms of chronic inflammation-associated organ fibrosis. Organ fibrosis is mediated primarily by the induction of myofibroblasts, which produce large amounts of collagen I. Tissue fibroblasts, transdifferentiated epithelial cells (EMT), bone marrow-derived fibrocytes, and pericytes have attracted interest as potential myofibroblast precursors. The fibrotic cascade, from initial epithelial damage to eventual myofibroblast induction, is mediated by complex biological processes such as macrophage infiltration, a shift from Th1 to Th2 phenotype, and by inflammatory mediators such as transforming growth factor-β.
2014-10-01T00:00:00.000Z
2012-04-10T00:00:00.000
{ "year": 2012, "sha1": "e7f88c7e8a95b39447fd5c2949d1d84e7bfffe74", "oa_license": "CCBYNC", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2012.00071/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e7f88c7e8a95b39447fd5c2949d1d84e7bfffe74", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245151052
pes2o/s2orc
v3-fos-license
Realist evaluation of Autism ServiCe Delivery (RE-ASCeD): which diagnostic pathways work best, for whom and in what context? Findings from a rapid realist review

Objectives Waiting times in the UK for an autism diagnostic assessment have increased rapidly in the last 5 years. This review explored research (including 'grey' literature) to uncover the current evidence base about autism diagnostic pathways and what works best, for whom and in what circumstances, to deliver high quality and timely diagnosis.
Design We performed a Rapid Realist Review consistent with recognised standards for realist syntheses. We collected 129 grey literature and policy/guidelines and 220 articles from seven databases (January 2011–December 2019). We developed programme theories of how, why and in what contexts an intervention worked, based on cross comparison and synthesis of evidence. The focus was on identifying factors that contributed to a clearly defined intervention (the diagnostic pathway), associated with specific outcomes (high quality and timely), within specific parameters (Autism diagnostic services in Paediatric and Child & Adolescent Mental Health services in the UK). Our Expert Stakeholder Group, including representatives from local parent forums, national advocacy groups and clinicians, was integral to the process.
Results Based on 45 relevant articles, we identified 7 programme theories that were integral to the process of diagnostic service delivery. Four were related to the clinical pathway: initial recognition of possible autism; referral and triaging; diagnostic model; and providing feedback to parents. Three programme theories were pertinent to all stages of the referral and diagnostic process: working in partnership with families; interagency working; and training, service evaluation and development.
Conclusions This theory informed review of childhood autism diagnostic pathways identified important aspects that may contribute to efficient, high quality and family-friendly service delivery. The programme theories will be further tested through a national survey of current practice and in-depth longitudinal case studies of exemplar services.
Trial registration number NCT04422483.

Strengths and limitations of this study
► This realist review focused on reviewing and synthesising recent evidence to determine what approaches to autism diagnostic assessment worked best, for whom and in what context. The approach is better suited than more empirical methods that assume there is one model to suit all situations.
► Our Expert Stakeholder Group and parent representatives engaged with all stages of the review and enabled an iterative approach to identifying relevant literature and refining our findings.
► As appropriate to our research question, we limited the search to UK literature but may have missed relevant literature from similar health systems. Although synthesis was based on UK literature, we have considered how this relates to relevant international literature.

INTRODUCTION
The number of children and young people (CYP) diagnosed with autism spectrum disorder (autism) has increased significantly in recent years 1-3 with a median age for diagnosis of 55 months. 4 This international phenomenon is reflected in increasing pressures on diagnostic assessment and long waiting times in some services, 5 with associated family dissatisfaction. 6 The UK National Health Service (NHS) Long Term Plan 7 highlighted the need for research to identify the most effective ways to improve timely access to diagnosis while maintaining high-quality assessment for this service user group. Autism is characterised by persistent severe deficits in social interaction, social communication, and restricted, repetitive, inflexible patterns of behaviour and interests, 8 although the level of symptoms varies considerably between individuals. It is commonly associated with other neurodevelopmental and mental health conditions, such as anxiety, Attention Deficit Hyperactivity Disorder (ADHD) and developmental language disorder, 9-11 making reliable diagnosis a complex process. National guidelines for Autism in the UK 12 recommend multidisciplinary assessment, with the skills to consider both the presence of other neurodevelopmental and mental health conditions (eg, ADHD, anxiety disorders), and coexisting conditions (eg, eating or sleeping related). However, this holistic assessment is time consuming and costly. 13 14 There are significant variations between diagnostic pathways, which some have defined as 'complex interventions for mutual decision making, organisation and standardization of predictable care for a well-defined group of patients during a well-defined period', 15 and only limited evidence of which pathways work best, for whom and in what circumstances. Although the formal research base is limited, some local providers have already reconfigured their services to address these issues. 16-18 However, robust evidence is needed to identify which care pathways, in which contexts, have the potential to meet the growing demand for diagnostic assessment in a timely, clinically valid and family-friendly way. This Rapid Realist Review (RRR), the first step in a national Realist Evaluation of Autism ServiCe Delivery (RE-ASCeD), aimed to explore how particular approaches aspired to deliver high quality and timely autism diagnostic services. 19 High quality was defined as compliant with National Institute for Health and Care Excellence (NICE) guidelines. 12 'Timely' refers to diagnostic pathways that must be started within 3 months of referral, in line with NICE guidelines. 1 20 21 Similarly, we did not focus on causes of service user dissatisfaction, rather ways of addressing it. A RRR is a well-established approach to synthesising evidence within a compressed time period and the key steps are consistent with the Realist And Meta-narrative Evidence Syntheses: Evolving Standards (RAMESES) standards for realist syntheses; 22 thus the difference is the timeframe, not the level of rigour. Additionally, RRR is explicitly designed to engage with stakeholders to accelerate the search process and validate findings. 17 Our Expert Stakeholder Group included clinicians (consultant paediatricians, child psychology, SALT), policymakers and third sector advocacy groups (Council for Disabled Children and Autistica) who were involved in all stages of the process. 19 Ethical approval was not required because stakeholders were acting as research advisers, not participants. 23 Realist reviews do not seek to compare interventions, rather they present evidence as programme theories (PTs) which are key features of the service and describe what appears to lead to certain outcomes, 24 often phrased as 'If… Then…' statements. PTs are supported by details of the context (C), mechanisms (M) and outcomes (O). These relationships are presented as CMO configurations. 25
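The CMO structure lends itself to a simple record-like representation. Purely as a hypothetical illustration (this is not part of the RE-ASCeD tooling, and the field values abbreviate the initial programme theory quoted in the next paragraph), a configuration could be captured and rendered as an 'If… then…' statement as follows:

```python
# Hypothetical illustration only: a programme theory held as a
# context-mechanism-outcome (CMO) record and rendered as an "If ... then ..."
# statement. The values paraphrase the initial programme theory described below.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CMOConfiguration:
    context: str                                         # C: what needs to be in place
    mechanism: str                                        # M: the response or resource triggered
    outcomes: List[str] = field(default_factory=list)     # O: intended (or unintended) results

    def as_statement(self) -> str:
        return (f"If {self.context}, then {self.mechanism}, "
                f"leading to {'; '.join(self.outcomes)}.")

initial_pt = CMOConfiguration(
    context="there is multidisciplinary assessment by a team with competencies "
            "in child neurodevelopment and mental health",
    mechanism="autism is recognised as a complex condition relying on detailed "
              "history and observation across settings",
    outcomes=["accurate diagnosis", "recognition of co-occurring conditions",
              "complex differential diagnoses ruled out"],
)
print(initial_pt.as_statement())
```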
A realist approach requires starting with an initial PT of what should work and what outcomes are expected from a complex intervention; our PT was based on NICE 2011 guidance, 12 the project team and Expert Stakeholder Group: If there is a MDT assessment by a team with competencies in child neurodevelopment and mental health (context), then Autism will be recognised as a complex condition that relies on detailed history and observation across settings (mechanism) to diagnose it. This will lead to accurate diagnosis, recognition of associated co-occurring conditions such as ADHD and intellectual disability (outcome), and the ruling out of complex differential diagnoses. This will also create, whilst not an explicit part of this project, an accurate picture of a child's strengths and needs to inform individualised packages of support and intervention through health, education and social care (outcome). We worked backwards from the intended outcomes although we know in practice that complex interventions operating in different health and social care environments do not lead to the same outcomes across services because of differing contexts (eg, differences between services, ways of operationalising and differences in recipient populations). Therefore, what is required is an understanding of what needs to be in place (circumstances or context), to trigger mechanisms (that can be responses or resources) that lead to the desired (intended) outcomes or other unintended outcomes.

Changes to protocol
No changes were made to the review process proposed in the published protocol (https://bmjopen.bmj.com/content/10/7/e037846).

Search methods
This RRR was carried out from 1 September 2019 to 30 June 2020 following RAMESES standards 24 for realist reviews. Through discussions within the RE-ASCeD project team and with our expert stakeholders, we confirmed and refined the research questions and scope; prioritised areas for investigation; identified search terms; and collected grey literature, policy and guideline papers iteratively throughout the review. Search terms were identified and developed with support from the RE-ASCeD project team and expert stakeholders. The primary search was conducted across Medline (Ovid), Embase (Ovid), PsycINFO (Ovid), Social Policy & Practice (Ovid), CINAHL Plus (EBSCO), Cochrane Library and Web of Science (Clarivate) limited by date (2011-2019), language (English) and country (UK only). Our focus was a clearly defined intervention (the diagnostic pathway, from receipt of referral to diagnosis), associated with specific outcomes (high quality and timely) within a particular set of parameters (autism/Child & Adolescent Mental Health Services (CAMHS) in the UK). All study types were included. The search strategy was created by an information specialist (AP) using a combination of free text and MeSH index terms after iterative pilots in Medline and adapted for each database. Search strings were based on a combination of terms covering "Children" AND "Autism" AND how they "Relate to diagnostic pathway OR assessment". For full search terms, see online supplemental document 1. Box 1 provides our inclusion/exclusion criteria. Secondary searching was conducted iteratively throughout the review with input from our expert stakeholders.
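To illustrate how a strategy of this shape is typically assembled, the sketch below OR-combines synonyms within each concept block and ANDs the blocks together; the terms shown are invented placeholders, not the authors' actual strategy (which, with its MeSH terms and per-database adaptations, is given in online supplemental document 1).

```python
# Rough illustration only: combining concept blocks into a Boolean query string.
# The example terms are placeholders; the full RE-ASCeD strategy is in the
# paper's online supplemental document 1.
concept_blocks = {
    "children": ["child*", "adolescen*", '"young people"'],
    "autism": ["autis*", '"autism spectrum disorder"', "ASD"],
    "pathway_or_assessment": ['"diagnostic pathway"', '"diagnostic assessment"',
                              "referral", "triag*"],
}

def or_block(terms):
    # OR the synonyms within one concept, wrapped in parentheses
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_block(terms) for terms in concept_blocks.values())
print(query)
# (child* OR adolescen* OR "young people") AND (autis* OR "autism spectrum disorder"
#   OR ASD) AND ("diagnostic pathway" OR "diagnostic assessment" OR referral OR triag*)
```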
Two reviewers used papers identified in the primary and background search to look through reference lists for relevant articles; check forward citations; and search key authors and research teams to identify further literature, using Google Scholar. Primary and background searches were restricted to UK only, given the UK NHS context. On the advice of our expert stakeholders, we then reviewed high level national policy documents and guidelines and a few research articles from similar countries (USA, Canada, Australia, New Zealand) to help elucidate findings.

Article selection and appraisal
As shown in figure 1, we collected 294 articles from the primary search and 129 grey literature records suggested by the RE-ASCeD project team members and our expert stakeholders, with overall 338 items once duplicates were removed. Furthermore, nine papers were collected via iterative secondary searches by searching all publications for key authors using Google Scholar and consulting our Expert Stakeholder Group. Two researchers (VA and WZ) carried out screening in two stages: an initial stage by title and abstract and a second stage by full text. Sifting of papers deemed 'relevant' or 'maybe relevant' from both stages was also cross-checked by three team members (PW, WF and IM). Data extraction and appraisal were carried out by two researchers (VA and WZ) using a hybrid approach 26 27 : basic details from each included article (n=79) were recorded; appraisal of evidence was based on concepts of relevance, rigour and richness, 26 27 with highly relevant articles (n=45, including 9 from iterative secondary search) coded in NVivo. For 20% of papers, a series of calibration exercises were undertaken by the RRR Lead (PW). When the two reviewers were uncertain about the extraction or appraisal of a paper, this was discussed with the RRR Lead. The quality and relevance of the selected papers were also assessed during the synthesis process by members from the RE-ASCeD project team. Mapping the sources to test and develop PTs, we divided papers involved in NVivo analysis into three categories: (1) key papers that described a model of service delivery (eg, integrated neuro-developmental service) in detail and were conceptually rich, (2) 'medium' papers that mentioned a model with some useful information but were not conceptually rich, (3) papers with a few 'nuggets' 28 relevant to PTs. This helped us focus on key and medium papers (online supplemental document 2) that could contribute most to developing a conceptual framework 29 and refining PTs.

Box 1 Inclusion/exclusion criteria
Inclusion criteria
► Children (preschool, primary or secondary school and adolescents) with Autism Spectrum Disorder (ASD) or Autism spectrum condition.
► UK healthcare system (England, Scotland, Wales and N. Ireland).
► Published 2011 onwards, when the National Institute for Health and Care Excellence guidelines for recognition, referral and diagnosis of autism in under 19s (2011) were published.
► Relates to diagnostic pathway and model of service provision, or relates to assessment process, for example, single discipline (paediatric consultant) or multidisciplinary.
Primary exclusion criteria:
► Non-UK based literature.
► Relates only to adult diagnostic pathway.
► Relates only to tertiary services.
► Only relates to treatment.
► Relates to support services only after diagnosis.
Secondary exclusion criteria:
► Descriptive or irrelevant commentary on materials we already included; no added insights relevant to context or mechanisms.
► Specific tools in terms of assessment tools or psychometric properties, for example, reliability/validity of the tool.
► Prevalence only studies.
► Studies only related to symptoms or aetiology.
► Articles about special needs in general, no mention of ASD (or ADHD).
► Duplicate material of Co-Investigators' (Co-Is) previous research, excluded by Co-Is.
► Conference paper with only abstract available.
► Data collected or published online before 2011.

Synthesis and refinement
Based on analysis of individual papers, we then conducted cross-evidence comparisons to build PTs and confirm/refute and refine CMO configurations; both synthesising and refining the evidence involved substantial discussion of 'contradictory' evidence, or unintended outcomes. We also consulted with our expert stakeholders iteratively during the review process and at a data interpretation workshop in April 2020. Our expert stakeholders collectively reviewed the PTs, provided feedback and were invited to identify any omissions based on their clinical experience. We also asked them to suggest any further literature to help elucidate PTs. Based on feedback collected from the data interpretation workshop, two reviewers (VA and WZ) checked and added new papers suggested by our expert stakeholders, and refined the PTs and conceptual framework.

Patient and public involvement
Our Co-Investigators included a patient and public involvement (PPI) representative from a local parent organisation (West Sussex Parent Carer Forum) who was able to consult a wider group of families with lived experience, and a parent who had previously managed Sussex Autism Support. Our PPI representatives were equal partners within the Expert Stakeholder Group. This helped focus the review on the questions they were most interested in answering and enabled the identification of salient grey or unpublished documents for review. 30 PPI was embedded into the review protocol and was particularly helpful when synthesising and interpreting the data. A separate PPI Reference Group (all parents of CYP with autism), whose inception was delayed due to COVID-19, is integral to the wider project.

RESULTS
We developed seven PTs, based on cross-comparison and synthesis of 45 highly relevant articles: the first four focused on the referral and diagnostic process and the last three on cross-cutting themes (table 1). Figure 2 summarises the interrelationship between these PTs, set in the wider context of structural and organisational barriers affecting autism diagnostic pathways. Full PTs with CMO configurations are provided as online supplemental document 3.

PT1: listening and recognition
Professionals had to balance early referral with parents' concerns so that they felt listened to and taken seriously 6 31 32 ; parents were often the first to notice atypical patterns of development or behaviour in their child. 6 31-34 Managing parental expectations 35 and developing a co-operative relationship appeared to help manage this balance but 'was perceived to be particularly problematic because access to services is based on diagnosis, rather than an assessment of the child and family's needs' 35 (p215).
From parents' perspective, one autism charity website suggested they 'develop a talent for making a polite nuisance of themselves (more properly known as 'advocacy')' to traverse barriers to referral 34 Additionally, greater autism awareness and training for frontline professionals, particularly general practitioners and teachers, alongside training in how, when and who to refer to 6 12 32 34 36-38 was suggested as a strategy to improve early identification. PT2: referral and triaging Comprehensive information gathering preassessment reduced the number of contacts, assessment duration and total time taken to reach diagnosis. 39 A systematic approach to information gathering 12 38 40 improved efficiency, but referrers also wanted feedback when referrals were declined. 12 38 41 Innovative approaches to triaging included: sufficient information gathering preassessment to enable same-day assessment in the context of tertiary services [42][43][44] ; initial interview with an experienced clinician 45 ; community/neurodevelopmental paediatrician carrying out a General Developmental Assessment 41 42 46 ; assessment by CAMHS or a community paediatrician and SALT, then allocating to an abbreviated (local) or complex (specialist) pathway 41 ; triage meetings across CAMHS and Child Development Services (CDS). 41 However, whether these strategies constituted triaging or the first stage in the diagnostic pathway was arguable. PT3: diagnostic assessment Good practice in the UK (NICE) 12 recognises the importance of multidisciplinary assessment with use of information from parents, educational settings and direct observation/assessment of the child used as evidence alongside health professional assessment. However, services had different condition-specific remits, catchment areas and commissioning agreements. Where community paediatrics and mental health services were integrated and collocated in the same organisation this allowed a seamless transition, avoiding duplicated waits and enabling families to see all relevant professionals at once. 18 42 Few papers clearly delineated the service pathway 18 35 40-42 47 48 and within these were wide variations, including the balance of standardised assessments, observations and clinical judgement. As recommended by NICE, 12 most services were multidisciplinary, and many offered a single point of access, bridging the autism-ADHD diagnostic divide. 18 42 For example, Peterborough's integrated pathway provided assessments for ADHD and autism 18 42 and combined a single point of access with a comprehensive skill mix, including access to therapies. This reduced the number of assessments per individual, saved time and money, and provided a better diagnostic experience. 42 Another approach was to extend the role of available professions, for example, by training SALTs to carry out aspects of the assessment previously carried out by child psychiatrists. 40 However, disadvantages of multidisciplinary assessment and/or multi-agency working included being labour intensive and costly 13 ; being negatively affected by the dissonance between medical and educational paradigms 47 ; and a 'perceived power differential' evidenced by the 'decisionmaking power of doctors and psychologists over other clinicians' 49 (p322). 
Rutherford et al 41 presented a multi-agency diagnostic pathway with an 'abbreviated' pathway when the signs and symptoms of autism were easily identified and a 'complex' pathway for CYP with, for example, coexisting conditions needing onward referral to a specialist team. This resulted in fewer CYP unnecessarily going through the full process, improving the timeliness of assessment. 41 An interesting theme within the literature considered the balance of clinical expertise against standardised assessments. Less experienced clinicians appeared to prefer using standardised tools, while more experienced clinicians expressed confidence in their clinical judgement. 45 Some clinicians found diagnostic tools helpful, while others described them as 'very cumbersome and very time consuming' 47 (p118). Rogers et al 50 referred to 'upgrading', whereby the majority of professionals (78 out of 116) erred on the side of a positive diagnosis when faced with uncertainty. The main reasons were to facilitate access to funding/support (n=17; 22%); enable individuals to get a statement of Special Educational Needs (n=8; 10%); or differing opinions among colleagues in a team (n=32; 41%). Finally, there was limited but positive literature around the use of technology. Aims included 'remote' observational assessments carried out by families during a short telehealth assessment to screen for autism in children under 3 years 51 ; using mobile technology to collect observational data in advance of formal assessment 52 ; educational games to assess risk of autism 52 ; an automated story Open access Table 1 Programme theories and sources PTs 1-4: Stage specific programme theories affecting the diagnostic assessment pathway PT1 Listening and recognition If frontline health and education professionals (eg, GPs, teachers) are confident in recognising the signs and symptoms of autism, are cognisant of referral pathways and listen to parents, taking their concerns seriously then CYP will be referred to an appropriate service, in a timely manner, reducing parental frustration. PT2 Referral and triaging If autism diagnostic services provide clear guidelines for referrers on what information is needed and how to refer, and referrers follow these guidelines, then time will be saved at the triaging stage and proportionately fewer CYP who do not have autism will go through the full process. PT3 Diagnostic assessment If a structured, consistent and multidisciplinary approach to service delivery is adopted, making best use of available staff and clinical expertise, then the number of assessments per individual may be reduced. If a balance of interview, observation and recognised tools are used, alongside an assets-based approach, this will ensure a comprehensive and family-friendly diagnostic experience. If the same Trust manages both community paediatrics and mental health services, this potentially allows for a seamless transition, avoids duplicate waits and enables families to see all relevant professionals at the same time. PT4 Diagnostic feedback If parents understand the diagnostic process and feel supported this can moderate parental expectations. Feedback should take an assets-based approach and management plans should be individualised, taking account of co-existing conditions. Reports should be timely and in a format that everyone finds helpful. 
PT6: Interagency working If 'experts' including people with autism, carers, professionals and specialist organisations work in partnership and the knowledge generated is effectively embedded into local services, this will build capacity, improve parent/CYP satisfaction and support planning of services both locally and nationally. Open access ('A Pirates Adventure') scoring emotional cognition 53 ; and the use of computer-based Continuous Performance Tests. 54 Our expert stakeholders also suggested that where the presence of ADHD is suspected, the use of Qbtest 54 may enable an objective measurement of attention, concentration, impulsivity and distractibility but the evidence is limited. Since carrying out the RRR, Lord 55 has provided guidance on adapting autism diagnostic assessment during social distancing, including the Autism Diagnostic Observation Schedule (although unvalidated), for remote use, demonstrating that the current COVID-19 crisis has become a driver for telehealth approaches. PT4: diagnostic feedback Most parents regarded autism diagnosis as a gateway to services 50 but there was no consensus on best practice regarding feedback. 48 Parents valued a sensitive approach and positive comments about their child and their parenting 31 but found it hard to absorb feedback. 31 56 Practical strategies included a structured approach; using consistent and straightforward terminology; opportunity to ask questions (including later); and recognising their child's skills/strengths. 12 31 47 56 57 Guidelines recommended a needs-based and tailored management plan, co-developed with parents. 12 Only one paper provided detailed information on the report format 40 and used a digital report-writing tool and visual profiling tool. Reports were available within a few days, enabling parents to review the content, improving partnership working. The visual profiling tool provided a concise visual aide for understanding, explaining, and communicating the abilities of each CYP. PT5: working in partnership with families The diagnostic process was enhanced by integrating 'expertise from several perspectives… that of the individual, their family, and the professionals' 58 (p3762) and acknowledging parents as co-experts. When parents understood the diagnostic process in advance, this improved satisfaction and helped moderate expectations. 31 Open and honest dialogue involving parents in decision-making, 50 helped promote engagement and manage differences of opinion. 59 Having a named 'case coordinator' 12 or 'keyworker' 60 helped reduce stress and increase engagement. 59 Parents offered support following diagnosis were, unsurprisingly, more satisfied than those who were not. 58 A simple suggestion to improve satisfaction was to tailor links to relevant services and explore the full range of services that might prove useful. 6 Another approach was to help parents develop strategies to manage difficulties, for example, meeting families wherever most convenient to reduce non-attendance. 59 PT6: interagency working Integrating the pathways into a single assessment process potentially saved time and cost less 13 18 21 but we found little evidence of how to address macro-level constraints such as chronic underinvestment. 34 Much appeared to rest on personal relationships at the micro-level 61 and/ or parents co-ordinating services. 
35 While joint working was endorsed 62 suggestions to promote it were limited to establishing clear pathways 63 ; creating opportunities to work in different teams, such as split posts or secondments 59 ; and an Additional Learning Needs Coordinator (a teacher at the school). 35 PT7: training, service evaluation and development Several papers identified the importance of training in improving the quality and efficiency of autism diagnostic services. 36 41 It was recommended that training should go beyond those working in autism services, include the educational sector 64 and be geared to the needs of managers as well as frontline staff 36 through multi-agency training. 12 Rutherford et al 41 advocated a training framework with different skill levels, depending on the 'nature, extent and likely impact of daily contact with individuals with ASD' 41 (p1583) and now reflected in Health Education England recommendations. 65 Other training suggestions included an opportunity to observe specialist autism services; buddying with experienced clinicians; regular review of training needs and succession planning; and a national forum to share experiences and knowledge. 38 63 Finally, service evaluation was advocated to check adherence to standards/guidelines 20 and provide evidence for commissioners 38 ; one strategy was a guidelines checklist at the front of each patient file. 38 Service development suggestions included having one person to champion change; generating research within clinical teams; encouraging practitioners to co-create contextually sensitive solutions 38 ; and drawing on the expertise Open access of people with autism, carers and specialist organisations. 36 Our stakeholders highlighted the importance of good quality national data to facilitate a whole system approach, with the current approach appearing somewhat fragmented. 66 DISCUSSION This RRR explored diagnostic pathways that have been adopted across the UK, to determine what works best, for whom and in what circumstances. Four PTs related to the clinical pathway, addressing ways to improve initial recognition of possible autism, referral and triaging, the diagnostic model and post-diagnostic feedback. While there were specific service delivery innovations of interest, such as adopting a broader neurodevelopmental approach to assessment, or the use of skill mix, there also appears to be scope to adapt stages within the process. For example, gathering information about a CYP's strengths/needs at the point of referral may enhance the process, regardless of the specific model. The three cross-cutting PTs centred on working in partnership with families; interagency working; and training, service evaluation and development. Collectively, these PTs evidence different approaches that could contribute to a better experience for families, improved efficiency (and potentially cost savings) and shorter waiting lists. Many of the issues identified in the RRR could be addressed by full adherence to NICE guidelines 12 and quality standards. 67 However, a gap exists between guidelines and local interpretation, exacerbated by demand for assessment outstripping capacity and resourcing constraints. In particular, the guidelines indicate the need for a team with the competencies to deliver a broader neurodevelopmental and mental health assessment, producing a comprehensive description of a chid's strengths and needs, but some services appeared focused solely on autism diagnosis, partly reflecting resourcing constraints. 
36 A broader neurodevelopmental approach 58 may also ameliorate the concerns of those families whose child does not meet criteria for an autism diagnosis but has significant needs which may otherwise remain, or feel, unrecognised. This would be additionally aided by clinical teams resourcing the development of strengths and needs planning or working in consort with other agencies. As previously noted, there may also be a trade-off between carrying out comprehensive assessments for all CYP with possible autism and 'providing a more streamlined approach that is tailored to the child's presentation' 68 (p526) which could reduce diagnostic validity. This mirrors feedback from our expert stakeholdersthat there may need to be a discussion around the potential to increase investment in service delivery to enable high quality and timely approach versus the potential challenges associated with accepting lower quality and less timely diagnostic assessment. A similar approach delivering tiered assessment according to diagnostic complexity, has been recommended by recent Australian guidelines. 69 While the study findings are based on UK literature that relates to the NHS where health provision is free at the point of care, and insurance-based health economies are different, 68 the international literature was largely consistent with our findings. For example, recommendations to engage families in service design, and to produce a needs-based holistic assessment and report are mirrored internationally. 69 70 The seven PTs are echoed overall, for example in New Zealand recommendations, 71 while international research also supports individual PTs, including improving knowledge and skills of referrers, 72 improving information gathering to inform appropriateness of referral, 54 and upskilling the diagnostic workforce. 73 74 These are also echoed in recommendations from NHS England published after completion of the RRR. 75 Internationally, digitally delivered training programmes such as Extension for Community Healthcare Outcomes have been developed to enable upskilling of a wider diagnostic workforce, for example community general paediatricians in USA and Canada, 74 while the WHO has developed Caregiver Skills Training Programmes to train parents to support their children's development. 76 Similarly, the need for social distancing during the COVID-19 pandemic has acted as a driver to adopt digital technologies, although some of these had already been developed in response to geographical distancing between centralised specialist services and families living in widespread rural communities. 57 Implication for practice and future research From the PTs we identified six key areas that would benefit from further exploration. These were evaluation of: training and support materials available for non-specialist staff and parents/CYP accessing the diagnostic pathway which would increase early recognition that a child may need assessment and improve information gathering at the point of referral; training packages to upskill those working in autism services and the subsequent impact on workforce shortages; asset-based approaches to diagnosis, management and support; barriers and facilitators to comprehensive needs-led diagnostic assessment; approaches to integrating services dealing with autism; and increased use of technology in assessment that has already started in the context of COVID-19. 
77 Strengths and limitations The realist approach was well suited to examining and understanding the complexity of autism diagnostic assessment, and the challenges of delivering such services in different contexts. We developed systematic and focused search strategies, within the parameters of RRR, 22 although not as extensive as a full realist review. Expert Stakeholder Engagement enhanced the search strategy, enabled an iterative approach to identifying relevant literature and was invaluable when synthesising Open access the findings. Most papers had limited information on care pathway processes and contextual factors (which in realist terminology refers to any trigger that influences responses or resources), or more general subanalysis by demographic/other characteristics, so PTs could only develop based on what was reported; this highlights the need for further empirical work which the next phase of this study will provide. Primary and background searches were restricted to UK only, given UK NHS context, but secondary searches included papers from countries with somewhat similar healthcare systems (USA, Canada, Australia, New Zealand) to help elucidate findings, as recommended by our expert stakeholders. However, we acknowledge that we may have missed literature from similar health systems that could have informed our PTs. CONCLUSION In conclusion, this RRR identified important aspects that may contribute to more efficient, high quality and familyfriendly service delivery. We will test the PTs and how service design could be further enhanced in the subsequent stages of the wider RE-ASCeD study. Contributors VA/WZ: involved in all stages of the review and writing all drafts of this paper. VA acting as guarantor. PW: substantial contribution to writing protocol for the overall RE-ASCeD project, all stages of the review and commenting on all drafts of this paper. WF/IM: substantial contribution to writing protocol for the overall RE-ASCeD project, all stages of the review and commenting on drafts of this paper. JP: substantial contribution to writing protocol for the overall RE-ASCeD project, some stages of the review and commenting on a draft of this paper. VR: substantial contribution to writing protocol for the overall RE-ASCeD project, some stages of the review and commenting on a draft of this paper. AP: designing the search strategy and commenting on the methodology section of this paper. VA acts as a gaurantor. Funding This work was supported by NHS England and funds were derived from the child and young person mental health transformation funding stream, via the Learning Disability and Autism Directorate (direct quote from NHS England letter dated 28/8/2019). There is no grant number. Competing interests None declared. Patient consent for publication Not applicable. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement Data are available upon reasonable request. All data relevant to the study are included in the article or uploaded as supplementary information. Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. 
Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise. Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http:// creativecommons. org/ licenses/ by-nc/ 4. 0/.
Climate Change Adaptation Strategies by Indonesian Vegetable Farmers: Comparative Study of Organic and Conventional Farmers Some experts believe that organic agriculture is more adaptable compared to conventional agriculture. Accordingly, the purpose of this study is to assess organic and conventional farmers' perception and adaptation to climate change and analyse the factors that influence such decisions. The survey was conducted in Java, involving 112 organic farmers and 112 conventional farmers. The chi-square test was used to differentiate climate change perceptions and adaptation strategies applied by farmers. The factors that influenced the selection of the adaptation strategies were analysed using logistic regression. The results of analysis found that organic farmers have more precise perceptions of climate change than that of conventional farmers. Organic farmers more commonly implement mixed cropping, crop rotation, increasing organic manure, using shade, and changing irrigation techniques as their adaptation strategies, while conventional farmers more commonly prefer to adjust planting and harvesting dates and use traditional climate prediction called Pranata Mangsa. The selection of farmers' adaptation strategies is influenced by age, education, experience, distance to extension services, access to credit, information about climate and farmer groups, as well as farmers' perceptions of climate change. The results of the study recommend that policy makers increase farmers' adaptive capacity through investment in education and institutions to support climate change adaptation. Introduction Climate change has become a serious threat to agricultural sectors in Indonesia, including horticulture, because it can disrupt sustainability and production systems [1]. Climate change may decrease the quantity and quality of yields, increase pests and diseases outbreak, and result in horticultural crop failure, especially vegetables [2,3]. Vegetables often require water supplies to increase and improve the quality of yields. For example, soil water shortage in the beginning of shallot cultivation may decrease the production by 26% [4]. Appropriate selection of adaptation strategies is necessary for farmers to reduce the negative impacts of climate change [5][6][7][8][9]. One of the adaptation strategy alternatives is organic farming. Organic farming has higher endurance and adaptation during extreme conditions because organic agriculture uses a higher level of organic manure than conventional agriculture that can improve the water holding capacity of soil, thus decreasing the risk of yield loss; organic agriculture applies various traditional skills and knowledge to manipulate agroecosystems, thus decreasing dependence on external inputs; and organic farming involves plant diversification, rotation, landscapes, and agricultural activities that may increase farmers' resilience in facing climate change impacts [10,11]. Comparative studies of organic and conventional vegetable farmers on climate change and its impacts, the adaptation measures they take to overcome the situation, and the main factors that affect the selection of adaptation strategies are still limited in Indonesia. e purpose of this research is to assess organic and conventional farmers' perception and adaptation to climate change and analyse the factors that influence such decisions. 
This study is important because understanding how vegetable farmers perceive, think about, and act in response to climate change is essential for dealing with its negative impacts and for maintaining the productivity of vegetable farming. In particular, this study seeks to examine the following questions. This article proceeds as follows. We first present the materials and methods in Section 2. We interviewed 224 farmers and analysed the data using chi-square tests and logistic regression. Our results are presented in Section 3, in which we describe farmers' socioeconomic characteristics, vegetable farmers' perceptions of climate change and its impacts on vegetable farming, farmers' adaptation strategies in dealing with climate change, and the factors influencing vegetable farmers' selection of adaptation strategies. A discussion is presented in Section 4. Section 5 concludes this work.
Study Location. The study was conducted in Central Java Province and the Special Region of Yogyakarta Province, Indonesia. These two provinces are the locations for the development of smallholder organic farming. Then, four districts where organic vegetables were cultivated were deliberately selected as the research locations, i.e., Sleman (Special Region of Yogyakarta) and the Magelang, Boyolali, and Semarang regencies (Central Java) (Figure 1).
Survey. The survey was conducted in February-August 2018 by interviewing 112 randomly selected organic vegetable farmers and, as a comparison, 112 conventional farmers around the organic farmers. The survey used a structured questionnaire. The first section of the questionnaire collected information about socioeconomic characteristics, including age, education, experience, access to various institutions, and the background of vegetable farming activities. The next part investigated farmers' perceptions of climate change and its impacts on vegetable farming. Farmers' perceptions of climate change were assessed based on indicators of rainfall and temperature over the last 30 years. The last section covered the adaptation strategies or changes in vegetable farming practices carried out by farmers in response to perceived climate change.
Analysis. The data were analysed using chi-square tests and logistic regression. The chi-square test was used to determine whether there was a significant difference between organic and conventional farmers' perceptions of climate change and its impacts. Logistic regression was used to identify the factors influencing organic and conventional vegetable farmers' selection of adaptation strategies in dealing with climate change. The logistic regression model was considered appropriate because the dependent variable is binary (whether or not an adaptation strategy was used to minimize the negative impacts of climate change; yes = 1, otherwise 0). The coefficients of a logistic regression cannot be interpreted directly as the effect of a change in an explanatory variable X_n on the probability that farmers adopt a certain adaptation strategy (Pr(Y_ij = 1)). Interpretation and measurement of the results therefore use the marginal effect or the partial elasticity. The marginal effect describes the effect of a change in the explanatory variable X_n on the probability of farmers adopting an adaptation strategy. The partial elasticity measures the effect of a 1% increase in the explanatory variable X_n on the change in that probability.
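The exact formulas are not reproduced in the text; under a standard logit specification (with beta_n the coefficient on X_n and Lambda the logistic function, notation assumed here rather than taken from the paper), the quantities described above are:

```latex
% Standard logit quantities consistent with the description above.
\Pr(Y_{ij}=1 \mid X) = \Lambda(X\beta) = \frac{e^{X\beta}}{1 + e^{X\beta}}
% Marginal effect of a continuous explanatory variable X_n on the adoption probability:
\frac{\partial \Pr(Y_{ij}=1)}{\partial X_n} = \beta_n \,\Lambda(X\beta)\bigl(1-\Lambda(X\beta)\bigr)
% Partial elasticity: percentage change in the probability for a 1% increase in X_n:
\frac{\partial \Pr(Y_{ij}=1)}{\partial X_n}\cdot\frac{X_n}{\Pr(Y_{ij}=1)} = \beta_n X_n \bigl(1-\Lambda(X\beta)\bigr)
```

For a binary regressor such as access to credit or farmer group membership, the marginal effect is normally computed as the discrete change in the predicted probability between X_n = 1 and X_n = 0 rather than as the derivative above.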
A study conducted by [5] used the marginal effect when the explanatory variable X_n was binary and the partial elasticity when X_n was continuous. The explanatory variables expected to affect the selection of adaptation strategies (Table 1) were chosen based on the findings of previous research, such as age [29,35,36], education [25,29,[35][36][37][38][39], farming experience [36], distance to extension services [31], distance to input market [28,40], access to credit [41], access to climate information [42], farmer group membership [41,43], access to climate training [23,32], and perception of climate change [44].
Farmers' Socioeconomic Characteristics. In general, the socioeconomic conditions of organic vegetable farmers are relatively better than those of conventional farmers (Table 2). Organic vegetable farmers have better access to credit institutions and farmer group membership, as well as a shorter distance to input markets, than conventional vegetable farmers. In addition, there is a significant difference in the perception of decreased rainfall between organic and conventional vegetable farmers: more organic farmers believe that rainfall has tended to decline over the past 30 years. Nonetheless, there are a few exceptions regarding the socioeconomic conditions; for example, organic farmers are located farther from extension services than conventional farmers.
Vegetable Farmers' Perceptions of Climate Change and Its Impacts on Vegetable Farming. The results showed that most organic and conventional farmers share the same perception of temperature, i.e., that temperature has increased over the past 30 years. Rainfall, on the other hand, is perceived differently: most conventional farmers perceive that rainfall has increased, while organic farmers perceive that rainfall in the research locations has decreased over the past 30 years (Figure 2). Both organic and conventional farmers perceive negative impacts of climate change on vegetable farming (Table 3). In general, more conventional farmers than organic farmers perceive negative impacts of climate change. The three negative impacts perceived by most organic and conventional vegetable farmers are an increase in pests and diseases, a decrease in the quality of yields, and a decrease in production. In addition, an increase in pests and diseases is the impact most frequently reported by conventional farmers compared to organic farmers.
Factors Influencing Vegetable Farmers' Selection of Adaptation Strategies in Dealing with Climate Change. The marginal effects and partial elasticities resulting from the analysis of the factors that influence the strategies selected by organic vegetable farmers are given in Table 5, while those for conventional vegetable farmers are given in Table 6.
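As an illustration only (this is not the authors' code; the variable names and the simulated data below are hypothetical), the following Python sketch shows how marginal effects and partial elasticities of the kind reported in Tables 5 and 6 can be obtained from a fitted logit model:

```python
# Minimal sketch (not the authors' code): fit a binary logit for one adaptation strategy
# and report average marginal effects and elasticities of the kind shown in Tables 5-6.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 224  # 112 organic + 112 conventional respondents

df = pd.DataFrame({
    "age": rng.normal(45, 10, n),
    "education_years": rng.normal(9, 3, n),
    "experience_years": rng.normal(15, 7, n),
    "dist_extension_km": rng.exponential(3, n),
    "access_credit": rng.integers(0, 2, n),
    "farmer_group_member": rng.integers(0, 2, n),
})
# Hypothetical response: 1 if the farmer adopts, e.g., mixed cropping, 0 otherwise.
index = (-2.0 - 0.03 * df["age"] + 0.15 * df["education_years"]
         + 0.05 * df["experience_years"] - 0.10 * df["dist_extension_km"]
         + 0.60 * df["access_credit"] + 0.50 * df["farmer_group_member"])
df["adopts_strategy"] = rng.binomial(1, 1 / (1 + np.exp(-index)))

X = sm.add_constant(df.drop(columns="adopts_strategy"))
fit = sm.Logit(df["adopts_strategy"], X).fit(disp=False)

# Average marginal effects (dP/dX) and elasticities (% change in P for a 1% change in X).
print(fit.get_margeff(at="overall", method="dydx").summary())
print(fit.get_margeff(at="overall", method="eyex").summary())
```

Here method="dydx" gives marginal effects and method="eyex" gives elasticities; binary regressors such as access to credit are treated as continuous unless dummy=True is passed to get_margeff.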
The results showed that age, education, experience, distance to extension services, access to credit, climate information, and farmer group membership, as well as farmers' perceptions of temperature and rainfall changes over the past 30 years, influence the climate change adaptation strategies selected by both organic and conventional farmers. The effects of each of these variables on organic and conventional farmers' selection of adaptation strategies are as follows.
Age. The coefficient of farmers' age has a negative sign for the significant adaptation strategies, indicating that age negatively influences the probability of a farmer selecting those strategies. For example, the negative effect of age on the Pranata Mangsa (traditional planting-season calendar) strategy implemented by organic farmers indicates that an increase in age significantly decreases the probability that organic farmers use the Pranata Mangsa adaptation strategy. The partial elasticity shows that a 1% increase in a farmer's age decreases the probability of selecting the Pranata Mangsa strategy by 0.65%. Among conventional farmers, a 1% increase in age lowers the probability of implementing mixed cropping, growing non-water-intensive crops, conducting crop rotation, and adjusting planting and harvesting dates by 0.68%, 0.21%, 0.39%, and 0.49%, respectively.
Education. Farmers' education has either positive or negative signs across the adaptation strategies, meaning that an increase in the length of education may increase or decrease the probability of selecting a particular strategy. A positive effect of education can be seen for increasing the use of organic manure and using shade: a 1% increase in the length of education among organic farmers increases the probability of increasing organic manure use and of using shade by 0.55% and 0.68%, respectively, while the same increase among conventional farmers increases the probability of using superior varieties by 0.18%. On the other hand, a negative effect of education among organic farmers can be seen for growing non-water-intensive crops (0.03%), adjusting planting and harvesting dates (0.69%), and using Pranata Mangsa (0.87%), while among conventional farmers it can be seen for adjusting planting and harvesting dates (0.46%), increasing the dose of organic manure (0.31%), and using Pranata Mangsa (0.87%).
Farming Experience. Farmers' experience in vegetable farming is positively related to several adaptation strategies implemented by organic and conventional farmers. For example, there is a positive relationship between farming experience and the Pranata Mangsa strategy: a 1% increase in organic farmers' experience increases the probability of selecting Pranata Mangsa by 0.03%. Among conventional farmers, a 1% increase in farming experience increases the probability of implementing mixed cropping (0.28%), using superior varieties (0.32%), conducting crop rotation (0.23%), and using Pranata Mangsa (0.52%).
Distance to Extension Services.
Distance to extension services has a negative effect on several adaptation strategies implemented by organic and conventional farmers. Among organic farmers, a 1% shorter distance to extension services increases the probability of using superior varieties (0.08%) and adjusting planting and harvesting dates (0.22%). Among conventional farmers, a 1% shorter distance to extension services increases the probability of implementing mixed cropping (0.29%), conducting crop rotation (0.19%), and using shade (0.62%).
Distance to Input Markets. Distance to input markets is negatively related to several adaptation strategies selected by organic and conventional farmers. Farther distance to input markets increases the probability of implementing mixed cropping (0.22%), growing non-water-intensive crops (0.48%), conducting crop rotation (0.22%), adjusting planting and harvesting dates (0.31%), using shade (0.56%), and using Pranata Mangsa (0.39%). Among conventional farmers, a 1% shorter distance to input markets also increases the probability of adopting several adaptation strategies.
Access to Credit. Access to agricultural credit has a positive effect on increasing the use of organic manure by organic farmers and on the use of mulch by conventional farmers. Farmers with access to credit have a higher probability of implementing these strategies, by 25.3% for increasing organic manure use (organic farmers) and by 11% for using mulch (conventional farmers), compared with farmers without access to credit.
Farmer Group Membership. Membership in farmer groups has a positive and significant effect on several adaptation strategies. Organic farmers' membership in farmer groups increases the probability of adopting particular strategies such as adjusting planting dates (38.7%) and using mulch (16.8%), while conventional farmers' membership increases the probability of adopting mixed cropping (16.2%), using superior varieties (18.9%), and crop rotation (11.8%).
Access to Climate Training. Farmers' access to climate training does not significantly affect the adaptation strategies selected by organic or conventional vegetable farmers.
Perceptions of Temperature Changes over the Last 30 Years. Farmers who perceive that the air temperature has increased over the last 30 years have an increased probability of implementing several adaptation strategies, such as growing non-water-intensive crops (16.9%), adjusting planting and harvesting dates (15.9%), and increasing the use of organic manure (17.7%). Among conventional farmers, this perception increases the probability of adopting several adaptation strategies, such as mixed cropping (12.1%), adjusting planting and harvesting dates (20.7%), increasing the use of organic manure (23.3%), and changing irrigation techniques (11%).
Perceptions of Rainfall Changes over the Last 30 Years. Farmers' perception that rainfall is decreasing has a positive effect on several adaptation strategies selected by farmers.
Organic farmers who perceive that rainfall tends to decrease have a higher probability of implementing particular adaptation strategies, namely increasing the use of organic manure and using Pranata Mangsa, by 15.2% and 17.2%, respectively. Among conventional farmers, this perception increases the probability of adopting several adaptation strategies, such as using superior varieties (22%), increasing the use of organic manure (25.2%), changing irrigation techniques (16.8%), and using Pranata Mangsa (31.6%).
Discussion Our first analysis confirms that organic farmers have more accurate perceptions of climate change than conventional farmers. Organic vegetable farmers perceive that temperature has increased and rainfall has declined over the past 30 years. This perception is in line with the actual data from the Central Bureau of Statistics, according to which the average temperature increased (Figure 3) and rainfall decreased (Figure 4) over the past 30 years. This finding is in line with previous studies reporting that the majority of farmers perceive the occurrence of climate change, marked by increasingly hot temperatures and declining rainfall [16,20,[45][46][47][48][49]. In addition, farmers who have experienced crop failure due to climate change, such as drought or flooding, are more aware of climate change [50]. Organic and conventional vegetable farmers perceive that climate change has a negative impact on vegetable farming. Farmers' perceptions of the negative impacts of climate change on farming support the findings of previous studies [1,7,26,[51][52][53]. The three most significant impacts perceived by organic and conventional vegetable farmers are a decrease in the quality of yields, a decrease in production, and an increase in pests. The organic and conventional vegetable farmers interviewed mentioned that, in addition to rainfall and temperature changes, extreme weather has become more frequent in the study locations. Heavy rain damages vegetables, lowering the quality of production, and prolonged drought in several areas has caused rainfed farming systems to experience crop failures.
Our second analysis confirms that the adaptation strategies implemented by organic vegetable farmers are more varied than those of conventional vegetable farmers. Organic vegetable farmers apply more adaptation strategies than conventional farmers [48]. Farmers who are more aware of climate change implement more adaptation strategies to mitigate its effects [54][55][56]. The adaptation strategies most commonly implemented by organic farmers, compared to conventional farmers, are mixed cropping, crop rotation, shade, increasing the dose of organic manure, and changing irrigation techniques. Most organic farmers are bound by sales contracts with certain parties, so they implement various strategies to maintain the continuity of vegetable supply. Mixed cropping, crop rotation, shade, increasing the use of organic manure, and changing irrigation techniques are the strategies that farmers believe can support vegetable production throughout the year. During extreme rainfall or temperatures, shade protects vegetables and reduces damage caused by extreme temperature and rainfall. Organic manure supports the soil by reducing the loss of water. A change in irrigation techniques takes the form of ponds used as water storage during drought.
Mixed cropping and crop rotation maintain the availability of various types of vegetables and reduce the risks caused by climate change. Several previous studies show that the adaptation strategies implemented by farmers in dealing with climate change include mixed cropping and crop rotation [57], adjusting planting dates [18,29], increasing the use of organic manure [38], using shade [27], and changing irrigation techniques [57]. The adaptation strategies more widely adopted by conventional vegetable farmers are growing non-water-intensive crops, adjusting planting and harvesting dates, and using Pranata Mangsa. Pranata Mangsa is still widely used by conventional farmers. The conventional farmers interviewed stated that they grow vegetables by adjusting to the climate conditions, and they still use Pranata Mangsa to determine the planting dates and the most suitable commodities to grow at a given time. The results of this study support previous findings that farmers adapt by adjusting the growing seasons [29] and using Pranata Mangsa [57]. The adaptation strategies implemented by farmers aim to maintain agricultural production [58] and to use climate change as a profitable opportunity [59]. The strategies farmers implement are influenced by climatic conditions, the type of farming, and other conditions such as political, economic, and institutional factors [29,59].
Our third analysis confirms that age, education, experience, distance to extension services, access to credit, climate information, and farmer groups, as well as farmers' perceptions of temperature and rainfall changes over the past 30 years, influence the climate change adaptation strategies selected by organic and conventional farmers. Age, education, and experience influence the selection of adaptation strategies. Age negatively affects the adaptation strategies selected by organic and conventional vegetable farmers. These results confirm the findings of [60] that young farmers more often adopt climate change adaptation strategies because they usually pay more attention to climate change. Education may have either a positive or a negative effect on organic and conventional vegetable farmers' selection of climate change adaptation strategies. Some previous studies showed a significant positive relationship between farmers' education and climate change adaptation [61], but others showed a negative relationship [62]. Organic farmers have more years of education, which makes them better able to absorb technological innovations, including innovations for adapting to climate change [48,63]. Another factor influencing farmers' selection of adaptation strategies is experience. Organic and conventional vegetable farmers with more farming experience tend to be more aware of past climate events and more skilled at assessing how to adjust their farming to extreme weather. A positive relationship between adaptation and experience has also been shown by previous studies [21,36,64]. Farmers' access to extension services and input markets also influences their adaptation strategies. Organic and conventional vegetable farmers' distance to extension services and input markets negatively affects their selection of climate change adaptation strategies. The results of this study confirm those of previous studies in which distance to input markets [28] and distance to extension services [31] negatively affect farmers' decision to adapt.
The negative effect of distance to input markets on adaptation strategies is related to limited access to input markets in terms of purchasing inputs on time [62]. Extension services and input markets should be easily accessible to farmers. Farmers who obtain information about climate conditions from extension services gain knowledge of how to reduce climate change impacts and of effective and efficient adaptation strategies [62], whereas when input markets are far from farmers' land it is difficult for them to access crop production inputs [31]. Farmers' access to credit, climate information, and farmer groups has a positive effect on the adaptation strategies selected by organic and conventional farmers. Access to credit provides farmers with additional funding sources for implementing adaptation strategies; this finding is in line with [22], who found that access to credit can increase farmers' opportunity to adapt. In addition, farmers who obtain climate information have knowledge about the climate, its impacts, and climate change adaptation in vegetable farming. Farmers who often receive climate information are more adaptable to climate change [15]. Farmer groups serve as a forum for farmers to search for information and technology [19]. Information about mitigation and adaptation methods can be more effective if obtained from neighbours, peer groups, and other members of farmer groups [32]. Farmer group meetings at the research locations are held monthly, allowing farmers to obtain information regularly; farmers can obtain knowledge, information, and climate change adaptation technologies from these meetings. Farmers' perceptions of temperature and rainfall play a vital role in determining the climate change adaptation strategies they implement. Perceptions also determine the long-term measures that farmers will take in dealing with climate change. Organic farmers who perceive that the temperature is increasing adapt by using organic manure. When organic farmers perceive that rainfall is decreasing, they increase the use of organic manure and use Pranata Mangsa. Conventional farmers who perceive that the temperature is increasing make efforts to change their irrigation techniques, for example, by constructing reservoirs or drilled wells as the main source of water for farming. If rainfall is decreasing, conventional farmers use drought- and flood-tolerant superior varieties, adjust planting and harvesting dates, increase the use of organic manure to maintain the soil's capacity to hold water so that it does not dry out easily, change irrigation techniques by constructing reservoirs, and use Pranata Mangsa to determine both the suitable commodities and the planting times.
Conclusions and Recommendations This research examined organic and conventional farmers' perceptions of and adaptation to climate change and the factors that influence those decisions. Organic farmers' perception of temperature and rainfall over the past 30 years is in accordance with the climate data, indicating that organic farmers have more accurate perceptions of climate change than conventional farmers. Organic vegetable farmers perceive that climate change greatly affects vegetable farming. The three impacts experienced by most farmers are a reduction in the quality of crops, an increase in pests, and crop failure.
To reduce the impacts of climate change on vegetable farming, both organic and conventional farmers implement various adaptation strategies. The strategies include mixed cropping, using superior varieties, growing non-water-intensive crops, crop rotation, adjusting planting and harvest dates, increasing the use of organic manure, using shade, using mulch, changing irrigation techniques, and using Pranata Mangsa. The adaptation strategies implemented by organic farmers are more varied than those of conventional farmers. Organic farmers implement adaptation strategies to minimize the negative impacts of climate change on their farming and as a way to maintain a continuous supply of vegetables. In addition, farmers may select different strategies depending on the resources they have. Policy makers and stakeholders should contribute to increasing farmers' adaptive capacity in dealing with climate change by improving farmers' access to climate information, input markets, credit, and farmer groups. In addition, policy makers and stakeholders should provide more extension and information about the climate and climate change adaptation strategies, particularly in relation to vegetable farming. This study assessed the perceptions and adaptations of organic and conventional farmers to climate change and analysed the factors that influence those decisions in Indonesia using a logistic regression model. Data were collected through self-administered questionnaires, but some variables not included in this study, such as motivational factors, may also influence farmers' decisions to adapt. In addition, local wisdom regarding the planting-season calendar in each research area should be considered in future research, given that such variables significantly affect farmers' adaptation to climate change. The researchers also recommend that future work consider other types of logistic regression.
Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest The authors declare that they have no conflicts of interest.
A Note on Approximating Weighted Independence on Intersection Graphs of Paths on a Grid A graph $G$ is called $B_k$-VPG, for some constant $k\geq 0$, if it has a string representation on an axis-parallel grid such that each vertex is a path with at most $k$ bends and two vertices are adjacent in $G$ if and only if the corresponding paths intersect each other. The part of a path that is between two consecutive bends is called a segment of the path. In this paper, we study the Maximum-Weighted Independent Set problem on $B_k$-VPG graphs. The problem is known to be NP-complete on $B_1$-VPG graphs, even when the two segments of every path have unit length [12], and $O(\log n)$-approximation algorithms are known on $B_k$-VPG graphs, for $k\leq 2$ [3, 14]. In this paper, we give a $(ck+c+1)$-approximation algorithm for the problem on $B_k$-VPG graphs for any $k\geq 0$, where $c>0$ is the length of the longest segment among all segments of paths in the graph. Notice that $c$ is not required to be a constant; for instance, when $c\in O(\log \log n)$, we get an $O(\log \log n)$-approximation or we get an $O(1)$-approximation when $c$ is a constant. To our knowledge, this is the first $o(\log n)$-approximation algorithm for a non-trivial subclass of $B_k$-VPG graphs. Introduction In this paper, we study the Maximum-Weighted Independent Set problem on B k -VPG graphs. A graph is said to have a Vertex intersection of Paths in a Grid (VPG representation, for short), if its vertices can be represented as simple paths on an axis-parallel grid such that two vertices are adjacent if and only if the corresponding paths share at least one grid node. Although VPG graphs were considered a while ago when studying string graphs [15], they were formally investigated by Asinowski et al. [1]. Since then much of the work has focused on studying subclasses of VPG graphs; in particular, restricting the type of paths that are allowed. A turn of a path at a grid node is called a bend and a VPG graph is called B k -VPG graph if every path has at most k bends. Definition 1.1 (B k -VPG Graph). A graph G = (V, E) is called a B k -VPG graph, if every vertex u of G can be represented as a path P u on an axis-parallel grid G such that (i) P u has at most k bends, and (ii) two paths P u and P v intersect each other at a grid node if and only if (u, v) ∈ E. We remark that by intersecting each other in Definition 1.1, we include the case where two paths only touch each other. Figure 1 shows a graph with its B k -VPG representations for k = 1. Let G = (V, E) be an undirected graph. A set S ⊆ V is an independent set if no two vertices in S are adjacent. In the Maximum-Weighted Independent Set problem, each vertex is assigned a weight and the objective is to compute an independent set of G whose weight is maximum over all independent sets in G. The weight of an independent set is defined as the sum of the weights of vertices in the independent set. The Maximum-Weighted Independent Set is well known to be NP-hard, and n 1− -hard to approximate for any > 0 unless NP=ZPP [11]. Related work. Much of the early work is focused on deciding the existence or recognition of VPG representations of graphs [8,10,4,6]. Recently, however, there are works done on designing approximation algorithms for optimization problems on B k -VPG graphs. 
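To make Definition 1.1 concrete before surveying prior work, here is a small self-contained sketch (not taken from the paper; the representation below is hypothetical) that expands axis-parallel paths, given by their endpoints and corners, into grid nodes and builds the corresponding intersection graph:

```python
# Minimal sketch of Definition 1.1 (not from the paper): build the intersection graph
# of a given VPG representation. Each path is stored as the set of grid nodes it covers;
# two vertices are adjacent iff their paths share at least one grid node.
from itertools import combinations

def path_cells(waypoints):
    """Expand an axis-parallel path, given by its corner points, into its grid nodes."""
    cells = set()
    for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:]):
        assert x1 == x2 or y1 == y2, "segments must be axis-parallel"
        if x1 == x2:
            cells |= {(x1, y) for y in range(min(y1, y2), max(y1, y2) + 1)}
        else:
            cells |= {(x, y1) for x in range(min(x1, x2), max(x1, x2) + 1)}
    return cells

def vpg_graph(paths):
    """paths: dict vertex -> list of corner points (k bends => k + 1 segments)."""
    cells = {v: path_cells(p) for v, p in paths.items()}
    return {(u, v) for u, v in combinations(paths, 2) if cells[u] & cells[v]}

# Toy B1-VPG representation: three paths with at most one bend each (hypothetical).
paths = {
    "a": [(0, 0), (0, 3), (2, 3)],   # one bend at (0, 3)
    "b": [(0, 3), (4, 3)],           # zero bends
    "c": [(3, 0), (3, 5)],           # zero bends
}
print(sorted(vpg_graph(paths)))      # [('a', 'b'), ('b', 'c')]
```

Since every path in the example has at most one bend (i.e., at most two segments), the toy representation is a B1-VPG representation in the sense of Definition 1.1.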
For instance, Minimum Dominating Set problem is APX-hard on one-string B 1 -VPG graphs 1 due to the facts that every circle graph is a one-string B 1 -VPG graph [1] and that Minimum Dominating Set is APX-hard on circle graphs [7]. Moreover, Mehrabi [13] studied the Minimum Dominating Set problem on B 1 -VPG graphs and gave an O(1)-approximation algorithm for this problem (see also [14]). For the Maximum-Weighted Independent Set problem, the decision version was shown to be NP-complete on B 1 -VPG graphs by Lahiri et al. [12] who also gave an O((log n) 2 )-approximation algorithm for this problem; notice that the best known algorithm for Maximum-Weighted Independent Set on arbitrary string graphs has an approximation factor n , for some > 0 [9]. Moreover, O(log n)-approximation algorithms exist for the problem on B k -VPG graphs, for k ≤ 2 [3,14]. It has been asked several times [12,3,14] whether there exist a constant-factor (or even an o(log n)factor) approximation algorithm for the Maximum-Weighted Independent Set problem on B k -VPG graphs. Our result. As the first step towards obtaining o(log n)-approximations, we consider the problem on B k -VPG graphs for which the longest segment among all segments of paths in the graph has length c, for some c > 0. We give a polynomial-time (ck + c + 1)-approximation algorithm for the problem on such graphs (Section 3). Notice that c is not required to be a constant; for instance, when c ∈ O(log log n), we get an O(log log n)-approximation or we get an O(1)-approximation when c is a constant. To our knowledge, this is the first o(log n)-approximation algorithm for a non-trivial subclass of B k -VPG graphs. Our algorithm is based on a linear programming formulation of the problem and then applying the local ratio technique [2] to the weight function with respect to a feasible solution of the linear program. The algorithm uses a rounding lemma, which exploits the properties of B k -VPG graphs. This will then be used to decompose the weight function for subsequent recursive steps of the algorithm. Preliminaries We denote the string representation of a B k -VPG graph G = (V, E) by P , G , where P is the collection of paths corresponding to the vertices of G and G is the underlying grid. We might sometimes violate the wording and say path(s) in G to actually refer to the vertices in G corresponding to the paths in P . Since the recognition problem is NP-hard on such graphs [16,5], we assume throughout this paper that a string representation of a B k -VPG graph is always given as part of the input (in addition to G). We denote the path in P corresponding to a vertex u ∈ V by P u . We say that two paths P u and P v are adjacent if the vertices u and v are adjacent in G. Let P be a path in P . We call the common endpoint of two consecutive segments of P a corner of P and denote it by corner(P ). Let N [P ] denote the set of paths adjacent to P ; we assume that P ∈ N [P ]. Moreover, for a set S ⊆P of paths, define N [S] := ∪ P ∈S N [P ]. Finally, we denote the x-and y-coordinates of a point p in the plane by x(p) and y(p), respectively. Let w ∈ R n be a weight vector, and let F be a set of feasibility constraints on vectors x ∈ R n . A vector x ∈ R n is a feasible solution to a given problem (F, p) if it satisfies all of the constraints in F . The value of a feasible solution x is the inner product w · x. A feasible solution is optimal for a maximization (resp., minimization) problem if its value is maximal (resp., minimal) among all feasible solutions. 
A feasible solution x for a maximization (resp., minimization) problem is an α-approximation solution, or simply an α-approximation, if w·x ≥ α·(w·x*) (resp., if w·x ≤ α·(w·x*)), where x* is an optimal solution. An algorithm is said to have an approximation factor of α if it always computes α-approximation solutions. Our (ck + c + 1)-approximation algorithm for the Maximum-Weighted Independent Set problem on B_k-VPG graphs is based on the local ratio technique, which was first developed by Bar-Yehuda and Even [2]. Let us formally state the local ratio theorem. Theorem 2.1 (Local Ratio [2]). Let F be a set of constraints, and let w, w1 and w2 be weight vectors such that w = w1 + w2. If x is an α-approximation solution with respect to (F, w1) and with respect to (F, w2), then x is an α-approximation solution with respect to (F, w). We now describe how the local ratio technique is usually used for solving a problem. Initially, the solution set is empty. The idea is to find a decomposition of the weight vector w into w1 and w2 such that w1 is an "easy" weight function in some sense (we will discuss this in more detail later). The local ratio algorithm then continues recursively on the instance (F, w2). We assume inductively that the solution returned recursively for the instance (F, w2) is a good approximation and then prove that it is also a good approximation for (F, w). This requires proving that the solution returned recursively for the instance (F, w2) is also a good approximation for the instance (F, w1); this step is usually the main part of the proof of the approximation factor. Approximation Algorithm In this section, we give a (ck + c + 1)-approximation algorithm for the Maximum-Weighted Independent Set problem on B_k-VPG graphs, for any k ≥ 0, where c > 0 is the length of the longest segment among all segments of paths in the graph; notice that c is not required to be a constant. The algorithm is based on rounding a fractional solution derived from a linear programming relaxation of the problem. The standard linear programming relaxation of the Maximum-Weighted Independent Set problem is as follows. For each path P ∈ P, we define an indicator variable x(P): the path P is in the independent set if and only if x(P) = 1. The integer program assigns these binary values to the paths with the constraint that, for each clique Q, the sum of the values assigned to all paths in Q is at most 1. By relaxing the integrality constraint, we get the following linear program, in which w(P) denotes the weight of P and x denotes the vector of variables:

maximize Σ_{P∈P} w(P)·x(P)
subject to Σ_{P∈Q} x(P) ≤ 1 for every clique Q in G,    (1)
x(P) ≥ 0 for every P ∈ P.

Notice that any independent set in G gives a feasible integral solution to the linear program. Therefore, the value of an optimal (not necessarily integral) solution to the linear program is an upper bound on the value of an optimal integral solution. The linear program (1) might not have a polynomial number of constraints in general; however, we show that a polynomial number of constraints in (1) can be guaranteed for B_k-VPG graphs. We first need some definitions. For a path P ∈ P, let grid(P) be the set of grid points on which P lies; that is, a grid point t is in grid(P) if and only if P contains t. Moreover, we define grid(G) := ∪_{P∈P} grid(P). Consider any clique Q in G. Observe that the paths in Q must have at least one grid point in common; that is, ∩_{P∈Q} grid(P) ≠ ∅. This means that every clique in G is defined by some grid point. For a grid point t, let paths(t) denote the set of all paths in G that contain t.
Then, we can reformulate (1) as follows:

maximize Σ_{P∈P} w(P)·x(P)
subject to Σ_{P∈paths(t)} x(P) ≤ 1 for every grid point t ∈ grid(G),    (2)
x(P) ≥ 0 for every P ∈ P.

Since each path P has at most k + 1 segments and each segment has length at most c, we have |grid(P)| ≤ ck + c + 1 and so |grid(G)| ≤ (ck + c + 1)n. As such, the number of constraints of (2) is polynomial in n. Therefore, an optimal solution to (2) can be computed in polynomial time. The key to our rounding algorithm is the following lemma. Lemma 3.1. Let x be a feasible solution to (2). Then, for every path P ∈ P, Σ_{P′∈N[P]} x(P′) ≤ ck + c + 1. Proof. Take any path P in P. Consider the neighbours of P in P and partition them into at most ck + c + 1 sets based on the grid point they share with P; if a path P′ ∈ N[P] shares more than one grid point with P, then assign it to (only) one of the partition sets arbitrarily. Consider a grid point t ∈ grid(P). We know by (2) that Σ_{P′∈paths(t)} x(P′) ≤ 1. Therefore, Σ_{P′∈N[P]} x(P′) ≤ Σ_{t∈grid(P)} Σ_{P′∈paths(t)} x(P′) ≤ Σ_{t∈grid(P)} 1 ≤ ck + c + 1. By Lemma 3.1, the rounding algorithm applies a local ratio decomposition of the weight vector w with respect to x; see Algorithm 1.

Algorithm 1 (local ratio rounding).
1: Remove every path whose weight is non-positive; if no path remains, return the empty set.
2: Let P be a remaining path, and decompose w = w1 + w2, where w1(P′) := w(P) for every P′ ∈ N[P] and w1(P′) := 0 otherwise.
3: Solve the problem recursively using w2 as the weight vector. Let S′ be the independent set returned.
4: If there exists at least one path in S′ that is adjacent to P, then return S := S′; otherwise, return S := S′ ∪ {P}.

Clearly, the set S returned by the algorithm is an independent set. The following lemma establishes the approximation factor of the algorithm. Lemma 3.2. Let x be a feasible solution to (2), let S be the independent set returned by Algorithm 1, and let z be the indicator vector of S. Then w·z ≥ (1/(ck + c + 1))·(w·x). Proof. We prove the lemma by induction on the number of recursive calls. In the base case, the set returned by the algorithm satisfies the lemma because no vertices remain. Moreover, the first step, which removes all vertices with non-positive weight, cannot decrease the right-hand side of the above inequality. We next prove the induction step. Suppose that z and z′ are the indicator vectors of S and S′, respectively. By induction, w2·z′ ≥ (1/(ck + c + 1))·(w2·x). Since w2(P) = 0, we also have w2·z ≥ (1/(ck + c + 1))·(w2·x). From the last step of the algorithm, we know that at least one path from N[P] is in S (recall that we assumed P ∈ N[P]), and so we have w1·z ≥ w(P). Moreover, by Lemma 3.1, w1·x = w(P)·Σ_{P′∈N[P]} x(P′) ≤ (ck + c + 1)·w(P). Therefore, w1·z ≥ w(P) ≥ (1/(ck + c + 1))·(w1·x), and the claim follows from the local ratio theorem (Theorem 2.1). This completes the proof of the lemma. Since there exists at least one path P for which w2(P) = 0 in each recursive step, Algorithm 1 terminates in polynomial time. Therefore, by Lemma 3.2, we have the main result of this paper: there is a polynomial-time (ck + c + 1)-approximation algorithm for the Maximum-Weighted Independent Set problem on B_k-VPG graphs, where c > 0 is the length of the longest segment among all segments of paths in the graph. Conclusion In this paper, we considered the Maximum-Weighted Independent Set problem on B_k-VPG graphs. We gave a polynomial-time (ck + c + 1)-approximation algorithm on B_k-VPG graphs, for any k ≥ 0, where c denotes the length of the longest segment of all paths in the graph. The algorithm relies crucially on the fact that the longest segment in the graph has length c. Designing an o(log n)-approximation of a maximum-weighted independent set on any B_k-VPG graph (even for k = 1) remains open.
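As a rough illustration of the local ratio recursion behind Algorithm 1, the following Python sketch applies the weight decomposition described above to an arbitrary graph given by closed neighborhoods. It is a simplified rendering and not the paper's algorithm verbatim: the rule for choosing P is reduced here to "any remaining vertex with positive weight", and the fractional LP solution, which enters only the analysis, is omitted; all names are illustrative.

```python
def local_ratio_independent_set(vertices, weights, neighbors):
    """
    Local-ratio rounding (sketch). `neighbors[v]` is the closed
    neighborhood N[v], with v itself included. Returns an independent set.
    """
    # Step 1: drop vertices with non-positive weight.
    w = {v: weights[v] for v in vertices if weights[v] > 0}
    if not w:
        return set()
    # Step 2: pick any remaining vertex P and split w = w1 + w2,
    # where w1 equals w(P) on N[P] and 0 elsewhere (w2 is the remainder).
    P = next(iter(w))
    w2 = {v: (wv - w[P] if v in neighbors[P] else wv) for v, wv in w.items()}
    # Step 3: recurse on the residual weights w2.
    S = local_ratio_independent_set(set(w), w2, neighbors)
    # Step 4: add P unless one of its neighbors was already chosen.
    if not (S & (neighbors[P] - {P})):
        S = S | {P}
    return S

# Toy instance: a path graph a - b - c described by closed neighborhoods.
nbrs = {"a": {"a", "b"}, "b": {"a", "b", "c"}, "c": {"b", "c"}}
print(local_ratio_independent_set({"a", "b", "c"}, {"a": 2, "b": 3, "c": 2}, nbrs))
```

On the toy instance the recursion returns an independent set such as {"a", "c"}; the (ck + c + 1) guarantee of the paper comes from bounding, for each chosen P, the fractional weight that its closed neighborhood can carry (Lemma 3.1).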
Immunization of Mice with Single PspA Fragments Induces Antibodies Capable of Mediating Complement Deposition on Different Pneumococcal Strains and Cross-Protection ABSTRACT PspA is an important candidate for a vaccine with serotype-independent immunity against pneumococcal infections. Based on sequence relatedness, PspA has been classified into three families comprising six clades. We have previously addressed the cross-reactivity of antibodies against PspA fragments containing the N-terminal and proline-rich regions of PspA from clades 1 to 5 (PspA1, PspA2, PspA3, PspA4, and PspA5) by Western blot analysis and reported that anti-PspA4 and anti-PspA5 were able to recognize pneumococci expressing PspA proteins from all of the clades analyzed. We have now analyzed the functional capacity of these antibodies to bind and to mediate complement deposition on intact bacteria in vitro. Our results show that both PspA4 and PspA5 elicit antibodies that are able to bind and to mediate complement deposition efficiently on pneumococcal strains bearing PspA proteins from clades 1 to 5. Moreover, mice immunized with PspA4 and PspA5 were protected against an intranasal lethal challenge with strains expressing PspA proteins from the two major families. PspA4 and PspA5 are thus able to induce antibodies with a high degree of cross-reactivity in vitro, which is reflected in cross-protection of mice. We have also analyzed the contribution of the nonproline (NonPro) block within the conserved proline-rich region to the reactivity of anti-PspA antibodies, and the results indicate that N-terminal α-helical region, the blocks of proline repeats, and the NonPro region can influence the degree of cross-reactivity of antibodies to PspA. Streptococcus pneumoniae is an important human pathogen, being responsible for millions of deaths worldwide every year. The pneumococcal disease burden could be greatly reduced by the use of the current seven-valent conjugate vaccine, but the high cost and restricted serotype coverage limit its widespread use, especially in developing countries. New-generation vaccines containing up to 13 serotypes are expected to increase vaccine coverage, but the serotype replacement in colonization and disease by nonvaccine serotypes observed with the use of the seven-valent conjugate vaccine (8)(9)11) further emphasizes the importance of the development of alternative vaccines. Protein antigens such as PspA (pneumococcal surface protein A) could be used to induce serotype-independent immunity at a low cost (24). PspA is present in all isolated pneumococcal strains and was shown to be an important virulence factor, interfering with complement deposition (19,21,25), killing by apolactoferrin (23), and immune adherence to erythrocytes (12). It has been shown to induce protection in mice in carriage, pneumonia, and fatal systemic models (2,4,16). Mature PspA is composed of a mosaic structure with four domains: an ␣-helical N-termi-nal domain, a proline-rich region, a choline-binding domain, and a short hydrophobic tail (10,(27)(28). PspA shows variability in the surface-exposed N-terminal region, and a classification was proposed based on sequence relatedness of the Cterminal portion of the ␣-helix, the clade-defining region. It has been classified into three families encompassing six clades. Family 1 (Fam1) is composed of clades 1 and 2, Fam2 includes clades 3, 4, and 5, and Fam3, which is rarely isolated, comprises clade 6 (10). 
Since the degree of similarity seems to be reflected in cross-reactivity, it has been proposed that a broadcoverage vaccine should contain at least one fragment from each of the two major families. Immunization of healthy adults with a single recombinant fragment of PspA in a phase I clinical trial showed the induction of cross-reactive antibodies (14) that were able to induce passive protection in mice challenged intravenously (3). The natural exposure of adults to several pneumococcal strains might be responsible for the cross-reactivity detected, with the immunization with PspA acting as a booster dose. Because of the diversity observed in PspA, it is extremely important to analyze whether each fragment selected to compose a vaccine is indeed able to induce cross-protection. We have previously addressed the degree of cross-reactivity of antibodies to recombinant fragments including the N-terminal and proline-rich regions of PspA proteins from clades 1 to 5 (PspA1, PspA2, PspA3, PspA4, and PspA5) by Western blot analysis of 35 strains isolated in Brazil. As expected, we have observed higher cross-reactivity within the same clade. Within Fam1, anti-PspA1 serum also showed cross-reaction with PspA2-expressing strains, while anti-PspA2 showed reaction restricted to the same clade. Within Fam2, anti-PspA3 serum also showed reactivity restricted to PspA3-expressing strains, while anti-PspA5 and, more strikingly, anti-PspA4 sera showed a broad recognition capacity, being able to react with strains expressing PspA proteins from clades 1 to 5 (7). The ability of sera to recognize a pneumococcal strain by Western blot analysis does not necessarily correlate with their capacity to induce protection in vivo though. In fact, the levels of antibodies to PspA detected by enzyme-linked immunosorbent assay (ELISA) or through surface staining of the bacteria failed to provide a useful correlate of protection (22). Based on the strong evidence supporting the importance of complement in protection against pneumococcal disease, it was proposed that in vitro complement deposition mediated by antibody may be used as a surrogate assay for the prediction of protection induced by surface antigens of pneumococci (15). This work aimed at further characterizing antibodies against the PspA1, PspA2, PspA3, PspA4, and PspA5 N-terminal fragments in terms of their capacity to mediate C3 deposition on the surface of pneumococci expressing PspA proteins from different clades. Moreover, protection of mice against a lethal intranasal challenge with strains expressing PspA from Fam1 or Fam2 was also analyzed. The basis for the broad reactivity observed in the anti-PspA4 serum by Western blot analysis was also further investigated. Of the five PspA fragments analyzed, PspA4 was the only one containing a nonproline (NonPro) block within the proline-rich region. Not all native PspA proteins include this region: of 24 PspA sequences analyzed by Hollingshead and collaborators (10), 14 were shown to have this NonPro block. We have thus examined whether this region would be responsible for increased cross-reactivity. MATERIALS AND METHODS Construction of PspA fragments. All cloning procedures were performed with Escherichia coli DH5␣ grown in Luria-Bertani medium supplemented with ampicillin (100 g/ml). The plasmids encoding the N-terminal regions of PspA proteins from clades 1 to 5 (PspA1, PspA2, PspA3, PspA4, and PspA5) were previously described (7). 
Two new plasmids encoding PspA4 fragments were constructed: while the original construct (PspA4) encoded the complete N-terminal region plus the proline-rich region (containing a NonPro block within this region), PspA4AB contains only the N-terminal α-helical region without the entire proline-rich region, and PspA4Pro contains the N-terminal α-helical region plus the first block of prolines only, lacking both the NonPro and second proline blocks (Fig. 1). Both fragments were amplified by PCR from the original PspA4 construct using primers 5′ TAGCTCGAGACCATGGTAAGAGCAGAAGAAGCC 3′ (forward) and 5′ GGTACCTTAAGTCTCTTCTTCATCTCCATC 3′ (PspA4AB-reverse) or 5′ GGTACCTTATGGTTTTGGTGCTGGAGCT 3′ (PspA4Pro-reverse). The gene products were cloned into the pGEMT-easy vector (Promega), and the sequences were confirmed by DNA sequencing. The pGEMT-easy-pspA constructs were digested with XhoI and KpnI, and the resulting fragments were subcloned into the pAE 6×His vector (18), generating pAE-pspA4AB and pAE-pspA4Pro. A plasmid encoding a fusion between PspA3 and the proline-rich region (containing the NonPro block) of PspA4 was constructed by amplification of the proline-rich region of PspA4 using primers 5′ TAGTCTAGACCAGCGCCAGCTCCTCAA 3′ (NonPro F) and 5′ TAGGGTACCTTATGGTTGTGGTGCTGAAGCT 3′ (NonPro R). The gene product was cloned into the pGEMT-easy vector (Promega), and the sequence was confirmed by DNA sequencing. The pGEMT-easy-NonPro construct was digested with XbaI and KpnI, and the resulting fragment was subcloned into the 3′ end of pspA3 in pTG-pspA3NS (13). The fragment encoding the PspA3-NonPro fusion (Fig. 1) was digested using XhoI and KpnI and cloned into pAE 6×His, generating pAE-pspA3-NonPro. PspA expression and purification. The pAE 6×His vectors containing the pspA constructs were used to transform BL21(DE3) SI E. coli competent cells (Invitrogen). Protein expression was induced in mid-log-phase cultures by the addition of 300 mM NaCl. The recombinant proteins, bearing an N-terminal histidine tag, were purified from the soluble fraction through affinity chromatography using Ni2+-charged resin (His Trap HP; GE HealthCare) in an Äkta Prime apparatus (GE HealthCare). Elution was carried out with 250 mM imidazole. The purified fractions were analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, dialyzed against 10 mM Tris-HCl (pH 8)-20 mM NaCl-0.1% glycine, and stored at −20°C. Pneumococcal strains. All of the strains used in this study were maintained as frozen (−80°C) stocks in Todd-Hewitt broth supplemented with 0.5% yeast extract (THY) in 20% glycerol. The serotypes, PspA clades, and sources of the strains are given in Table 1. Animal immunization and challenge. Animal experimental protocols were approved by the Ethics Committee of the Instituto Butantan (São Paulo, Brazil). Five- to 7-week-old female BALB/c mice from the Instituto Butantan (São Paulo, Brazil) were immunized intraperitoneally with 5 µg of PspA1, PspA2, PspA3, PspA4, or PspA5 adjuvanted with Al(OH)3 (50 µg Al3+). Animals were given three doses of protein at 14-day intervals. For the experiments with PspA4AB, PspA4Pro, and PspA4 or PspA3 and PspA3-NonPro, animals were immunized subcutaneously with three doses of protein (5 µg) adjuvanted with Al(OH)3 at 14-day intervals. The adjuvant alone was used as a control. Sera were collected from mice by retro-orbital bleeding 1 day before each dose and 14 days after the final dose. Mice were challenged 15 days after the last immunization.
Animals were anesthetized through the intraperitoneal route with 200 µl of a 0.2% xylazine-1.0% ketamine mixture and then challenged through the inoculation of 50 µl of a suspension of strain A66.1 (4 × 10^6 CFU/animal) or ATCC 6303. ELISA. The plates were then incubated with serial dilutions of individual sera in PBS-1% bovine serum albumin at 37°C for 1 h. The plates were washed again and incubated with horseradish peroxidase-conjugated goat anti-mouse immunoglobulin G (IgG, 1:15,000; Sigma) in PBS-1% bovine serum albumin at 37°C for 1 h. Following washes, antibodies were detected by adding OPD substrate (0.04% o-phenylenediamine in citrate-phosphate buffer [pH 5] containing 0.01% H2O2). After color development (10 min), the reaction was interrupted with H2SO4 and the A492 was determined. The reciprocal titer was considered the inverse of the last dilution of serum that registered an optical density above 0.1. Differences between groups were analyzed by Student's t test. Antibody binding and complement deposition assays. Antibody binding and complement deposition were analyzed as previously described (20). S. pneumoniae strains were grown in THY to a concentration of ~10^8 CFU/ml (optical density at 600 nm, 0.4 to 0.5) and harvested by centrifugation at 2,000 × g for 2 min. The pellets were washed with PBS, resuspended in the same buffer, and incubated in the presence of heat-inactivated pooled sera from the third bleed of immunized mice for 30 min at 37°C. For antibody binding assays, samples were then washed with PBS and incubated with fluorescein isothiocyanate (FITC)-conjugated goat anti-mouse IgG Fc (1:1,000; MP Biomedicals) on ice for 30 min. For complement deposition, bacteria were then washed with PBS, resuspended in gelatin Veronal buffer (Sigma), and incubated in the presence of freshly frozen normal mouse serum (from BALB/c mice as a source of complement) at 37°C for 30 min. After another wash with PBS, the samples were incubated with FITC-conjugated goat antiserum to mouse complement C3 (1:1,000; MP Biomedicals) on ice for 30 min. In both assays, bacteria were then washed twice with PBS, resuspended in 2% formaldehyde in PBS, and analyzed using a FACSCalibur (BD Biosciences). Ten thousand gated events were acquired and analyzed in fluorescence intensity histograms. RESULTS PspA4 and PspA5 elicit antibodies with a high degree of cross-reactivity. Sera collected from individual mice immunized with PspA1, PspA2, PspA3, PspA4, or PspA5 were analyzed by ELISA for reactivity against each recombinant PspA fragment. Data from sera collected after the second immunization are shown in Fig. 2. The pattern of cross-reactivity followed the classification of the different clades into the families, with the induction of significantly higher titers of anti-PspA antibodies against PspA proteins from the same family in all of the fragments tested compared to sera from animals inoculated with alum only. Reactivity with fragments from the other family was not always observed: anti-PspA1 (Fam1) did not elicit higher titers of antibodies reacting with PspA5 (Fam2), anti-PspA2 (Fam1) did not show reactivity with either PspA4 or PspA5 (Fam2), and anti-PspA3 (Fam2) did not react with either PspA1 or PspA2 (Fam1). Both anti-PspA4 and anti-PspA5 showed increased reactivity with all five PspA fragments examined compared to sera from animals injected with alum. PspA4 and PspA5 elicit antibodies that bind and mediate C3 deposition on strains expressing different PspA proteins.
We further tested the ability of the anti-PspA antibodies to bind to the surface of pneumococci bearing different PspA proteins, and again the pattern of increased reactivity between clades of the same family was generally observed. Anti-PspA1 and anti-PspA2 (Fam1) sera showed increased binding to St245/00 (clade 1, Fam1) and D39 (clade 2, Fam1), while binding to 3JYP2670 (clade 4, Fam2) and ATCC 6303 (clade 5, Fam2) was only slightly enhanced and no binding to M10 (clade 3, Fam2) at all was seen (Fig. 3A). Anti-PspA3 (Fam2) showed the narrowest cross-reactivity, being able to bind efficiently only to M10 (clade 3, Fam2). Restricted binding was seen for anti-PspA3 even to Fam2 strains 3JYP2670 (clade 4) and ATCC 6303 (clade 5), and no binding at all to Fam1 strains St245/00 (clade 1) and D39 (clade 2) was observed (Fig. 3B). Both anti-PspA4 and anti-PspA5 (Fam2) showed efficient binding to all of the strains tested (Fig. 3B). It is important to stress that the degree of binding varies with each individual pneumococcal isolate, so that we have only compared the median fluorescence values of the same strain with different sera. As for complement deposition, anti-PspA1 and anti-PspA2 (Fam1) sera were able to efficiently mediate the deposition of C3 only on Fam1-bearing strains (St245/00 and D39) (Fig. 3C). Anti-PspA3 (Fam2) showed efficient deposition only on M10 (clade 3, Fam2) (Fig. 3D), while anti-PspA4 and anti-PspA5 were capable of efficient mediation of C3 deposition on all Fam2 strains (Fig. 3D). As for Fam1 strains, anti-PspA4 was able to mediate C3 deposition on St245/00 (clade 1) and anti-PspA5 on D39 (clade 2) at levels similar to those observed for the homologous Fam1 antibodies (Fig. 3D). Again, only median fluorescence values of the same strain with the different sera were compared. PspA4 and PspA5 induce protection against challenges with Fam1- and Fam2-bearing strains. Immunized mice were then submitted to a lethal intranasal challenge with pneumococci expressing PspA from Fam1 or Fam2. As shown in Table 2, animals immunized with PspA2 (Fam1) showed significant protection against a challenge with A66.1 (clade 2, Fam1) (P = 0.008). Moreover, PspA4 and PspA5 (Fam2) were also able to induce protection against A66.1, even though the strain expresses a PspA protein from a different family (P = 0.001 for PspA4 and P = 0.03 for PspA5). A66.1 was first reported to express only PspA2 (1), but subsequent work has shown it to express both PspA1 and PspA2 (3). We have amplified a single pspA band from this strain using primers LSM12 and SKH2 (17), and sequencing of cloned fragments confirmed only pspA2 clones. The expression of only PspA2 by A66.1 would be in accordance with the data showing protection after immunization with PspA2, but not with PspA1. Challenge with ATCC 6303 (clade 5, Fam2) was also performed, and as shown in Table 3, only protection within the same family was seen for PspA4 (P = 0.005) and PspA5 (P = 0.0007). Contribution of the NonPro region to the reactivity to PspA. Of the five PspA fragments analyzed, PspA4 was the only one containing a NonPro region within the C-terminal proline-rich region. We have analyzed the contribution of both proline and NonPro regions to the cross-reactivity of PspA4.
While the original PspA4 fragment contained the complete proline-rich region, PspA4AB contains only the α-helical region without the entire proline-rich region, and PspA4Pro contains the N-terminal α-helical region plus the first block of prolines only, lacking both the NonPro region and the second block of prolines (Fig. 1). We have analyzed the binding of anti-PspA4AB, anti-PspA4Pro, and anti-PspA4 antibodies to intact M10 (clade 3, lacks NonPro) and ATCC 6303 (clade 5, has NonPro). The capacity of antibodies to PspA4AB and PspA4Pro to bind to M10 was enhanced compared to that of antibodies to PspA4. Since M10 does not contain the NonPro region, reduced binding could be explained by a considerable amount of the antibodies induced after immunization with PspA4 being directed toward the NonPro region. Binding of antibodies to PspA4Pro was also slightly better than that of antibodies to PspA4AB, showing that the proline block is probably also immunogenic. Binding to ATCC 6303 followed a similar pattern, with higher binding for antibodies to PspA4Pro, followed by PspA4AB, but differences were not so evident (Fig. 4A). We next tested whether the addition of NonPro to PspA3 (which induced antibodies with very restricted cross-reactivity) would increase the recognition of different PspA proteins. Anti-PspA3 and anti-PspA3-NonPro showed similar binding to M10 (lacks NonPro), and the inclusion of the NonPro region clearly enhanced the cross-reactivity to ATCC 6303 (has NonPro) (Fig. 4B).

FIG. 3. Binding of antibodies and complement deposition in the presence of the different anti-PspA sera. Sera from mice immunized with PspA from Fam1 (PspA1 or PspA2) (A and C) or from Fam2 (PspA3, PspA4, or PspA5) (B and D) were tested for the ability to bind (1% serum, A and B) and to mediate deposition of C3 (10% serum, C and D) on S. pneumoniae strains bearing PspA proteins from clades 1 to 5. Serum from mice immunized with alum was used as a control for each strain and is represented by the gray area in each graph. Results are shown as fluorescence intensity histograms, and the median fluorescence intensity is indicated for each sample. Results are representative of two experiments using sera from independent immunizations.

DISCUSSION Since PspA is a highly polymorphic antigen, the correct choice of fragments to be included in a vaccine formulation is crucial. We have tested the ability of sera raised against PspA fragments from clades 1 to 5 to bind to the surface of and to mediate complement deposition on pneumococci bearing different PspA proteins. Complement deposition assays have been proposed as a surrogate for prediction of protection (15). Previously, our group has also analyzed antibodies against PspA1, PspA3, and hybrid molecules, showing that binding and complement deposition mediated by anti-PspA1 and anti-PspA3 antibodies were restricted to the same family, whereas the hybrids were able to broaden this recognition to some extent (6). Subsequent work has shown that PspA4 and PspA5 were able to elicit antibodies with broad cross-reactivity, being able to recognize extracts from several strains expressing PspA proteins from clades 1 to 5 by Western blot analysis (7). We now show that anti-PspA4 and anti-PspA5 are also able to efficiently bind to the surface of pneumococci expressing PspA proteins from clades 1 to 5.
As for C3 deposition, both sera were able to efficiently mediate complement deposition on pneumococci from the same family (clades 3, 4, and 5), whereas for Fam1 strains, anti-PspA4 was able to mediate C3 deposition on a clade 1-bearing strain and anti-PspA5 on a clade 2-bearing strain at levels similar to those observed for the homologous Fam1 antibodies. PspA3 elicited antibodies with the narrowest cross-reactivity, which is in accordance with our previous Western blot analysis results. These results show that the previously proposed clade-dependent immunity in Fam2 (6) may not be characteristic of all of the representatives of this family and is probably restricted to clade 3. We have performed an intranasal lethal challenge of immunized mice, and the groups that received PspA4 or PspA5 showed statistically significantly higher survival after a challenge with A66.1, which expresses PspA2 (Fam1), and ATCC 6303, which expresses PspA5 (Fam2). These results thus show that both fragments would be able to confer broad protection against pneumococci expressing PspA proteins from the two major families. Moreover, these results are in accordance with the broad reactivity of antibodies to PspA4 and PspA5 with pneumococcal strains bearing different PspA proteins detected both in Western blot experiments and in functional assays of binding and complement deposition on intact bacteria.

FIG. 4. Binding of antibodies against the PspA4 and PspA3 constructs. Sera from mice immunized with PspA4AB, PspA4Pro, and PspA4 (A) or PspA3 and PspA3-NonPro (B) were tested for the ability to bind (1% serum) to S. pneumoniae strains bearing PspA proteins from clade 3 (M10, lacks NonPro) and clade 5 (ATCC 6303, has NonPro). Serum from mice immunized with alum was used as a control for each strain and is represented by the gray area in each graph. Results are shown as fluorescence intensity histograms, and the median fluorescence intensity is indicated for each sample. Results are representative of two experiments using sera from independent immunizations. FITC, fluorescein isothiocyanate.

Fusion proteins composed of PspA fragments from Fam1 and Fam2 have been proposed as an alternative for the induction of broad protection using recombinant protein (6) or recombinant attenuated Salmonella (26). In the latter work, oral immunization of mice with Salmonella strains expressing PspA fusion proteins was shown to induce serum antibodies that were able to bind and to mediate complement deposition on strains expressing PspA proteins from clades 1 to 5 and also to provide protection against challenges with multiple pneumococcal strains. Our results show the potential of immunization with a single PspA fragment to induce cross-protection. We have also analyzed the contribution of the NonPro region to the cross-reactivity of antibodies to PspA. The exclusion of the NonPro region of PspA4 led to the induction of antibodies with a higher capacity to bind to a strain that lacks this region, which indicates that a considerable amount of the antibodies induced by immunization with PspA4 might be directed to the NonPro region. The block of proline repeats also seemed to increase binding to the strains tested. Moreover, fusion of the entire proline-rich region containing NonPro to PspA3 (which was previously shown to induce antibodies with reduced cross-reactivity) was able to enhance the binding of antibodies to a strain that expresses PspA containing the NonPro region.
Though it was not possible to define precisely the contribution of each region to the cross-reactivity of the antibodies, these results indicate that the N-terminal α-helical region, the blocks of proline repeats, and also the NonPro region can influence the degree of cross-reactivity, depending on whether the strain tested expresses a PspA protein containing the NonPro region or not. It has been previously suggested that the proline-rich region may protect mice against a pneumococcal challenge (5). In conclusion, our results show that immunization with both PspA4 and PspA5 elicits antibodies with a functional capacity to recognize and to mediate complement deposition on a broad range of pneumococcal isolates. More importantly, PspA4 and PspA5 were able to induce protection against a challenge with one strain expressing PspA from Fam1 and one strain from Fam2 (the two major families), indicating that these antigens have the potential to be used as vaccine antigens to induce broad protection against different pneumococcal strains.
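As a side note on the ELISA readout described in Materials and Methods, the endpoint rule (the reciprocal titer is the inverse of the last serum dilution whose A492 exceeds 0.1) translates directly into code. The sketch below is only an illustration of that cutoff rule; the dilution series and OD values are hypothetical.

```python
def reciprocal_titer(dilutions, od_readings, cutoff=0.1):
    """
    Return the reciprocal endpoint titer: the inverse of the last (most dilute)
    serum dilution whose OD reading is above the cutoff. Dilutions are fractions
    (e.g., 1/100) given in decreasing concentration order.
    """
    titer = 0
    for dilution, od in zip(dilutions, od_readings):
        if od > cutoff:
            titer = round(1 / dilution)
    return titer

# Hypothetical two-fold dilution series starting at 1:100 (illustrative values only).
dilutions = [1 / (100 * 2 ** i) for i in range(8)]
od_readings = [1.8, 1.2, 0.7, 0.4, 0.2, 0.12, 0.08, 0.05]
print(reciprocal_titer(dilutions, od_readings))  # 3200
```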
Early blindness modulates haptic object recognition Haptic object recognition is usually an efficient process although slower and less accurate than its visual counterpart. The early loss of vision imposes a greater reliance on haptic perception for recognition compared to the sighted. Therefore, we may expect that congenitally blind persons could recognize objects through touch more quickly and accurately than late blind or sighted people. However, the literature provided mixed results. Furthermore, most of the studies on haptic object recognition focused on performance, devoting little attention to the exploration procedures that conducted to that performance. In this study, we used iCube, an instrumented cube recording its orientation in space as well as the location of the points of contact on its faces. Three groups of congenitally blind, late blind and age and gender-matched blindfolded sighted participants were asked to explore the cube faces where little pins were positioned in varying number. Participants were required to explore the cube twice, reporting whether the cube was the same or it differed in pins disposition. Results showed that recognition accuracy was not modulated by the level of visual ability. However, congenitally blind touched more cells simultaneously while exploring the faces and changed more the pattern of touched cells from one recording sample to the next than late blind and sighted. Furthermore, the number of simultaneously touched cells negatively correlated with exploration duration. These findings indicate that early blindness shapes haptic exploration of objects that can be held in hands. Introduction Humans can visually recognize objects in complex scenes in about one-tenth of a second (Potter, 1976;Thorpe et al., 1996). However, objects recognition is not a prerogative of vision. For instance, we can accurately identify real objects using only touch, although with a slower recognition time, in the order of seconds (Klatzky et al., 1985). The difference in recognition time between vision and touch is also due to intrinsic differences between the two sensory systems. Vision is usually characterized by holistic acquisition of information, whereas, touch often encodes information in a more sequential, and slower, fashion (Cattaneo and Vecchi, 2008). For instance, vision can decode simultaneously attributes of objects such as color and shape whereas touch may need different exploratory procedures, applied in sequence, to detect object properties such as texture and shape. We use indeed lateral motion to assess texture and contour-following to identify the shape (Lederman and Klatzky, 1987;Klatzky and Lederman, 1992). Visual and haptic object perception also differs for the weight they assign to different object properties (Lacey and Sathian, 2014). For instance, shape is more important than texture when visually categorizing, whereas shape and texture are approximately equally weighted in haptic categorization (Cooke et al., 2007). However, visual and haptic object perception also shares some properties. For example, when considering object categorization, both vision and haptics show categorical perception, i.e., discriminability increases markedly when objects belong to different categories and decrease when they belong to the same category (Gaißert et al., 2012). 
In addition, both sensory modalities seem to be viewpoint-specific, i.e., they best recognize an object when it is oriented in a specific way although vision prefers "front-view" and haptics prefer "backview" orientation (Newell et al., 2001). Scientific works support the idea that these similarities may also have a neurophysiological foundation. Indeed, the visual and tactile sensory systems share some analogies also at the neural level (Amedi et al., 2005). They are both characterized by a hierarchical organization of increasing complexity. For instance, the unspecific tactile input is firstly processed in areas 3b and 1 of the primary somatosensory cortex, then by area 2 which shows selectivity to attributes of objects such as curvature and, finally, by the anterior intraparietal sulcus (IPS), which shows preference to overall shape rather than primitive attributes such as curvature (Bodegård et al., 2001). Both visual and tactile sensory systems show a topographical organization, i.e., adjacent parts of the space are mapped in adjacent parts in retinotopic and somatotopic cortical maps. More importantly, vision and touch may activate similar brain areas when exploring objects, for example, the visual ventral and dorsal pathways are also involved during similar haptic tasks (Amedi et al., 2005;Lacey and Sathian, 2011). For instance, James et al. (2002) found that haptic object exploration activated the middle and lateral occipital areas active in the corresponding visual exploration task. These cortical areas may be part of a network of neural substrates responsible for a supramodal representation of spatial information (Cattaneo and Vecchi, 2008;Loomis et al., 2013;Ottink et al., 2021). The existence of such supramodal representation is also suggested by other findings. For instance, Giudice et al. (2011) showed similar biases and updating performance when learning visual or tactile maps. One might wonder what happens when the visual cortex does not receive visual input, as in blindness. It has been shown how the visual cortex can be functionally reprogramed in the blind to process tactile [see Sathian and Stilla (2010) for a review] or auditory stimuli (Kujala et al., 1995;Burton, 2003;Campus et al., 2019). As a consequence, the overall cortical representation of the tactile sense may be larger in the blind relative to sighted persons which may help explaining some superior tactile abilities, such as the higher tactile acuity, in the former population (Penfield and Boldrey, 1937;Goldreich and Kanics, 2003;Bliss et al., 2004;Wan et al., 2010;Norman and Bartholomew, 2011;Wong et al., 2011). However, haptic object recognition is a complex skill involving not only lowlevel tactile processing but also motor, memory, and spatial components. In particular, it has been suggested that visual mediation, that is, the translation of the tactile input into a visual image, may enhance haptic object recognition (Lederman et al., 1990). Therefore, according to the visual mediation hypothesis, we may hypothesize that object recognition based only on haptics may be superior in the late blind relative to congenitally blind or blindfolded sighted controls. Late blind individuals may indeed benefit of both extended haptic practice and the ability to translate the haptic information into a visual representation since they had seen earlier in life. 
Other researchers suggested that visual mediation may conduct to another advantage, that is the ability to represent spatial information in allocentric perspective. With allocentric representation we mean the ability to code spatial information based on an external perspective, independent from the observer, whereas, a representation is egocentric when it is based on the perspective of the observer (Taylor and Tversky, 1992). Allocentric representations are usually associated with higher spatial performance (Lawton, 1994;Meneghetti et al., 2011). It has been shown how blind individuals might prefer egocentric representations of spatial information while sighted persons tend to code the same information as allocentric, at least in the context of learning maps of environments (e.g., Noordzij et al., 2006). Toroj and Szubielska (2011) applied this framework to explain why their late blind participants, using an allocentric strategy when visualizing object shapes in their imagery, better identified such shapes than congenitally blind. The differentiation between egocentric and allocentric leads to the hypothesis that object recognition may depend also on the orientation of the objects relative to the participant. For instance, it has been shown how object recognition is impaired when the object is rotated with respect to the orientation of the learning phase which may be interpreted with the difficulty of moving from an egocentric to an allocentric perspective. This performance degradation is visible in the sighted regardless of the sense involved in recognition, that is vision or touch (Lacey et al., 2007). On the contrary, Occelli et al. (2016) showed that in the congenitally blind object recognition is view-independent, that is accuracy is not affected by the rotation of the learned object. Another result of this study is that overall no difference in performance between blind and sighted was observed. Szubielska and Zabielska-Mendyk (2018) also found similar ability in mentally rotating tactile figures in congenitally blind and sighted individuals. Another line of research used two dimensional depictions of 3D shapes presented on raised line drawings. Using this kind of material, Heller (1989) found better recognition performance in late blind compared to sighted or congenitally blind persons. These latter two groups showed similar performance. On the contrary, Lederman et al. (1990) found that congenitally blind did worse than sighted in haptic recognition of 3D shapes and Gori et al. (2010) showed that congenitally blind children had higher orientation discrimination threshold compared to age matched controls. Collectively, these findings have been interpreted in terms of the necessity to visually translate the haptic information. In this perspective, the better performance in late blind may be the result of two factors: (1) their welltrained tactile skills; (2) their possibility to visually translate haptic information thanks to the fact they had seen earlier in life. This latter hypothesis is also well in line with a previous finding showing how the lack of visual experience in the early years of life can disrupt spatial processing in other sensory modalities (i.e., audition) suggesting the idea the visual system calibrates auditory spatial maps (Gori et al., 2014). However, the limited performance in early blind may not be present when manipulating real tridimensional objects. 
An early attempt to investigate this behavior in sighted and congenitally blind children has been performed by Morrongiello et al. (1994). The authors failed to find any difference in performance between the two populations. However, more recently, Norman and Bartholomew (2011) found even superior recognition accuracy of 3D shapes, not resembling daily-life objects, in early and late blind, but not in congenitally blind compared to sighted. Certainly, the contradiction between the studies may be due to the different tasks used and to possible differences in the tested populations. In addition, to the best of our knowledge, most studies on this topic devoted little attention to the haptic patterns of exploration. For instance, studies using raised-lines drawings or textured pictures mainly focused on the final outcome in performance, that is recognition accuracy and time without investigating the haptic behavior conducting to that performance (e.g., Heller, 2002;Picard and Lebaz, 2012;Vinter et al., 2020). In Morrongiello et al. (1994), the authors also analyzed some basic haptic strategies of children exploring 3D objects. For instance, they measured the number of unique parts composing the object that was touched in a trial or the number of repetitions of exploration of those unique parts by examining video recordings. However, using this method, finer exploration features such as the number of touches of unique parts, their temporal frequency or the way subjects manipulated and rotated the objects could not be examined. Such haptic patterns may provide interesting complementary information as Leo et al. (2022) showed that different outcomes in performance in a haptic task may be associated with different haptic exploration strategies. Similarly, accuracy in haptic spatial tasks has been shown to depend on the level of development: children under 9 years of age showed indeed less effective haptic exploration than adults (Sciutti and Sandini, 2020). Furthermore, investigating such more detailed haptic exploration strategies may be necessary for identifying differences between groups of persons differing in spatial and visual ability. Therefore, in our study, we aimed at investigating: (1) how the performance in a haptic object recognition task is influenced by the level of visual ability; (2) how the level of visual ability shapes haptic exploration patterns. To do so, early blind, late blind, and sighted participants performed a haptic recognition task using an instrumented cube that measures the touches on its faces as well as its rotation, that is, the iCube . As in , we attached small pins on cube faces in varying number and asked participants to explore the cube twice, with the task of understanding whether any change occurred in the pins distribution between the first and the second presentation. This design is similar to a "study-test" paradigm to assess memory and recall (Pensky et al., 2008). Our study has a data-driven exploratory nature and several dependent variables recorded by iCube have never been collected in visually impaired subjects. 
However, we could at least expect that: (1) recognition accuracy may be similar across groups since the simple cube-like shape should not favor participants able to take advantage of a visual-mediation strategy; (2) both congenitally and late blind participants might be faster in doing the haptic task since they have larger haptic experience; (3) if it is true that blind persons and, particularly, congenitally blind prefer an egocentric representation of spatial information they might tend to rotate less the cube while exploring to facilitate the association of each cube face to its relative orientation. Materials and methods Participants A group of congenitally blind (CB, n = 7, four females), a group of late blind (LB, n = 10, five females) and a sighted control group, age and gender matched with the visually impaired groups (SI, n = 16, nine females), took part in the study (see Table 1). One congenitally blind was excluded due to a technical issue with data collection. Following the World Health Organization (WHO) guidelines, we defined blindness as vision in a person's best eye with correction of less than 20/500 or a visual field of less than 10 • . All LB lose sight after 6 years of age. CB age ranged from 23 to 49 years (mean age = 35; SD = 9.5). LB age ranged from 30 to 61 years (mean age = 43.9; SD = 12). SI age ranged from 22 to 64 years (mean age = 40.7; SD = 12.1). The iCube The iCube (v3) is an instrumented cube designed at IIT which measures its orientation in space as well as the location of contacts on its faces. This information is conveyed wirelessly to a laptop. iCube is of about 5 cm side, it has 16 cells per face and a weight of about 150 g (see Figure 1). Touch sensing is based on a 4 × 4 array of Capacitive Button Controllers (CY8CMBR2016) developed by Cypress Semiconductor Corporation. These are based on Multi Touch technology, allowing detection of simultaneous touches and support up to 16 capacitive cells (6 mm × 6 mm × 0.6 mm), which could be organized in any geometrical format, e.g., in matrix form. Each face of iCube is made with one of these boards. Their sensitivity, i.e., the smallest increase in capacitance that could be detected clearly as a signal, is set to 0.3 pF to allow the device to sense contacts without the need to apply pressure. Spatial orientation of the cube is estimated by a Motion Processing Unit TM (MPU), a nine axes integrated device, combining a three axes MEMS gyro, a three axes MEMS accelerometer, a three axes MEMS magnetometer and Digital Motion Processor TM (DMP). The MPU combines information about acceleration, rotation and gravitational field in a single flow of data. Data from iCube are sent to a laptop through a serial protocol. The transmission is performed through a radio module NRF24L01 (Nordic Semiconductor, Trondheim, Norway). The firmware of the device is designed to maximize the speed of capture of information from the boards measuring touches. The acquisition is always as fast as possible: faster when least faces are touched simultaneously and slower when it needs to encode information from multiple faces. As a result, the average sampling rate of the device was about 5 Hz (i.e., one sample every 203 ± 113 ms, SD). As in Leo et al. (2022), data were subsequently interpolated to analyze the temporal evolution of exploration at a constant temporal rate. 
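Because the raw stream arrives at a variable rate (about one sample every 203 ± 113 ms), resampling onto a constant 0.2 s grid can be done with a simple zero-order hold. The sketch below is illustrative of this step only; the data layout and function names are assumptions, not the authors' code.

```python
import numpy as np

def resample_tactile_maps(timestamps, maps, step=0.2):
    """
    Resample irregularly sampled 16-cell tactile maps onto a constant time grid.
    `timestamps` are in seconds, `maps` is an (n_samples, 16) binary array.
    Each output sample holds the most recent raw sample (zero-order hold).
    """
    timestamps = np.asarray(timestamps, dtype=float)
    maps = np.asarray(maps)
    grid = np.arange(timestamps[0], timestamps[-1] + 1e-9, step)
    # Index of the latest raw sample at or before each grid time.
    idx = np.searchsorted(timestamps, grid, side="right") - 1
    return grid, maps[idx]

# Tiny illustrative stream: three raw samples at uneven times.
t = [0.00, 0.31, 0.55]
m = [[0] * 16, [1] + [0] * 15, [1, 1] + [0] * 14]
grid, resampled = resample_tactile_maps(t, m)
print(grid)              # three grid times: 0.0, 0.2, 0.4
print(resampled[:, :2])  # first two cells of each resampled map
```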
Data generated in this study were further analyzed in Python (Python Software Foundation) to extract the pattern of touches, the amount of iCube rotation and the speed of rotation (see Section "Data Analysis"). Procedure The experimenter positioned a set of raised plastic pins (diameter: 0.3 cm, height: 0.2 cm) on the iCube faces. Each face contained from 0 to 5 pins, with no restriction on the presence of two or more identical faces. The participant was seated in front of a table, where the iCube was positioned on a support. Whenever a sighted participant was tested, a cardboard panel was placed on the table between him/her and the cube to avoid any visual inspection of the device. To this end, a black curtain was also fixed to the lower part of the panel on the side of the participant. The panel nonetheless allowed comfortable movements of participants' upper limbs (see Figure 1). Before the experiment, participants performed a familiarization phase. In this phase, they first explored the cube without pins for a few seconds to get acquainted with it. After that, they did two practice trials in which they familiarized themselves with the experimental task, i.e., they were asked to explore the cube twice, trying to understand whether any change occurred in the pins allocation between the first (memorization) and the second exploration (recall). In particular, they were asked to report whether the cube in the second exploration was the "same" or "different" compared to the cube in the first exploration. When participants had shown that they understood the task, the real experiment began. They did three trials in sequence, for a total of six cube explorations per participant. Between the memorization and recall phases, the cube could remain the same, but rotated on the support, or could be changed (e.g., by removing or adding one pin to one of the faces; see Figure 1 for an example). The experimenter rapidly operated these changes, with an interval between explorations lasting on average less than a minute. We opted for two "different" and one "same" trial to minimize participants' fatigue, as the latter trial type has been shown to be more difficult in previous studies (Norman et al., 2004). The experiment lasted about 30 min on average, including explanations and cube preparation. Data analysis Data about touches and rotations recorded by iCube were processed in Python following the methods used in Leo et al. (2022) and briefly described below. Touches The cube reported for each timestamp a tactile map, i.e., a list of 16 elements of zeros and ones, where one represents a touched cell. These tactile maps were independently interpolated at a constant rate of 0.2 s, i.e., a value close to the average sample rate of the device. We then spatiotemporally filtered the tactile maps to separate the explorative touches, i.e., touches directly related to the exploration of a face to detect and count its pins, from the holding touches, i.e., touches that only reflect the holding or support of the device. This filter was based on the simple matching coefficient (SMC = number of matching attributes / number of attributes = (M00 + M11)/(M00 + M01 + M10 + M11)), which is a measure of similarity between sample sets with scores between 0 and 1, where 1 indicates perfect similarity and 0 indicates perfect diversity.
M 11 is the total number of cells where sample 1 and sample 2 both have a value of 1 (active); M 01 is the total number of cells where the status of sample 1 is 0 (inactive) and the status of sample 2 is 1 (active); M 10 is the total number of cells where the status of sample 1 is 1 (active) and the status of sample 2 is 0 (inactive); M 00 is the total number of cells where sample 1 and sample 2 both have a value of 0 (inactive). Then, as in Leo et al. (2022) we assumed that explorative touches were characterized by higher variability in space and time than holding touches. Holding touches, by definition, are indeed stable in time to allow a secure grasping and movement of objects. For instance, the lateral motion exploratory procedure often associated with active exploration of a surface's tactile features such as texture is characterized by highly dynamic movement of the hand in contact with the object. This kind of movement would translate for our sensors in a rapid change of status of cells activation in a face, resulting in lower SMC for consecutive temporal samples. Therefore, at each time interval we only considered explorative touches those measured on the face with the lowest SMC computed concerning the previous sample. If more than one face shared the lowest SMC, we considered the touches of all those faces, unless the SMC was 1 for all faces which would likely indicate the cube lying untouched on the table. We then computed the mean SMC of the explored faces for each trial. We used this variable as an indirect measure of velocity in exploring a face since, for instance, a very low SMC between two consecutive samples (0.2 s duration each) means that the participant touched very different cells between the two samples. We also computed: (1) the exploration duration of each trial as the time between the first and last touch of the participant (via manual cutting for each file the initial and final phases of recording, when less than two cells were active); (2) the mean exploration duration for each face; (3) the variability (i.e., standard deviation) of the mean exploration duration for each face; (4) the touch frequency, i.e., the number of touches per time unit (s); (5) the mean number of active cells per sample in the explored faces (after removing samples with no active cells). Rotations The information about the orientation of iCube in time was provided in the form of quaternions. Quaternions were interpolated at a constant sample rate of 0.2 s via spherical linear interpolation (SLERP). Then, we computed the instantaneous angular variation by measuring the angle traversed over time by each of the three unitary axes orthogonal to the faces of iCube. In particular, given one axis: We integrated over time the rotations performed by the three axes to estimate the rotation impressed to iCube in all the possible directions. To quantify the amount of rotation, we considered the maximum value among cumulative sums of the rotations executed by the three axes. The instantaneous rotation speed was instead computed by dividing angle axis (t) for its time interval (i.e., 0.2 s) and averaging the results across the three axes and all the instants in a trial in which iCube was in motion (i.e., angular velocity > 1 • /s). As in , this selection was made to assess the actual velocity of rotation when the rotations were executed, without spuriously reducing the estimate with the analysis of the static phases. 
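The explorative-touch filter described in the Touches subsection can be illustrated with a short sketch: the SMC is computed for each face between consecutive 16-cell tactile maps, and the face(s) with the lowest SMC (the most dynamic contact pattern) are taken as the explored ones. The data layout and names below are illustrative, not the authors' code.

```python
def smc(sample1, sample2):
    """Simple matching coefficient between two binary 16-cell tactile maps."""
    matches = sum(a == b for a, b in zip(sample1, sample2))
    return matches / len(sample1)

def explored_faces(prev_maps, curr_maps):
    """
    Return the indices of the faces considered 'explored' at the current sample:
    the face(s) with the lowest SMC relative to the previous sample. Holding
    touches stay stable over time, so they yield SMC values close to 1.
    """
    scores = [smc(p, c) for p, c in zip(prev_maps, curr_maps)]
    lowest = min(scores)
    if lowest == 1.0:          # nothing changed on any face (e.g., cube at rest)
        return []
    return [i for i, s in enumerate(scores) if s == lowest]

# Two consecutive samples for a 6-face cube (16 cells per face), mostly static.
prev = [[0] * 16 for _ in range(6)]
curr = [[0] * 16 for _ in range(6)]
curr[2][:4] = [1, 1, 0, 1]     # face 2 shows a changing contact pattern
print(explored_faces(prev, curr))  # [2]
```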
In addition, we determined for each timepoint the absolute and relative orientation of each face of iCube. With absolute orientation we mean the cardinal direction of the normal of a face (with labels such as "North, " "East, " etc.). With relative orientation of a face we mean its orientation in the participant's perspective (with labels such as "up, " "rear, " etc.). See Leo et al. (2022) for more details about these estimations. Transition matrices We computed the transition matrices for all the trials of the experiment, i.e., six by six matrices in which each cell corresponds to the percentage of cases in which the transition has occurred between the face individuated by the row number and the face corresponding to the column number (for instance, from "front" to "left"). Each trial is indeed characterized by a temporal sequence of explored faces (e.g., left, up, front, left, etc.). The transition matrix is computed by counting and summing the number of transitions (e.g., from "left" to "up") and converting these numbers into percentage of occurrences. In particular, we computed a transition matrix for each trial in each participant (i.e., three matrices for the "memorization" trial type and three matrices for the "recall" trial type). Then, for each transition matrix we computed two different scores (Leo et al., 2022): (1) the maximum diagonal score; (2) the mean number of different transitions. The maximum diagonal score is the highest value in the diagonal cells. These cells reflect the tendency to select specific relative orientations as objects of spatial attention (e.g., a high proportion in the "from right to right" cell indicates that participant preferentially explored the rightward face and rotated the cube to position the face they wanted to explore toward their right). The number of different transitions is a measure of exploration variability (e.g., low numbers indicate participants selected less orientations to explore, i.e., less variability). For instance, a participant with a high maximum diagonal score and a low number of different transitions would be characterized by a very focused and systematic exploration reflecting high spatial ability (Leo et al., 2022). Finally, we measured the number of returns to already explored faces. For this measure, we did not consider the sequence of explored orientations but the sequence of explored faces in terms of their label (from 1 to 6). This measure may be relevant because a previous study showed that participants with lower spatial skill showed also an higher number of returns (Leo et al., 2022). Statistical analyses Statistical analyses were performed using R. To sum up, we analyzed the following dependent variables: (1) recognition accuracy; (2) exploration duration (in s); (3) number of touches; (4) touch frequency (touches/s); (5) amount of rotation ( • ); (6) rotation velocity ( • /s); (7) maximum diagonal score; (8) number of different transitions; (9) exploration duration per face; (10) variability of exploration duration per face; (11) number of returns; (12) mean number of active cells per sample; (13) mean SMC. The independent variables were the Group (earlyblind, late-blind, sighted) and Trial Type (memorization vs. recall). Since we did not have specific hypotheses regarding the interaction between Group and Trial Type and since the comparison between memorization and recall in the same task has been already investigated in we only focused on group differences. 
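As a sketch of how the transition-matrix scores described above can be obtained, the snippet below builds the 6 × 6 matrix from the temporal sequence of explored orientations of one trial and extracts the maximum diagonal score and the number of different transitions. The orientation labels and the function name are illustrative assumptions, not the study's actual code.

import numpy as np

ORIENTATIONS = ["up", "down", "front", "rear", "left", "right"]  # assumed labels

def transition_scores(sequence):
    """Transition matrix (in % of occurrences) for one trial's sequence of
    explored orientations, plus the two summary scores used in the analysis."""
    idx = {o: i for i, o in enumerate(ORIENTATIONS)}
    counts = np.zeros((6, 6))
    for prev, curr in zip(sequence[:-1], sequence[1:]):
        counts[idx[prev], idx[curr]] += 1
    perc = 100 * counts / counts.sum()
    max_diagonal = perc.diagonal().max()        # preference for one relative orientation
    n_transitions = np.count_nonzero(counts)    # exploration variability (per trial)
    return perc, max_diagonal, n_transitions

# Example: a focused explorer who keeps returning to the rightward face.
_, diag, n_diff = transition_scores(
    ["right", "right", "up", "right", "right", "front", "right"])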
Given the high number of dependent variables we ran an explorative MANOVA including all the normally distributed dependent variables (all but recognition accuracy) with Group as between factor. For recognition accuracy, after a Box-Cox transformation using the MASS R package (Venables and Ripley, 2002), we estimated a Bayes factor to compare the fit of the data under the null hypothesis and the alternative hypothesis using the BayesFactor R package (Morey and Rouder, 2011). Data normality was assessed with Shapiro-Wilk tests. After the MANOVA we also performed a Linear Discriminant Analysis (LDA) as a follow-up with the goal of defining which linear combination of dependent variables led to maximal group separability. We then conducted univariate ANOVAs on the dependent variables that showed higher coefficients in the LDA, followed by t-tests as post hoc comparisons. We corrected for multiple comparisons using the Benjamini/Hochberg FDR correction (Benjamini and Hochberg, 1995a,b). We set statistical significance at p < 0.05. Results As for the iCube recognition, the mean accuracy was 72% for the CB, 77% for the LB and 69% for the SI. The estimated Bayes factor suggested that the data were 3.7 times more likely to occur under a model without an effect of group than under a model with it. The follow-up LDA identified two linear discriminants which accounted for a percentage of separation between groups of 74.8 and 25.2%, respectively. The haptic variables which discriminated the groups most strongly were the mean SMC, the mean active cells per sample and the maximum diagonal score. Table 2 shows the normalized coefficients of the linear discriminants (bold indicates haptic variables whose linear combination discriminated more strongly between groups, absolute value > 1). Figure 2 shows participants' distribution along the two discriminants. It is evident how the three groups concentrate in different areas defined by the two discriminants. Both CB and LB participants tend to have higher scores than SI in LD1. As for LD2, while SI showed intermediate levels, LB and CB showed higher and lower scores, respectively. Finally, CB tend to form a quite separate cluster whereas the LB and SI clusters show higher superposition. In order to statistically substantiate these differences, we ran a one-way ANOVA for each of the three haptic variables that contributed most to discriminating the groups, i.e., max diagonal score, mean SMC and mean active cells per sample. As for the maximum diagonal score, the groups did not differ [CB = 3.41, LB = 3.27, SI = 2.24; F (2,29) = 0.87, p = 0.43]. As for the mean active cells per sample in the explored face, the groups tended to differ [CB = 5.24, LB = 4.32, SI = 4.14; F (2,29) = 3.75, p unc = 0.035, p fdr = 0.07]. Post hoc tests showed that the number of active sensors was higher in the CB than in the SI [t (44.8) = -4.96, p fdr < 0.001; see Figure 3A] and in the LB [t (55.5) = 3.91, p fdr = 0.00038; see Figure 3A]. The comparison between SI and LB was not significant (p = 0.22). As for the mean SMC, this score tended to differ in the three groups [CB = 0.77, LB = 0.81, SI = 0.80; F (2,29) = 3.38, p unc = 0.047, p fdr = 0.07; see Figure 3B] since it was lower in the CB than in the SI [t (59.6) = 4.14, p fdr = 0.00017] and in the LB [t (68.6) = -4.31, p fdr = 0.00016]. No difference was observed between LB and SI (p = 0.58). 
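For reference, the group-comparison pipeline described under "Statistical analyses" can be sketched in a few lines. The study used R; the snippet below is a hedged Python equivalent based on statsmodels' MANOVA and scikit-learn's LDA, run on synthetic placeholder data with arbitrary group sizes, and is only meant to illustrate the MANOVA-then-LDA chain, not to reproduce the reported statistics.

import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
df = pd.DataFrame({                          # synthetic placeholder data, one row per participant
    "group": ["CB"] * 10 + ["LB"] * 10 + ["SI"] * 10,
    "mean_smc": rng.normal(0.79, 0.02, 30),
    "active_cells": rng.normal(4.5, 0.6, 30),
    "max_diagonal": rng.normal(3.0, 1.0, 30),
})
dvs = ["mean_smc", "active_cells", "max_diagonal"]   # subset of the dependent variables

manova = MANOVA.from_formula(" + ".join(dvs) + " ~ group", data=df)
print(manova.mv_test())                      # omnibus multivariate test across groups

lda = LinearDiscriminantAnalysis(n_components=2).fit(df[dvs], df["group"])
print(lda.explained_variance_ratio_)         # separation carried by each discriminant
print(lda.scalings_)                         # coefficients of the linear discriminants
projected = lda.transform(df[dvs])           # participant coordinates, as in Figure 2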
A lower SMC and a higher mean number of active cells per sample in the explored faces are potential indexes of faster exploration, because the former indicates that the participant considerably changed the touched cells from one sample to the next and the latter shows that more cells were considered simultaneously. Therefore, we further hypothesized that the SMC score and the number of active cells per sample would correlate positively and negatively, respectively, with exploration duration. To verify these hypotheses, we computed Pearson's correlation coefficients (r). Results showed that the SMC did not correlate with exploration duration (r = 0.21, p = 0.127, one-tailed), whereas the number of active cells per sample did (r = -0.38, p fdr = 0.03, one-tailed; see Figure 4).
Figure 2. Scatterplot of participants' distribution in the two LDA dimensions. The diagram depicts congenitally blind (CB) as red circles, late blind (LB) as green circles and sighted controls (SI) as blue circles. The labels above each circle specify participants' codes. Ellipses indicate the three identified clusters. Note how the three groups tend to concentrate in different areas of the 2D space as defined by the two discriminants.
Discussion Our study had two different aims: first, investigating whether the level of visual ability modulates haptic object recognition; second, highlighting possible differences in the exploration strategies of congenitally blind, late blind, and sighted individuals using a sensorized cube. To do so, we asked a group of congenitally blind, a group of late blind and a group of sighted persons (who could not see the device) to explore an iCube with pins attached to its faces twice. In the second exploration, the iCube could have the same pin disposition, although presented in a different orientation, or a small change in pin disposition, e.g., one pin less or more on one of the faces. Participants had to report whether the two presented cubes had the same pin disposition or differed. The main advantage of using the iCube compared to common daily-life objects lies in the fact that it allows free and unconstrained manipulation while keeping the possibility of accurately measuring how it is touched and its orientation in space, without the need to use video recordings. Our results showed that the level of visual ability does not influence the accuracy in recognizing the cube. This finding is in line with Morrongiello et al. (1994), who, in addition, also failed to observe differences between blind and sighted children in terms of exploration behavior. However, in our case, we showed evidence of different haptic strategies between the congenitally blind and the other groups. Indeed, congenitally blind participants tend to touch more cells simultaneously in each recording sample when exploring a face than late blind and sighted persons, suggesting that they have learned to cover a larger tactile space with a single touch. They also tend to change the touched cells more quickly than the other groups. This is an important result because it suggests that congenitally blind persons may have a peculiar way of exploring the environment through touch, which differentiates them even from late blind persons characterized by many years of complete blindness, as in our sample of participants. Furthermore, we observed that the number of simultaneously touched cells negatively correlated with exploration duration. If one can cover a larger tactile space with a single touch, then the time needed to fully explore an object decreases. 
It should be noted that a previous study showed evidence of an impairment in haptic recognition of faces in the congenitally blind and not in late blind suggesting that early visual experience is necessary to process face features (Wallraven and Dopjans, 2013). However, there is also evidence that faces may be special kind of "objects" processed by dedicated brain areas in the human visual system, such as the fusiform gyrus (Puce et al., 1995;Yue et al., 2006). Therefore, findings on faces recognition in the blind may not be easily translated to different types of objects. Our third hypothesis, i.e., blind participants would rotate less the cube was not supported by results. However, this may simply be due to the reduced power of our analysis since congenitally blind and late blind tended to rotate less the device (560 • and 517 • , respectively) than sighted (710 • ). Importantly, our findings do not seem to be due to differences in spatial memory in the groups of participants. There is evidence that congenitally blind subjects may have difficulties in specific spatial memory tasks, particularly when they have to memorize and recall two separate haptic spatial configurations (Vecchi et al., 2004;Leo et al., 2018Leo et al., , 2020 or sequences of semantic sounds. However, in our study the congenitally blind showed a similar recalling accuracy than the other groups. Our task did not impose indeed a heavy burden on spatial memory since participants were required to keep in memory only five items (the number of pins in five faces) and their relative location. On the contrary, in Leo et al. (2018) participants had to memorize an average of 2.5 targets randomly located in a 3 × 3 grid and they had to do so for two different grids presented in sequence. This task is much more complex because there are many ways to place 2.5 targets in Correlation between exploration duration and mean number of active cells per sample. *p fdr < 0.05. a nine-elements grid and participants had to keep in memory two of these grids. The conflict between our and Morrongiello et al.'s (1994) findings who did not observe haptic differences in object recognition between blind and sighted participants may be due to several reasons: (a) Morrongiello and coauthors tested only children. It is possible that the differences we found in haptic patterns would emerge only later in life, as a consequence of the more extended haptic training [but see Withagen et al. (2012) for a similar result with adults]; (b) they used common daily-life objects, whereas we used two cubes eventually differing between each other only for relative pin disposition on the surface of their faces; (c) they studied haptic behavior through evaluation of video recordings, that is with a methodology and a selection of dependent variables which may be not sensitive enough to detect subtle differences in exploration procedures. On the other hand, there is also evidence in the literature regarding differences in exploratory procedures between blind and sighted children, although in studies using different materials and methods. For instance, Vinter et al. (2012) asked blind, low vision, and blindfolded sighted children to haptically explore raised-line drawings whose comprehension was subsequently evaluated through drawings of the remembered shapes. Briefly here, results showed how blind children used more types of exploratory procedures, as defined in Davidson (1972), Klatzky (1987, 1993), and Wijntjes et al. (2008), than their sighted peers. 
The use of certain kinds of procedures (e.g., contour following) also correlated with drawing performance. However, this study referred to the classical exploratory procedures originated by the seminal work of Lederman and Klatzky (1987) which cannot easily be translated to the case of solid objects such as our cube. While the fact that congenitally blind participants used different haptic strategies may be simply due to their higher training in using only the haptic modality, it is also possible that these differences could be partly due to divergent spatial strategies between congenitally blind, sighted and late blind persons. Previous studies suggested indeed that sighted individuals might prefer using an allocentric frame of reference (Noordzij et al., 2006;Pasqualotto et al., 2013) which, although accurate, may need more time to be built (Toroj and Szubielska, 2011). Even though we did not explicitly investigate this issue, two congenitally blind participants spontaneously reported they counted the number of pins of the cube faces to help memorizing pins configuration which suggests they were not using an allocentric strategy. This observation is also well in line with a previous finding showing that early blind subjects encoded 2D pattern elements by their location in a fixed coordinate system without visual representation (Vanlierde and Wanet-Defalque, 2004). Future studies might want to investigate in detail such cognitive aspects of haptic exploration using the iCube. With our current data, it is difficult to conclude whether the difference between congenitally and late blind is due to the fact the former group has never experienced the visual world and, therefore, it has exploited the brain plasticity that strongly characterizes the early years of life (e.g., Kupers and Ptito, 2014) resulting in a stronger haptic ability (Theurel et al., 2013) or to the fact that haptic skills are simply more trained in the congenitally blind since they lived more "years of blindness." Our congenitally blind group has experienced a mean of 35.5 years of blindness, whereas, this mean in the late blind group was 21.6 years. Future studies will be needed to compare exploration behavior of congenitally and late blind individuals having a similar amount of years of blindness (although, in this case, differing for age). On the other hand, we speculate that, since our late blind participants were probably fully blind for long enough to match the haptic expertise of the congenitally blind, the main difference between the two groups may lie in the extended haptic practice in the congenitally blind in their early years of life (Theurel et al., 2013;Amadeo et al., 2019). One limitation of our study lies in the small sample size, particularly the congenitally blind group. This may have limited the possibility to spot other haptic differences between this group and late blind and sighted groups. However, specific differences between groups, that is, the mean number of active cells per sample and the variability in active cells across recording samples, were evidently large enough to be already detected with groups of such size. A second limitation lies in that information about Braille-reading ability in our blind participants was not available. There is evidence that experience in reading Braille is correlated with superior tactile acuity in passive tasks (Wong et al., 2011) and in tasks using Braille-like stimuli (e.g., Foulke and Warm, 1967;Grant et al., 2000). 
However, our task involved the active manipulation of a 3D object and the pins attached on its faces have different dimension (diameter: 3 mm; height: 2 mm) than Braille dots (diameter: 1.44 mm; height: 5 mm). More importantly, the spacing between pins in our configuration is in the order of centimeters whereas it is about 2.5 mm in the Braille. Therefore, our task did not involve any measure of tactile acuity at its limit of performance, as Wong et al. (2011) did. A third limitation is represented by the fact we used a cube-shaped object which imposes limits in the exploration behavior of participants and makes potentially difficult generalizing our results to objects with more complex shapes. Finally, subjects performed a small number of trials since we wanted to minimize the effort of participants. Therefore, we could not investigate in detail the temporal evolution of performance as well as possible changes in exploration strategies. In conclusion, our study showed that congenitally, late blind and sighted participants did not differ in the haptic recognition accuracy of a three-dimensional object. However, we identified two exploratory strategies that differentiated congenitally blind from late blind and sighted individuals. The former group touched more cells simultaneously when exploring a face, suggesting that they could acquire more tactile information "at first glance." Furthermore, congenitally blind showed higher haptic velocity, that is, they changed more the pattern of touched cells from one recording sample to the next. Finally, we also found that the number of simultaneously touched cells negatively correlated with exploration duration suggesting that the ability to cover a larger tactile space while touching an object allows a more effective and faster exploration. Future studies might want to verify whether we could use the sensorized cube to measure the haptic and spatial skills of different populations such as in the elderly. There is indeed evidence that cognitive decline may impair haptic object recognition (Kalisch et al., 2012) but the modulation of the exploratory procedures by age has not been investigated in detail yet. Data availability statement The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://doi.org/10. 5281/zenodo.6539275. Ethics statement The studies involving human participants were reviewed and approved by Comitato Etico, ASL 3, Genova; Prot. IIT_UVIP_COMP_2019 N. 02/2020, 4 July 2020. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. Author contributions FL performed testing and data analysis. FL wrote this manuscript with contributions from MG and AS. All authors developed the study concept, contributed to the study design, and approved the final version of the manuscript for submission. Funding This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (grant agreement nos. 948349, MYSpace and 804388, wHiSPER). 
the iCube, Marco Jacono for the development of the first version of the software, Marcello Goccia for the current version of the software running the iCube, and Alessia Tonelli and Alice Bollini for helping in organizing the tests with visually impaired persons. Special thanks to all participants.
2022-09-09T14:07:26.986Z
2022-09-08T00:00:00.000
{ "year": 2022, "sha1": "8c8c5aea2c3361f95ef6844fbf39b8c725308757", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "8c8c5aea2c3361f95ef6844fbf39b8c725308757", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
258987501
pes2o/s2orc
v3-fos-license
Recasting Self-Attention with Holographic Reduced Representations In recent years, self-attention has become the dominant paradigm for sequence modeling in a variety of domains. However, in domains with very long sequence lengths the $\mathcal{O}(T^2)$ memory and $\mathcal{O}(T^2 H)$ compute costs can make using transformers infeasible. Motivated by problems in malware detection, where sequence lengths of $T \geq 100,000$ are a roadblock to deep learning, we re-cast self-attention using the neuro-symbolic approach of Holographic Reduced Representations (HRR). In doing so we perform the same high-level strategy of the standard self-attention: a set of queries matching against a set of keys, and returning a weighted response of the values for each key. Implemented as a ``Hrrformer'' we obtain several benefits including $\mathcal{O}(T H \log H)$ time complexity, $\mathcal{O}(T H)$ space complexity, and convergence in $10\times$ fewer epochs. Nevertheless, the Hrrformer achieves near state-of-the-art accuracy on LRA benchmarks and we are able to learn with just a single layer. Combined, these benefits make our Hrrformer the first viable Transformer for such long malware classification sequences and up to $280\times$ faster to train on the Long Range Arena benchmark. Code is available at \url{https://github.com/NeuromorphicComputationResearchProgram/Hrrformer} Introduction Self-attention has risen to prominence due to the development of transformers (Vaswani et al., 2017) and their recent successes in machine translation, large language modeling, and computer vision applications. The fundamental con- struction of self-attention includes a triplet of "queries, keys, and values", where the response is a weighted average over the values based on the query-key interactions. This results in a quadratic memory and computational complexity, that has inhibited the use of Transformers to those without significant GPU infrastructure and prevented applications to longer sequences. Ever since, a myriad of approaches has been proposed to approximate the self-attention mechanism, with the vast majority trading some amount of accuracy for speed or memory use. The "market" of self-attention strategies currently offers various trade-offs in the total package of speed, memory use, and accuracy. We test our method in two settings: using the Long Range Arena (LRA) to compare with prior approaches and a real-world task in malware detection. These results show several benefits to the Hrrformer: it is near state-of-the-art in terms of accuracy, and one of only two methods to improve upon the original Transformer for all tasks in the LRA. The Hrrformer sets a new benchmark for state-of-the-art speed and memory use, processing 28× more samples/second and using 79.15% less memory than the best prior art for each respective metric. The Hrrformer converges in 10× fewer epochs and is effective with just a single layer. Combined this makes the Hrrformer up to 280× times faster to train. On our malware classification task, we find that the relative accuracies of Transformer models change from the LRA benchmark, but that our Hrrformer still obtains the best accuracy and scales the best with sequence length up to T = 131, 072, as demonstrated in Figure 1. The remainder of our manuscript is organized as follows. Work related to our own, as well as adjacent techniques beyond our study's scope, is reviewed in section 2. 
The recasting of attention in our Hrrformer is a simple procedure demonstrated in section 3, which redefines the Attention function using HRR, and multi-headed self-attention then continues as normal. We then demonstrate these benefits in section 4, showing Hrrformer is consistently one of the best methods with respect to accuracy and considerably faster thanks to reduced memory usage, the number of layers, and epochs needed to converge. In section 5 we draw conclusions from out work. Related Works Since the introduction of the Self-Attention mechanism and the transformer architecture, considerable research has occurred to mitigate its computational burdens. Though not explicit in much of the current literature, many of these approaches resemble strategies for improving Support Vector Machines that have similar complexity. This includes projection (Kaban, 2015) to a lower dimension , finding/creating sparse structure in the correlations (Wang et al., 2014) by (Kitaev et al., 2020;Child et al., 2019;Tay et al., 2020b;Beltagy et al., 2020;Zaheer et al., 2020), using randomized features (Rahimi & Recht, 2007;Sinha & Duchi, 2016) by (Choromanski et al., 2020), factorized or budgeted representations (Si et al., 2016;Wang et al., 2010) by (Xiong et al., 2021;Ma et al., 2021), and creating simplified linear approximations (Wang et al., 2011;Kantchelian et al., 2014) by (Katharopoulos et al., 2020). Other more differentiated approaches include the hierarchical decomposition of the correlations (by (Zhu & Soricut, 2021)), and approaches that replace self-attention entirely with alternative "mixing" strategies (Tay et al., 2020a;Lee-Thorp et al., 2021). To the best of our knowledge, ours is the first work that attempts to re-create the same logic of self-attention with the HRR. Among these prior methods, we note that F-Net (Lee- Thorp et al., 2021) is the most closely related as both F-Net and HRR rely upon the Fast Fourier Transform (FFT) as a fundamental building block. While F-Net does not approximate self-attention so much as replace it with an alternative "mixing" procedure, we include it due to its relevance in using the FFT. Our results will show significant improvement over F-Net, highlighting the value of a neuro-symbolic approach to reconstructing the same logic as opposed to using the FFT as a generic differentiable mixing strategy. The HRR has seen successful use in cognitive science research (Jones & Mewhort, 2007;Blouw & Eliasmith, 2013;Blouw et al., 2016;Eliasmith et al., 2012;Singh & Eliasmith, 2006;Bekolay et al., 2014), but comparatively little application in modern deep learning. The symbolic properties have been previously used in knowledge graphs (Nickel et al., 2016) and multi-label classification (Ganesan et al., 2021). There is limited use of HRRs for sequential modeling. (Plate, 1992) proposed an HRR-based Recurrent Neural Network (RNN), while other work has used complex numbers inspired by HRRs but not actually used the corresponding operations (Danihelka et al., 2016). An older alternative to the HRR, the Tensor Product Representation (TPR) (Smolensky, 1990) has been used to endow associative memories (Le et al., 2020) and RNNs with enhanced functionality (Huang et al., 2018;Schlag & Schmidhuber, 2018). Compared to these prior works, we are re-casting the logic into HRRs, rather than augmenting the logic. However, we slightly abuse the assumptions of HRRs to make our method work. 
A strategic design allows us to effectively remove additionally created noise via the softmax function. In addition, the TPR's complexity is exponential in the number of sequential bindings, making it a poor choice for tackling the scaling problems of self-attention. Other recent approaches to sequential modeling such as Legendre Memory Units (Voelker et al., 2019), IGLOO (Sourkov, 2018), and State Space Models (Gu et al., 2022;Goel et al., 2022;Gu et al., 2021; are highly promising. We consider these, along with RNNs, beyond the scope of our work. Our goal is to explore the value of recasting self-attention within the neuro-symbolic framework of HRR. As such, other sequence modeling approaches are out of scope. The need for both less memory and extension to very long sequences is also important in malware detection. Processing malware from raw bytes has been found to be one of the most robust feature types in the face of common malware obfuscations (Aghakhani et al., 2020), but simple ngram based features have been maligned for being unable to learn complex sequential information when executable can be tens of kilobytes on the small side and hundreds of megabytes on the larger side (Kephart et al., 1995;Abou-Assaleh et al., 2004;Kolter & Maloof, 2006;Raff et al., 2019;Zak et al., 2017). Given that a maximum T = 200M is realistic, many strategies to handle such sequence lengths have been developed. These include attempts to create "images" from malware (Nataraj et al., 2011;Liu & Wang, 2016), using compression algorithms as a similarity metric (Li et al., 2004;Walenstein & Lakhotia, 2007;Borbely, 2015;S. Resende et al., 2019;Menéndez et al., 2019;Raff & Nicholas, 2017;Raff et al., 2020), and attempts to scale 1Dconvolutional networks over raw bytes (Krčál et al., 2018;Raff et al., 2018;. We will use the Ember (Anderson & Roth, 2018) dataset for malware detection as a real-world test of our new selfattention for processing long sequences. It has been observed empirically that "best practices" developed in the machine learning, computer vision, and natural language processing communities do not always transfer to this kind of data. For example, this phenomenon has been observed with CNNs (Raff et al., 2018) and Transformers for malicious URL detection (Rudd & Abdallah, 2020). Most recently, (Rudd et al., 2022) attempted to apply Transformers to raw byte prediction and had to use a chunked attention that limits the attention window (Sukhbaatar et al., 2019). Using Hrrformer we show much longer sequence processing than this prior work, while simultaneously demonstrating that our method generalizes to a domain that is notorious for a lack of transfer. This increases our confidence in the effectiveness of our method. Notably, the two current stateof-the-art Transformers as measured by the Long Range Arena (LRA) (Tay et al., 2020c) benchmarks do not pass this test, performing considerably worse on the malware task. Attention with Holographic Reduced Representations The HRR operation allows assigning abstract concepts to numerical vectors, and performing binding ( ) and unbinding operations on those concepts via the vectors. One could bind "red" and "cat" to obtain a "red cat". The vectors can also be added, so "red" "cat" + "yellow" "dog" represents a "red cat and yellow dog". An inverse operator † is used to perform unbinding. One can then query a bound representation, asking "what was red?" 
by unbinding "red cat and yellow dog" "red" † to get a vector ≈ "cat", where the resulting vector is necessarily corrupted by the noise by combining multiple vectors into a single fixed size representation. To perform this symbolic manipulation the binding operation can be defined as where F denotes the FFT and ⊙ an element-wise multipli-cation 1 . The inversion is defined as y † = F −1 1 F (y) . Combined Plate showed that the response B ⊤ y † should be ≈ 1 if the vector y ∈ B, and ≈ 0 if not present. These properties hold in expectation provided that all vectors satisfy the sufficient condition that their elements are I.I.D. sampled from a Gaussian with zero mean and variance 1/H, where H is the dimension of the vectors. We will now show how to apply the same general logic of attention using HRR operations, creating an alternative (but not mathematically equivalent) form of self-attention that runs in linear time with respect to the sequence length. This is a slight "abuse" of the HRR, as our vectors will not be I.I.D. sampled random values, but results from prior layers in the network. Our design circumvents this issue in practice, which we will discuss shortly. We note this is a satisfying, but not required condition. Deviating from this adds more noise (our vectors are the outputs of prior layers in the network), but a softmax operation will act as a cleanup step to work without this condition. Attention can be represented using queries Q, keys K, and values V matrices where the final output is computed as the weighted sum of the values. A query vector can be mapped to a set of linked key-value pairs to retrieve the value vector associated with the associated key. The concept of binding and unbinding operations of HRR is applied to link the key-value pair (i.e., bind the terms together), and then query a single representation of all key-value pairs to find the response values. For this reason, we will define the steps in an element-by-element manner that more naturally corresponds to the HRR operations, but our implementation will work in a batched manner. For this reason, we will discuss a single query q t ∈ R H , against the set of T key/value pairs k t , v t ∈ R H , where H is the dimension of the representation and t ∈ 1, 2, · · · T . Thus K = [k 1 , k 2 , . . . k T ] is a matrix of shape (T, H), and similar for Q and V . First, we will create a superposition β ∈ R H of the keyvalue pairs, meaning that all vectors entering the superposition β are also similar (to some degree) to the final result. This is done by binding ( ) each key-value pair to associate them, and summing the results to form the superposition: This now gives us a single vector β that represents the entire sequence of T different key-value pair bindings. Now for each query we are interested in, we can obtain a vector that approximately matches the values v 1,2,...,T via the symbolic property of HRRs that The queries are checked against the representation of all keyvalue pairs β, where each q t will contribute a corresponding value based on the response of the bound key, and the HRR framework allows us to perform them jointly. This now gives us a representationv t ∈ R H that represents the set of values present given the keys that respond to the input queries. 
We can then approximately determine the values present using the dot-product test that present values should result in ≈ 1 scalars, performing: Each a t is a scalar given the match between the original value v t against the HRR extractedv t , and is repeated for all T values to give us a response on the relative magnitude of each value present. With these approximate responses, we can compute a weighted distribution w ∈ R T by computing the softmax over all a 1,2,...,T responses, giving w = softmax(a 1 , a 2 , . . . , a T ) 2 . While each a t will be highly noisy due to the inherent noise of HRR's superposition β, and an amplified level of noise due to the use of non-I.I.D. Gaussian elements, the softmax has the practical effect of removing this noise for us. This occurs because the HRR results in similar magnitude noise across each a t , and the softmax operation is invariant to constant additions to all elements. For notational convenience to express this in more detail, letΠ h (x 1 , . . . , x k ) denote the pairwise interactions of the h'th term in evaluating an expression of where all bold symbols are H dimensional vectors. The response of any query of the form q = x m + z takes the form . In doing so we see that any noise vector z has a similar magnitude impact regardless of the target vector x m . Because the softmax is invariant to uniform magnitude adjustments to all inputs, and we have the same noise occurring for each computation, we get the behavior of the softmax effectively denoising the response due to the magnitude impacts. We discuss this further in Appendix D. This softmax-based cleanup step is necessary because attempting to usev t directly results in degenerate randomguessing performance due to the noise of the HRR steps. With w in hand, we obtain the final Attention result 2 We find no meaningful difference in results when using a temperature softmax(exp(α)[a1, . . . , aT ]). returning a weighted version of the original values V , approximating the standard attention's response. Critically, this process is linear in T and approximates an all pairs interaction between queries and keys, as shown by Theorem A.1 The rest of self-attention works in the same manner as the standard Transformer. The Attention function's inputs and outputs are altered by linear layers, and instead of performing single attention, we split the feature vector H of the query, key, and value into h heads each having a feature size of H ′ = H/h. The attention is computed in parallel in each head and then merged into single attention which is projected to get the final output. The Hrrformer is implemented using JAX and a code snippet of the self-attention mechanism is presented in Appendix A. The block diagram representation of the Hrrformer self-attention is presented in Figure 2. The diagram is shown for single head and single batch elements for brevity. A high-level overview of the architecture in a multi-head setting is presented in Figure 3 showing the analogy between Hrrformer and Transformer. This simple approach allows us to have fully replicated the same overall logical goals and construction of the attention mechanism first proposed by (Vaswani et al., 2017). The correspondence is not exact (e.g., returning weight original values instead of approximate value constructions), but allows us to avoid the non-I.I.D. issue of using arbitrary Q, K, and V as learned by the network. This neuro-symbolic re- construction yields several benefits, as we will demonstrate in the next section. 
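The derivation above can be made concrete with a few lines of code. The sketch below is a simplified single-head version written with jax.numpy, assuming (T, H) query, key, and value arrays; it illustrates the mechanism as described, not the authors' snippet (their JAX implementation appears in their Appendix A). The exact-inverse unbinding can be numerically sensitive when a Fourier coefficient of the query is close to zero, and returning each value scaled by its weight is our reading of "a weighted version of the original values."

import jax.numpy as jnp
from jax import nn, random

def bind(x, y):
    # Circular convolution in the Fourier domain: F^-1(F(x) * F(y)).
    return jnp.fft.irfft(jnp.fft.rfft(x) * jnp.fft.rfft(y), n=x.shape[-1])

def unbind(s, y):
    # Unbind y from s using the exact inverse y_dagger = F^-1(1 / F(y)).
    return jnp.fft.irfft(jnp.fft.rfft(s) / jnp.fft.rfft(y), n=s.shape[-1])

def hrr_attention(q, k, v):
    """Linear-time HRR attention for one head; q, k, v have shape (T, H)."""
    beta = bind(k, v).sum(axis=0)          # superposition of all key-value bindings
    v_hat = unbind(beta[None, :], q)       # noisy value retrieved by each query
    a = jnp.sum(v * v_hat, axis=-1)        # dot-product test against the true values
    w = nn.softmax(a)                      # softmax cleanup / weighting over T positions
    return w[:, None] * v                  # weighted version of the original values

# Toy check: T = 8 positions, H = 64 features, elements with variance ~1/H.
key = random.PRNGKey(0)
q, k, v = random.normal(key, (3, 8, 64)) * (1 / 64) ** 0.5
out = hrr_attention(q, k, v)               # shape (8, 64)

In the multi-head case, the same function is applied independently to each H' = H/h slice of the features, as described above.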
Simply replacing the self-attention in a standard Transformer with our HRR-based self-attention gives the "Hrrformer" that we will use to judge the utility of this new derivation. Experiments and Results The proposed Hrrformer is designed as an inexpensive alternative to the self-attention models for longer sequences. Experiments are performed to validate the effectiveness of the method in terms of time and space complexity in known benchmarks. Our first result is running many of the current popular and state-of-the-art (SOTA) xformers on the real-world classification task of the Ember malware detection dataset (Anderson & Roth, 2018). This provides an example where the need to handle ever longer sequences exists and demonstrates that Hrrformer is one of the fastest and most accurate options on a problem with complex real-world dynamics. In doing so we also show that current SOTA methods such as Luna-256 do not generalize as well to new problem spaces, as our Hrrformer does. Our second result will use the Long Range Arena (LRA) (Tay et al., 2020c) which has become a standard for evaluations in this space. The primary value of these results is to compare our Hrrformer with numerous prior works, establishing the broad benefits of faster time per epoch, convergence in 10× fewer epochs, requiring only a single layer, and competitive overall accuracy. In addition, the LRA results are more accessible to the broader ML comunity and allow us to show visual evidence of HRR based attention learning to recover complex structure from a one-dimensional sequence. EMBER EMBER is a benchmark dataset for the malware classification task (Anderson & Roth, 2018). The benchmark contains 600K labeled training samples (300K malicious, 300K benign) and 200K labeled test samples (100K malicious, 100K benign). The maximum sequence length of this dataset is over 100M which is not feasible for any of the self-attention models to train with. We experiment with relatively shorter sequence lengths starting from T = 256 and doubling up to T = 131072 by truncating or padding the bytes until this maximum length is reached. In this benchmark, Hrrformer is compared with Transformer (Vaswani et al., 2017), H-Transformer-1D (Zhu & Soricut, 2021), Luna-256 (Ma et al., 2021), Performer (Choromanski et al., 2020), Linformer , and F-Net (Lee- Thorp et al., 2021). All use 8 heads of a single encoder with 256 embedding size and 512 hidden size of the feed-forward network. Because this is a binary classification task, the encoder output is mapped into 2 logits output using back-to-back dense layers with ReLU activation. During training, the softmax cross-entropy loss function is optimized. For sequence length 256, the batch size is set to be 256. In the experiment, as the sequence length doubles, we halved the batch size to fit the data and the model to the memory which can be expressed as max(2 16−log 2 T , 1). This is done to push other models to the maximum possible length, and keep the batch size consistent between experiments. Additionally, a timeout limit of 10, 000s per epoch is set before experiments are terminated. The dropout rate is chosen to be 0.1, the learning rate is 10 −3 with an exponential decay rate of 0.85. Each of the models is trained for a total of 10 epochs in 16 NVIDIA TESLA PH402 32GB GPUs. Figure 1 shows the classification accuracy of each of the methods for incremental sequence length from 512 to 131072. 
As the sequence length increases, Hrrformer outperforms the rest of the models achieving the highest 91.03% accuracy for maximum sequence length 16384. In terms of execution time F-Net is the only model that is faster than ours, however the accuracy of F-Net is an absolute 4.53% points lower (Table 1). Even after exponentially decaying batch size, we could not fit the standard Transformer model to the memory for the sequence length 8196 indicating out-of-memory (OOM) in all figures. H-transformer-1d and Luna-256 crossed the timeout limit for sequence length 16384 indicated out-of-time (OOT) in the figure. The detailed numeric results are presented in Appendix B with additional results for the sequence length of 256. The execution time for linear time complexity methods seems quadratic in the figure; this is due to the exponential decay of the batch size with the increase of sequence length, which was necessary to push each model to its maximum possible sequence length. The more detailed timing information can be seen in Figure 4, where all models but F-Net and Hrrformer run out of time or memory before reaching the maximum sequence length. Note as well that as the sequence length increases, the already small difference in runtime between F-Net and Hrrformer reduces to near-zero. 2 9 2 10 2 11 2 12 2 13 2 14 2 15 2 16 2 17 Maximum Sequence Length Of significant importance to our results is that Luna-256 performs considerably worse than all other options, compared to its top accuracy in the LRA. We hypothesize that the Ember task requires more complex reasoning and feature extraction over time and because Luna performs aggressive compression and approximation of the time component of the model it suffers in terms of accuracy. Our Hrrformer on the other hand has consistent behavior across Ember and the LRA: high accuracy, able to handle longer sequences, and convergence in few epochs, a requirement for working on this dataset which is 1 TB in size and is otherwise prohibitive in its scale. Long Range Arena The Long Range Arena (LRA) (Tay et al., 2020c) benchmark comprises 6 diverse tasks covering image, text, math, language, and spatial modeling under long context scenarios ranging from 1K to 16K. ListOps -task inspects the capability of modeling hierarchically structured data in a longer sequence context with mathematical operators MAX, MEAN, MEDIAN, and SUM MOD enclosed by delimiters. This is a ten-way classification problem with a maximum sequence length of 2K. Text -is a byte/character level classification task using the IMDB movie review (Maas et al., 2011) dataset. Character-level language modeling makes the models reason with compositional unsegmented data.This is a binary classification task with a maximum sequence length of 4K. Retrieval -evaluates the model's ability to encode and compress useful information for matching and retrieval by modeling similarity score between two documents. For this task, the ACL Anthology Network (Radev et al., 2013) dataset is used in a character level setup. This task has a maximum sequence length of 8K and this is a binary classification task. Image -is an image classification task of 10 classes that uses grayscale CIFAR-10 dataset in a sequence of length 32 × 32 = 1024. This task allows assessing the model's ability to process discrete symbols. Pathfinder -task evaluates the model's performance over long-range spatial dependency. 
This is a binary classification task, introduced in (Linsley et al., 2018), that classifies whether two circles are connected by a line, and it includes distractor paths. The images have dimension 32 × 32, which is reshaped into a sequence of length 1024. Path-X -is an extremely difficult version of the Pathfinder task which contains images of dimension 128 × 128 = 16384 with additional distractor paths. In Hrrformer, we use the same number or fewer parameters as mentioned in the LRA benchmark (Tay et al., 2020c) across the tasks, and a list of hyper-parameters used in each task is provided in Appendix B. Global average pooling is applied to the output of the encoder sequences and subsequently back-to-back dense layers are used with ReLU activation to get the final logits output. During training, the softmax cross-entropy loss function is optimized using the Adam optimizer. We use an exponentially decaying learning rate with an initial value of 10 −3 and a final value of 10 −5 . For all the tasks, Hrrformer is trained for a total of 20 epochs in both the single- and multi-layer case, which is 10× less training than previous works. The results in terms of accuracy in all the tasks of the LRA benchmark are presented in Table 1. Ours is one of only two methods that improve accuracy upon the Transformer, and it consistently displayed higher performance in all the tasks. We show the performance for both single and multiple layers. In 3 of the 5 tasks (ListOps, Text, Image), Hrrformer achieves the second-best results using a single layer.
Figure 5. Visualization of weight vector w ∈ R 1024×1 reshaped to 32 × 32, the shape of the original image of the CIFAR-10 dataset used in the LRA Image classification task. A single-layer Hrrformer is able to learn the 2D structure from the 1D sequence of the image. This is particularly noticeable in the Airplane, Dog, Frog, and Horse images. Note that context-sensitive head activation can be observed by comparing Head 3 for Dog vs. Frog, where activation occurs for different pixel intensities, indicating the model is not naively activating for simple color intensity.
The ability to learn with a single layer aids in both throughput and memory use. The result is surprising, and in visualizing the weight vector w we can confirm that a single layer is sufficient to learn the structure. We show this for the Image task of single-layer Hrrformer in Figure 5 (multi-layer in Appendix C). Here, the weight vector w ∈ R 1024×1 is reshaped to 32×32, the shape of the original grayscale images of the CIFAR-10 dataset, for visualization. From the figure, it is clear that the Hrrformer is learning to identify the 2D structure from the 1D sequence of the Image classification task. We also compare against the standard Transformer in Appendix Figure 10, where it is less obvious how the model's weights might correspond to the 2D structure of the image. Hrrformer's benefits go beyond accuracy and convergence speed: it is fast and consumes the least amount of memory on GPU of the alternatives tested.
Figure 6. Performance (y-axis) and speed (x-axis, log-scale) of different xformers, with the memory footprint on GPU illustrated by the size of the circles. Hrrformer is in the top-right of the graph, with the smallest circle size, indicating it is the fastest and most memory efficient for training (this does not factor in convergence speed).
Figure 6 compares all the self-attention models in terms of LRA score, speed (training examples per second), and memory footprint (size of the circle). 
LRA score is the mean accuracy of all the tasks in the LRA benchmark. Speed and memory footprint is calculated on the byte-level text classification task per epoch. To measure these results, a single NVIDIA TESLA PH402 32GB GPU is utilized with a fixed batch size of 4 and a maximum sequence length of 4000 with an embedding size of 32 and feature size of 64. For all the models 6 layers of the encoder are used. Both single-and multi-layered Hrrformer are 28× and 10× faster than the Luna-256 (Ma et al., 2021) which has achieved the highest accuracy in the LRA benchmark. Hrrformer also consumes the least amount of memory, taking 79.15% and 70.66% less memory compared to Luna-256 in the case of single and multi-layered Hrrformer, respectively. The detailed numeric results of Figure 6 are given in Appendix B. Hrrformer also reduces the amount of overfitting between training and test performance. We compare the training and test accuracy, and amount of overfitting of the Image classification task to the other self-attention models presented in LRA benchmark (Tay et al., 2020c) and for which data are available 4 . Table 2 exhibits that the Hrrformer acquires the best results on the test set with an 6.83% train/test gap. The learning curves of all the task is also presented in Appendix Figure 8 demonstrating the lower overfitting nature of the Hrrformer across the tasks. Table 2. Training and test accuracy of different self-attention models on the Image classification task. Among all the models, Hrrformer achieves the best test accuracy with the least amount of overfitting (lower is better). Model Train Accuracy (%) ↑ Test Accuracy (%) ↑ Overfitting (%) ↓ Hrrformer's inference time is also faster than other options for long sequences. As an example, the time to make predictions for the text classification task is given in Appendix Table 7, where the single-layer Hrrformer is the fastest op-tion, followed by the multi-layer Hrrformer. We also find Hrrformer's inference time is relatively faster regardless of the batch size. The inference time for the Hrrformer with a batch size of 2 is still 5× faster than the inference time for the Transformer with a batch size of 32. More details are presented in Appendix Table 6. Conclusion The Hrrformer is a neuro-symbolic reconstruction of selfattention. The proposed method is faster in compute and consumes less memory per layer. We have tested Hrrformer on known LRA and EMBER benchmarks. In the LRA benchmark, Hrrformer has achieved the near state-of-the-art accuracy of 60.83% using a single layer of an encoder. In terms of speed, it is 28× and 10× faster than the current SOTA in the case of single and multiple layers, respectively. Additionally, it takes 79.15% and 70.66% less memory on GPU compared to Luna-256 for single and multiple layers of Hrrformer. Moreover, it converges 10× faster than other self-attention models. In the EMBER malware classification dataset, Hrrformer attained the highest 91.03% accuracy for a maximum sequence length of 16384 with a significantly faster processing rate. In conclusion, Hrrformer is ≈ 280× faster to train and a single layer of the encoder is sufficient to learn the structure of the input. B. Hyperparameters & Numeric Results The hyperparameters used in each task of the Long Range Arena (LRA) benchmark and EMBER malware classification task are presented in Table 3. In all of the tasks, the Adam optimizer is used with an exponential decay learning rate. 
The starting learning rate is 10 −3 and the final learning rate is 10 −5 . The decay rate indicates the amount of learning rate decay per epoch. MLP dim indicates the number of features used in the first linear layer of the MLP block after the attention block. (Ma et al., 2021) in the LRA score. However, in terms of speed, single-and multi-layered Hrrformer are 28× and 10× faster than Luna-256. Moreover, Hrrformer consumes 79.15% and 70.66% less memory than Luna-256 in the case of single and multi-layered Hrrformer, respectively. The numerical results of EMBER malware classification are presented in Table 5. From the table, it can be observed that as the sequence length increases, Hrrformer surpasses the other models, and for the sequence length 16, 384, has achieved the highest accuracy of 91.03%. In addition we provide the time to perform inference over the entire LRA text classification task for batch sizes varying between 2 and 32. This is shown in Table 6, where the time decreases as batch size increases due to reduced overhead and higher GPU compute efficiency. As can be seen the Hrrformer is uniformly faster, and more consistent in total run-time. Similarly, our method is faster for larger and small batch sizes, a particularly valuable benefit in inference where batching is not always possible. This can be seen in Table 7, where the inference time for the Hrrformer with a batch size of 2 is still 5× faster than the inference time for the Transformer with a batch size of 32. Where prior works required 200 epochs of training, we can see that 20 epochs are sufficient for our Hrrformer. In most of the tasks, the 10-epoch performance of our Hrrformer is still highly competitive. C. Weight Visualization The weight vector w is visualized for LRA image classification task. In this task, grayscale images of the CIFAR-10 dataset of dimension 32 × 32 are reshaped into a sequence of length 1024. Therefore, the weight vector has the shape of R 1024×1 . This vector is reshaped back to 32 × 32 for visualization which shows where in the image the weight vector of each head puts its attention. Figure 9 demonstrates the attention map of the 4 heads in each of the 3 layers of Hrrformer for all the CIFAR-10 classes. For the standard Transformer, the responses are a matrix of cross-correlations rather than a single vector. This makes the response more difficult to interpret. To visualize in the same manner we average the response of correlations with respect to a single item t to get the same 1024 shape, and visualize the results in Figure 10. As can be seen, the identification of structure is not as obviously. D. How Softmax "Denoises" Dot Product To understand how we can use the softmax operation as a kind of denoising step, consider the H dimensional vectors a, b, c, d, and z. If each element of all these vectors is sampled from N (0, 1/H), then we would expect that (a⊗b+c⊗d) ⊤ a † ≈ 1. Similarly, the value z is not present, so we expect that (a ⊗ b + c ⊗ d) ⊤ z † ≈ 0. Now let us consider our use case, where the I.I.D. property is not true, and the query that is a noisy version of a present item. For simplicity of notation, we will use the explicit case of H = 2 dimensions. We can query for a + z get: Similarly if we query with c + z we instead get: Notice that in both cases we have shared terms that are multiplied and added together. Under the sufficient conditions of I.I.D Gaussian, the linearity of expectation results in these terms canceling out into a single random variable with a zero mean. 
However, these also have the artifact in our application that for a non-present query, the response magnitude will have a similar value due to the repeated shared terms. We can simplify our understanding of this by imagining that there is an additional noise constant ϵ that we must add to each noise term. Then when we apply the softmax operation, we obtain the benefit that the softmax function is invariant to constant shifts in the input, i.e., ∀ϵ ∈ R, softmax(x + ϵ) = softmax(x). Thus, we get the practical effect of softmax removing noise that we incur for not using I.I.D. Gaussian as the elements of our vectors.
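A toy numerical check of the invariance used in this argument is given below: adding the same constant to every response leaves the softmax output unchanged. The particular scores and the offset of 7.5 are arbitrary values chosen for the illustration.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([2.3, 0.4, -1.1])          # noiseless responses a_t
noisy = scores + 7.5                         # the same "noise floor" added to every a_t
print(np.allclose(softmax(scores), softmax(noisy)))   # True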
2023-06-01T01:16:03.078Z
2023-05-31T00:00:00.000
{ "year": 2023, "sha1": "ddcfdcab9e339e38bfb27e862c41e38169f809d9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ddcfdcab9e339e38bfb27e862c41e38169f809d9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
92804637
pes2o/s2orc
v3-fos-license
Effect of planting density and training on plant health and seed quality of bell pepper (Capsicum annuum L.) under protected conditions

A study was conducted at the Department of Seed Science and Technology, Dr. Y. S. Parmar University of Horticulture and Forestry, Nauni, Solan (H.P.), India during Kharif 2012 to evaluate the effects of different planting densities and training systems on plant health (powdery mildew severity) and seed quality of bell pepper cv. Solan Bharpur under protected conditions (polyhouse). The experiment was laid out in a naturally ventilated polyhouse using three planting densities (S1 45×15 cm, S2 45×30 cm and S3 45×45 cm) and four training levels (T1 single shoot, T2 two shoots, T3 three shoots and T4 four shoots) with three replicates. The combination S2T2 (plants spaced at 45×30 cm and trained to two shoots) was found superior to all other treatments in terms of seed yield per plant and per hectare (18.00 g and 959.87 kg, respectively) and was at par with the best treatments for the important quality characters. The treatment combination S3T1 (plants spaced at 45×45 cm and trained to a single shoot) resulted in the lowest powdery mildew severity (21.21 %) and performed best for seed quality characters viz. 1000-seed weight, germination percentage, seedling length, seedling dry weight and seedling vigour index-I & II (6.32 g, 95.75%, 10.86 cm, 3.26 mg, 1039.77 and 312.34, respectively), but it gave a lower seed yield and is thus uneconomical. Therefore, a planting density of 45×30 cm in combination with the two-shoot training system can be recommended for commercial seed production of bell pepper under protected conditions.

INTRODUCTION

Bell pepper belongs to the family Solanaceae and is adapted to a wide variety of climates, but production is concentrated in a few warm and rather dry areas (Thouraya and Leila, 2015). The diverse climate of India and the agro-climatic conditions of the plains sometimes prove to be non-conducive for seed production of bell pepper due to various biotic and abiotic stresses. Crops grown in the open field are often exposed to fluctuating levels of temperature, rain, humidity, wind flow, etc.,
which may affect bell pepper seed production adversely. These complications in seed production can be overcome by carrying out the seed production programme in a protected environment with improved agro-techniques. The greenhouse, the latest word in Indian agriculture, is one such means, where plants are grown under a controlled or partially controlled environment; polyhouses or greenhouses can thus be utilized to obtain higher yields of quality seed. In greenhouse cultivation, appropriate cultural practices such as plant densities and training systems are emphasized to enhance productivity by utilizing the available space and resources. Optimum plant density allows cultural practices to be carried out easily, reduces the competition among the plants for resources and also prevents diseases from proliferating. By choosing an appropriate plant density, one can properly utilize the land by accommodating a greater number of plants over an area. The yield of sweet pepper has been reported to be dependent on the number of plants accommodated per unit area (Duimovic and Bravo, 1979). Spatial arrangement of plants is a crop management practice that has been used to increase yield per unit area in greenhouse sweet pepper (Ahirwar and Hedau, 2015). On the other hand, a suitable training system will not only facilitate better management and uniform light to the plants, but also permit closer planting and early ripening of fruits and give higher yields of good-quality seed. In greenhouse bell pepper, fruit development is controlled by limiting the branching pattern to 2, 3 or 4 main stems (Jovicich et al., 2004). Training systems vary with different growth habits and plant densities. A good combination of spacing and training level enhances air circulation, which reduces the relative humidity and thus prevents disease proliferation. Therefore, developing appropriate training and spacing practices under protected conditions can positively accelerate the production level on the same piece of land with similar inputs as required under open-field conditions. Moreover, seed production under polyhouse conditions with such agro-techniques has been practised in only a few crops. Therefore, keeping in view the above facts, the present study was planned to find out the optimum planting density and training system for quality seed production of bell pepper under protected conditions.

MATERIALS AND METHODS

The present investigation was carried out in the Department of Seed Science and Technology, Dr.
Y. S. Parmar University of Horticulture and Forestry, Nauni, Solan during Kharif 2012 in bell pepper cv. Solan Bharpur. The experiment was laid out in a Randomized Complete Block Design (factorial) in the field and a Completely Randomized Design (factorial) in the laboratory, with four replicates. The seedlings were transplanted at three planting densities (S1 45×15 cm, S2 45×30 cm and S3 45×45 cm) in a naturally ventilated (top- and side-ventilated) polyhouse equipped with a drip irrigation system. After establishment, the plants were trained to four levels (T), viz. T1 (single shoot), T2 (two shoots), T3 (three shoots) and T4 (four shoots). Recommended doses of manures and fertilizers were applied. The observations in the field were made on five randomly selected plants from each replication. The per cent disease index (PDI) was calculated to assess powdery mildew severity using the formula given by Wheeler (1969), based on the leaf-area-affected scale, starting from the peak vegetative stage and subsequently recorded at 15-day intervals as proposed by Ullasa et al. (1981), i.e. 1 no symptoms; 2 with 10 % of leaf area affected; 3 with 11-20 % of leaf area affected; 4 with 21-50 % of leaf area affected and 5 with 51 % or more of leaf area affected. For seed yield parameters, the seeds were extracted to calculate the number of seeds per fruit. Seeds were dried to 8 % moisture content in the shade, then weighed and averaged to work out seed yield per plant and per hectare. In the laboratory, the test weight (1000-seed weight) was worked out. From all replicates, 100 seeds were subjected to germination at 25 °C using the paper-roll method to assess the percentage of seeds germinated, seedling length, seedling dry weight (dried at 60 °C for 48 hours) and seedling vigour. The statistical analysis of the observations was done as per the design of the experiment as suggested by Gomez and Gomez (1984).

RESULTS AND DISCUSSION

Effect on health of bell pepper: Bell pepper (Capsicum annuum L.) plants grown at a spacing of 45×45 cm and trained to a single shoot were least susceptible to powdery mildew, with a severity of 21.21 % (Table 1). Savinove (Bidari et al., 1985). Low temperature after a dry spell is a predisposing factor for this disease to proliferate. Yarwood et al. (1954) considered that, among plant pathogens, the powdery mildews have a low optimum temperature, averaging 21 °C. Widely spaced plants with low shoot density allow air and light to pass through them, which hinders the conidia from germinating, and optimum spacing between the plants also prevents the disease from spreading from infected to healthy plants. Moreover, adequate plant spacing helps to obtain good coverage when fungicides are used. On the other hand, high plant and shoot density creates a congenial environment for the proliferation of the disease. Effect on seed yield: The combination S3T1 (45×45 cm and single shoot) produced fruits with the maximum number of seeds per fruit (198.14). Widely spaced, single-shoot plants bear larger fruits with bold seeds, and fruit size is correlated with the number of seeds (Wien, 1997). But according to Khurana et al. (2002), spacing has no effect on the number of seeds per fruit in capsicum. Significantly higher seed yields per plant and per hectare (18.00 g and 959.87 kg, respectively) resulted from S2T2, i.e. plants spaced at 45×30 cm with two shoots (Table 1). Sanchez et al. (1993) and Lal et al. (2014) reported maximum seed yield per plant at wider spacing in bell pepper, while Singh et al. (1989) and Khurana et al.
(2002) reported lower seed yield per m² and per hectare with wider spacing in chilli. Good management of pests and diseases, favourable conditions for healthy growth and the extended growing season in the polyhouse all added up to the production of quality seed. Moreover, the CO2 released by the plants during respiration could not escape easily from the polyhouse. The fact that C3 plants respond to higher CO2 concentrations with increased rates of photosynthesis, leading to higher productivity, has been used for greenhouse crops such as bell pepper and tomatoes. Another role the polyhouse played is that it maintained the temperature during the growth cycle, which resulted in healthy growth of the plants. The dark reactions, being enzymatic, are temperature-controlled, and an optimum temperature in the range of 25-35 °C is required for a good photosynthetic rate (Rani et al., 2011). The occurrence of cool temperatures during fruit set could also reduce the number of seeds per fruit, and thus lower seed yield (Rylski, 1973). In the field, bell pepper can be harvested over a period of 2-3 months, but in the greenhouse the production season can be extended to 8 months (Wien, 1997). This extended crop season resulted in the higher seed yields of bell pepper. Effect on seed vigour attributes: The maximum 1000-seed weight, germination percentage, seedling length, seedling dry weight and seedling vigour index-I & II (6.32 g, 95.75%, 10.86 cm, 3.26 mg, 1039.77 and 312.34, respectively) were recorded at the 45×45 cm spacing with the single-shoot training system (Table 2). The results for 1000-seed weight are in conformity with the findings of Everett and Subramanya (1985) and Dharmatti and Kulkarni (1988), who reported that wider spacing resulted in a higher 1000-seed weight in bell pepper. Sanchez et al. (1993) reported that increasing the 1000-grain weight increases the fresh and dry weight of tomato seedlings. Sajjan et al. (2004), in okra, observed greater seedling length and dry weight in widely spaced plants. Therefore, it can be concluded that bold bell pepper seeds with a higher test weight produce more vigorous seedlings. Optimum plant density and shoot pruning increase the quality of fruit and seed, which in turn leads to better establishment of more vigorous plants of polyhouse-grown bell pepper.

Conclusion

From the present study it can be concluded that the treatment combination S2T2 (bell pepper plants spaced at 45×30 cm and trained to two shoots) was superior for seed yield, whereas S3T1 was superior for plant health and for seed vigour attributes such as 1000-seed weight, germination and seedling vigour, but resulted in lower yields. Moreover, the treatment combination S2T2 was at par with the best treatments for most of the other important characters. Therefore, planting density S2 (45×30 cm) in combination with training system T2 (two shoots) may be recommended to obtain higher yields in commercial seed production of bell pepper under protected conditions.

Table 2. Effect of planting density and training on seed quality of bell pepper under protected conditions during the year 2012.
Table 1. Effect of planting density and training on powdery mildew severity and seed yield of bell pepper under protected conditions during the year 2012.
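As a quick illustration of the severity and vigour calculations described in Materials and Methods, the short Python sketch below encodes them in their commonly used forms. The excerpt cites Wheeler (1969) and the vigour indices without printing the formulas, so the exact expressions here are assumptions (some PDI implementations index the severity grades from 0 rather than 1); the index-I and index-II values computed from the reported S3T1 means do, however, land close to the reported 1039.77 and 312.34.

```python
def per_cent_disease_index(scores, max_score=5):
    # Assumed form of the Wheeler (1969) PDI: sum of severity ratings
    # divided by the maximum possible sum, expressed as a percentage.
    return 100.0 * sum(scores) / (len(scores) * max_score)

def vigour_index_1(germination_pct, seedling_length_cm):
    # Assumed form: germination percentage x seedling length (cm).
    return germination_pct * seedling_length_cm

def vigour_index_2(germination_pct, seedling_dry_weight_mg):
    # Assumed form: germination percentage x seedling dry weight (mg).
    return germination_pct * seedling_dry_weight_mg

# Cross-check against the S3T1 means reported in the text.
print(vigour_index_1(95.75, 10.86))   # ~1039.8 (reported: 1039.77)
print(vigour_index_2(95.75, 3.26))    # ~312.1  (reported: 312.34)

# Hypothetical leaf scores from the 1-5 scale described above.
print(per_cent_disease_index([1, 2, 2, 3, 4, 2]))   # ~46.7 %
```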
2019-04-03T13:09:25.679Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "cee85b01819ec8a9f817b2b8a057e5fbdbf834c3", "oa_license": "CCBYNC", "oa_url": "https://journals.ansfoundation.org/index.php/jans/article/download/944/903", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "cee85b01819ec8a9f817b2b8a057e5fbdbf834c3", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
30231436
pes2o/s2orc
v3-fos-license
Group Organizational Citizenship Behavior in the Stages of Group Development It has been shown that group organizational citizenship behavior (GOCB) has a great impact on group outcomes. Because of its importance on group performance and effectiveness, previous research has explored GOCB from various perspectives. However, very little attention has been paid to the effect of group developmental stages on GOCB. In this article, we develop a theoretical model of GOCB that incorporates stages of group development. By examining GOCB in the context of group developmental stages, our model provides important theoretical and practical implications. Introduction As business environments have become much more complex, organizations have increasingly relied on groups to attain organizational goals and success (Kearney, Gebert, & Voelpel, 2009).Because of the impact of groups on organizational outcomes, research on groups has increased dramatically.Among various research topics, there has been a strong focus on group performance (Kozlowski & Bell, 2003).A potential explanation provided by the literature is that a group can perform better when group members go above and beyond their formal role requirements.This particular phenomenon has generally been labeled as group organizational citizenship behavior (GOCB).Although GOCB has been investigated from various perspectives, how group developmental stages affect the extent to which GOCB is exhibited by a group has been largely ignored. We strive to address this issue by linking stages of group development to GOCB, which we defined as the normative level of organizational citizenship behavior (OCB) performed within a group (Ehrhart, 2004).It is important to note that GOCB in this article refers to OCB at the group level.Thus, GOCB and OCB are conceptually distinct.Specifically, GOCB concerns the extent to which a group as a whole engages in OCB within the group (Chen, Lam, Schaubroeck, & Naumann, 2002).GOCB is important as group and organizational effectiveness are typically affected by collective OCB (Schnake & Dumler, 2003).Given the recent OCB research and the shift to the group-level analysis, (e.g., Choi & Sy, 2010;Ehrhart & Naumann, 2004), our focus will be on GOCB and how the developmental stages of a group contributes to the extent of GOCB exhibited by the group.The inclusion of group developmental stages is important because the nature of interaction among group members and the group developmental phenomena, such as power struggles, conflict, conflict resolutions, and roles and norms, can affect both individual and group behavior (Wanous, Reichers, & Malik, 1984). 
The body of this article is organized as follows.In the second section, we provide a brief review of the literature on the stages of group development.We then develop our theoretical model in the third section.Specifically, we provide arguments on how the different stages of group development affect the extent of GOCB exhibited by a group.To analyze this relationship systematically, we apply Tuckman's (1965) developmental sequence in small groups.As we present the arguments about GOCB in each stage of group development, we specify our propositions that can be tested empirically by future research.The fourth section of this article discusses the implications for future research and future managerial practice as well as the limitations of this article.The final section then concludes this article with a brief summary and how researchers and practitioners can extend the theoretical model offered by this article. Literature review As the complexity of internal and external environments increases, organizations have recognized the importance of the use of small groups (Kearney et al., 2009).It is suggested that the effective use of small work groups can lead to higher job motivation, better decision making, and higher organizational performance (Heinen & Jacobson, 1976;Kaczka & Kirk, 1967;Perry-Smith & Shalley, 2003).Because groups are essential managerial tools (Gersick, 1988), increasing scholarly attention is being devoted to group effectiveness and performance.One stream of research has sought to investigate the antecedents of group performance and effectiveness.For instance, Cannon and Edmondson (2001) showed that shared beliefs about failure in work groups significantly reduce group performance.Chang and Bordia (2001) found that group performance is positively related to group cohesion and group cohesion is an antecedent, not a consequence, of group performance.Meglino, Lester, and Korsgaard (2002) discovered that charismatic leadership behaviors and communication-cooperation processes affect group potency, which in turn leads to high levels of group effort and group performance.Marrone, Tesluk, and Carson (2007) uncovered a positive relationship between team boundary-spanning behavior and team performance.Choi and Sy (2010) demonstrated that GOCB is related to group performance and this relationship is mediated by relation-oriented attributes, such as gender and task-related attributes, such as tenure. 
A second stream of research has attempted to explore the impact of group characteristics on group performance and effectiveness.For example, Chang and Bordia (2001) performed a longitudinal study to examine the relationship among group cohesion, task cohesion, social cohesion, and group performance and revealed that group cohesion significantly affected group performance, task cohesion was a predictor of self-rated performance, and group cohesion was an antecedent, but not the consequence, of group performance.Boone, Van Olffen, and Van Witteloostuijn (2005) studied team financial performance in a decision-making context using team information acquisition, team locus-of-control composition, and leadership structure and found that information acquisition mediated the relationships between locus-of-composition and performance.Ng and Van Dyne (2005) examined a cross-level and group-level model of helping behavior in work groups and discovered that cooperative norms influenced individual helping behavior and the least and the most helpful group members affected group performance significantly.Bezrukova, Jehn, Zanutto, and Thatcher (2009) tested how social category and information-based group faultlines affect the performance of groups and showed that group identification moderated the performance of groups with information-based faultlines.In Choi's (2009) study, three different operationalizations of group-level helping were used to examine their impact on group outcomes.The results demonstrated that diversity in gender and education reduced group-level helping, whereas diversity in tenure increased group-level helping.Moreover, group-level helping was found to be positively related to perceived competence of group members. A third stream of research focuses on macro-level factors.For instance, Erez and Somech (1996) analyzed the degree to which cultural collectivism affects group performance loss and found that group performance loss is less likely to occur in a highly collectivistic subculture than in an individualistic subculture.Sparrowe, Liden, Wayne, and Kraimer (2001) investigated the impact of advice network centrality and density on individual and group performance and showed that the density of a work group's advice network has no impact on group performance.However, the advice network centrality and the density of a work group's hindrance network were found to be negatively related to group performance.Balkundi and Harrison (2006) studied how members' and leaders' social network structures affect team effectiveness and demonstrated that team task performance is positively associated with the density of member's interpersonal ties and the leader's network centrality.Similarly, Parise and Rollag (2010) examined the impact of existing social network structures on group performance and showed that the density of pre-existing work and friendship and emergent work network density affect group performance significantly.Frontiera (2010) employed a qualitative study that was designed to identify successful organizational culture change cycle.He identified five themes (i.e., symptoms of a dysfunctional culture, my way, walk the talk, embedding new culture, and our way) that formed an initial model for organizational culture change, which leads to effective professional sport team performance. 
The above three research streams have provided organizations and management with various recommendations and suggestions on approaches for group effectiveness and performance.Among various research efforts devoted to group performance and effectiveness, an increasing amount of research has investigated the phenomenon from a group behavioral perspective.In this perspective, one of the important research focuses is GOCB.It is suggested that GOCB is a product of various group functions such as the development of norms and social interaction among members, inter-member relationships, and interpersonal dynamics (Choi & Sy, 2010;Ehrhart, 2004;Ehrhart, Bliese, & Thomas, 2006).Because OCB "in the aggregate" affects group outcomes (Organ, 1988), the study of OCB at the group level provides us with a better understanding of OCB as it was originally theorized as a group-level phenomenon (Organ & Ryan, 1995). Given the perceived importance of GOCB, previous research has explored GOCB and its outcomes extensively.For instance, Podsakoff, Ahearne, and MacKenzie (1997) examined the effect of OCB on the quantity and quality of the work group performance and found that both helping behavior and sportsmanship improve performance quantity while helping behavior affects performance quality.Chen, Lam, Naumann, and Schaubroeck (2005) applied Chan's (1998) referent-shift consensus model and showed that GOCB is positively related to procedural justice climate and work group leadership support.Moreover, group cohesiveness and group-organizational goal congruence were found to be the predictors of GOCB.Choi (2009) analyzed the effect of group characteristics on group-level helping and demonstrated that within-group diversity in gender and education decreases group-level helping, supportive and transformational management increase group-level helping, and trustworthiness of group members positively affects group-level helping.Choi and Sy (2010) investigated antecedents and intermediate processes that predict GOCB in small work groups and found that task conflict increases GOCB, whereas relationship conflict decreases GOCB.Gong, Chang, and Cheung (2010) employed a collective social exchange approach to study collective OCB and demonstrated that a high performance work system is positively associated with collective OCB through collective affective commitment.Shinh and Choi (2010) studied the effect of group-organizational fit and group-task fit on GOCB and found that the positive relationship between group-organization fit and GOCB is positively mediated by cohesion, whereas group-task fit has a direct influence on GOCB. Although research on GOCB has increased dramatically in the past few years (Choi & Sy, 2010), what has been missing in the literature is the relationship between group developmental stages and GOCB.Stages of group development are essential when examining GOCB as different sets of interpersonal relationships and task behaviors are exhibited in different group developmental stages.In addition, groups are generally changing in social and work processes throughout time (Miller, 2003).From this standpoint, it is important to include stages of group development when examining GOCB. 
Theoretical model We intend to explore the missing piece in the GOCB literature.More specifically, we develop a theoretical model describing the extent of GOCB exhibited by a group in each of the developmental stages.From among various group development models, we apply Tuckman's (1965) stages of small group development as it has been considered the most widely recognized model in the literature (Cissna, 1984;Gersick, 1988;Miller, 2003;Worchel, 1994).Tuckman's (1965) initial small group development model identifies four sequential stages resulting in effective group functioning.These four stages include the forming, storming, norming, and performing stage.In 1977, Tuckman and Jensen added a fifth stage, the adjourning stage, to the original model.The first stage of group development is the forming stage.In this stage, group members are introduced to the task and members.In the second stage, the storming stage, interpersonal conflicts occur and group members become hostile.After successfully moving into the norming stage, group members establish group norms and develop group cohesion.Then, the group moves into the fourth stage, the performing stage.In this stage, group members become flexible and adaptive in terms of their roles and functions, which result in effective group performance.The last stage of group development is called the adjourning stage.The addition of the adjourning stage to Tuckman's (1965) model intends to reflect group termination and separation. Thus far, we have very briefly presented the stages of group development using Tuckman's (1965) and Tuckman and Jensen's (1977) models.We now address the relationship between each stage of group development and GOCB and derive testable propositions in detail.These propositions are developed in accordance with our theoretical model and for future empirical research that will uncover GOCB exhibited in the stages of group development.Figure 1 shows our proposed theoretical model. Stage 1: The forming stage The first stage of Tuckman's model is called the forming stage.In this stage, task and rules are introduced to group members.In order to perform in the group, members assess the tasks, social behaviors, and norms within the group.In addition, members identify the nature and boundaries of the tasks and determine what resources are required for the tasks (Bonebright, 2010;Miller, 2003).Because group members test the degree of dependency and inclusion and exchange personal stories that are often considered irrelevant to the group goals, group productivity in the forming stage tends to be low (Wheelan, 2009). 
In addition to assessing social and task behaviors, group members may elect leader(s) and establish relationship with the leader(s) (Bonebright, 2010).Although the emergence of leadership can be viewed as an initial step of establishing group structure, group members in the forming stage tend to have the characteristics of relying on the leader(s), experiencing anxiety, and having concerns about inclusion (Wheelan & Conway, 1991).Because the group is just formed and tasks are just introduced, the stability of group memberships is low and expectations of interpersonal relationships are ambiguous.Group members would exhibit moderate levels of OCB in order to stabilize the memberships and to create an expectation of positive interpersonal relationship.Moreover, because group members in the forming stage are also characterized by dependence and testing (Tuckman, 1965;Wheelan & Conway, 1991), moderate levels of GOCB would be observed as they attempt to solidify or preserve the interpersonal relationships (Van Dyne, Cummings, & McLean Parks, 1995).This aspect is important for a group in the forming stage to move into subsequent developmental stages. From the impression management perspective (see Bolino, 1999;Hui, Lam, & Law, 2000;Rioux & Penner, 2001), group members might engage in OCB in order to cultivate positive impressions.Because group members in the forming stage are expected to work collaboratively in the future, exhibiting certain levels of OCB then become an effective means for a group member to strengthen the impression of his or her positive work behaviors, such as being helpful and capable, onto other group members.This argument supports our first proposition: Proposition 1: Moderate levels of group organizational citizenship behavior will be exhibited in the forming stage of a group. Stage 2: The storming stage Once a group moves into the storming stage, group members experience intergroup conflicts (Tuckman, 1965).Specifically, groups in this stage are characterized by lack of unity, polarization, conflicts among members and between members and leader(s), and disagreements among themselves (Bonebright, 2010;Wheelan, 2009;Wheelan & Conway, 1991).Because of the presence of conflicts, group members question the existing norms and search for new unified goals, norms, and values, which are essential for the group to accomplish assigned tasks and move into the subsequent stages. 
Conflict in this article refers to disagreements about personal preferences and interpersonal interactions among group members (Jehn, Northcraft, & Neale, 1999).Because of its impact on interpersonal relationships, a few studies have investigated the relationship between conflict and OCB or GOCB.For example, De Dreu and Van Vianen (2001) found that group-level helping and compliance are negatively affected by conflict.Hodson (2002) performed in-depth observational studies to identify the nature and consequences of management of citizenship behavior.He found that the conflict between employees and managers and infighting among employees are reduced when behaviors that conform to prevailing norms for leadership and for respecting worker's right are exhibited.Tjosvold, Hui, Ding, and Hu (2003) analyzed the impact of conflict behavior and conflict attitude on OCB and found that positive conflict attitude and conflict approach behavior positively affect courteous and conscientious behaviors.Choi and Sy (2010) examined small work groups in various industries and found that task conflict increases GOCB in a work group, whereas relationship conflict reduces GOCB in a work group. The above review of the literature on conflict and citizenship behavior reveals that citizenship behavior can be exhibited when conflict is task-based versus relationship-based.Because group members in the storming stage tend to exhibit emotional responses to one another (Bonebright, 2010), conflicts in the storming stage are often considered interpersonal relationship conflicts rather than task conflicts.Since relationship conflicts have a negative impact on citizenship behavior, we expect that there will be no or low levels of GOCB when a group is in the storming stage.This leads to our second proposition. Proposition 2: No or low levels of group organizational citizenship behavior will be exhibited in the storming stage of a group. Stage 3: The norming stage Although conflict is generally viewed as destructive, a group is unable to develop culture, structure, and cohesion without it (Deutsch, 1971;Lewin, 1936;Theodorson, 1962).Thus, after navigating conflicts successfully, a group then enters the third phase, the norming stage.In this stage, group members establish roles and norms, work as an entity, develop feelings, and seek to maintain and perpetuate the group (Bonebright, 2010;Tuckman, 1965).Moreover, because group members work through conflicts that occurred in the storming stage, they show high levels of trust to other members, commitment to the group, and willingness to cooperate (Wheelan, 2009).Furthermore, norms developed in this stage facilitate the development of interpersonal relationships, which in turn enhances group cohesiveness, openness, and information exchange (Miller, 2003). Previous studies have sought to investigate the relationship between group cohesion and OCB or GOCB.For instance, George and Bettenhausen (1990) suggested that cohesiveness could have a positive impact on OCB as it affects group members' affective states.Kidwell, Mossholder, and Bennett (1997) claimed that OCB is facilitated in highly cohesive groups as cohesiveness promotes group social identity and members' desires to help one another.Ehrhart et al. 
(2006) studied military units and found a positive relationship between unit cohesion and unit-level helping behavior.Employing structural role theory, Lamertz (2006) demonstrated that cohesiveness is related to OCB.Studying service employees, Frenkel and Sanders (2007) discovered that group cohesion has a strong direct effect on co-worker assistance.Although the literature has largely supported the relationship between group cohesiveness and OCB, Kidwell et al. (1997) emphasized that this relationship is dependent on group members' perceptions of whether OCB is important to group functioning.Because group members in the norming stage emphasize the development of shared values, norms, and work models and the establishment of effective work methods (Bonebright, 2010;Tuckman, 1965), one could expect that group members would be very concerned with group functioning and its effectiveness.From this perspective, high levels of GOCB are expected to be exhibited in the norming stage of a group. In addition to group cohesiveness, trust and commitment are found to be significant predictors of OCB.For example, when investigating the impact of demographic dissimilarity on OCB, Chattopadhyay (1999) showed that trust in peers positively mediates the negative relationship between demographic dissimilarity and OCB.Examining employees in the aerospace industry, Coyle-Shapiro, Morrow, Richardson, and Dunn (2002) demonstrated that trust is positively associated with OCB and employee cooperation.Studying service employees in the financial service industry and food services industry, Donavan, Brown, and Mowen (2004) found support for the positive relationship between commitment and OCB.Restubog, Hornsey, Bordia, and Esposo (2008) conducted both longitudinal and cross-section research using working adults in business organizations and employees in public organizations and reported that trust fully mediates the relationship between psychological contract breach and OCB.Lavelle et al. (2009) analyzed medical clinic employees and found that commitment is positively related to OCB. Our above review of the literature suggests that when group members have high levels of trust, commitment, and willingness to cooperate, group members tend to exhibit high levels of collective OCB.Because high levels of trust, commitment, cohesion, and willingness to cooperate are generally found in the norming stage of a group (Bonebright, 2010;Miller, 2003;Tuckman, 1965;Wheelan, 2009), one could expect that high levels of GOCB would be exhibited when a group is in the norming stage.This suggests a third proposition: Proposition 3: High levels of group organizational citizenship behavior will be exhibited in the norming stage of a group. Stage 4: The performing stage The final stage of Tuckman's (1965) small group development model is labeled as the performing stage.In this stage, group members become flexible and adaptive in terms of their roles and functions, which in turn enhance task performance and facilitate group energy (Tuckman, 1965).Groups in the performing stage are characterized by emphasis on functional roles, task activities, task performance, and problem solving (Bonebright, 2010;Miller, 2003;Tuckman, 1965).As group members in the performing stage direct much attention to the tasks and little attention to emotional interactions, group productivity is expected to be high. 
Previous studies have discussed and examined the relationship between citizenship behavior and productivity at the unit level.For instance, Organ (1988) claimed that OCB is aggregated over individuals and time and thus should have a positive impact on group-level productivity.George (1990) further commented that the analysis of GOCB and productivity at the group level is plausible.Similarly, Podsakoff, MacKenzie, Paine, and Bachrach (2000) noted that the impact of collective OCB on group performance and productivity is likely to occur as group members are able to develop the best practices through helping other members.Recent empirical research has also shown the positive relationship between productivity and GOCB.For example, when exploring the effect of GOCB on the work group and the organization, Chen et al. (2005) found that GOCB is positively associated with group performance and negatively related to turnover intentions.Bachrach, Powell, Collins, and Richey (2006) conducted a laboratory study analyzing the effect of task interdependence on the impact of helping and group performance and found that the helping form of GOCB leads to better group performance, but this relationship depends on the level of task interdependence.Bommer, Dierdorff, and Rubin (2007) compared the effect of group-level citizenship behavior and individual-level citizenship behavior on job performance and discovered that group-level citizenship behavior significantly moderates the relationship between individual-level citizenship behavior and job performance.Podsakoff, Blume, Whiting, and Podsakoff (2009) conducted a meta-analysis and examined the consequences of OCB at the individual and organizational level.These researchers observed a strong positive relationship between OCB and unit-level performance.Another meta-analytical study performed by Whitman, Van Rooy, and Viswesvaran (2010) showed that unit-level OCB has a moderately strong relationship with unit-level performance. Our review of the previous studies reveals that there is a positive relationship between GOCB and group performance.Given high levels of group performance and productivity can be found in the performing stage of a group, one could expect that high levels of GOCB will be exhibited in the performing stage of a group.This supports our fourth proposition: Proposition 4: High levels of group organizational citizenship behavior will be exhibited in the performing stage of a group. Discussion We have intended to develop a theoretical model describing the extent of GOCB exhibited in each developmental stage of a group.Our purpose is to establish a GOCB model that includes different group characteristics affected by its development in time.This approach has been largely ignored in the GOCB literature.Specifically, the majority of previous GOCB studies have overlooked the importance of group developmental stage.This could result in overestimating or underestimating GOCB as we have argued earlier that a group can exhibit different levels of GOCB in different group developmental stages.Thus, our basic assumption is that the stages of group development can affect the extent of GOCB exhibited by a group. 
Regarding the relationship between group development and GOCB, we have argued that different levels of GOCB will be exhibited in different group developmental stages.Specifically, in the forming stage, moderate levels of GOCB will be exhibited.This is because a group is just formed and tasks are just introduced; some GOCBs, such as courteous and conscientious behaviors, may be needed in order to stabilize group memberships and task and behavioral expectations.However, high levels of GOCB will not be exhibited in the forming stage as group members are still engaging in certain behaviors that are not relevant to the task such as exchanging personal stories (Wheelan, 2009).Once the group moves into the storming stage, the presence of intergroup conflicts and hostility impedes and minimizes GOCB.In the norming stage, norms are established and cohesion is developed as group members show high levels of acceptance for other members' opinions and feelings (Bonebright, 2010).Thus, it is expected that higher levels of GOCB will be exhibited in the norming stage of a group.In the performing stage, group members develop adaptive functional roles and relatedness (Tuckman, 1965).Task performance, therefore, is facilitated.Because the GOCB literature has largely supported the positive association between GOCB and task performance, it is expected that high levels of GOCB will be exhibited in the performing stage of a group. Implications for empirical research In this section, we provide the implications of our theoretical model for future empirical research.Proposition 1 to 4 could be tested by employing a survey and/or an in-depth interview approach.Specifically, future researchers can conduct an experimental study by forming a task group.After a group is formed and members and tasks are introduced, GOCB then can be measured.Discussions on the procedures of measuring OCB at the group level can be found in Chen et al. (2005), Euwerna, Wendt, and van Emmerik (2007), and Choi and Sy (2010).Moreover, the literature has suggested two techniques to obtain GOCB.Specifically, future research can measure GOCB using group descriptive items or using individual/self-referenced items (see Klein, Conn, Smith, & Sorra, 2001).When using individual/self-referenced items, both self-reported and supervisor-rated survey can be employed as both techniques have been used in the literature extensively (e.g., Bommer, Miles, & Grover, 2003;Ilies, Scott, & Judge, 2006;Podsakoff et al., 1997;Restubog et al., 2008).After group members work through the forming stage and start to experience conflicts, fights, and emotional disagreements, researchers then can use self-reported or supervisor-rated survey to obtain GOCB exhibited in the storming stage and test proposition 2. To test proposition 3, researchers need to observe whether conflicts have been successfully navigated, whether group norms have been established, and whether shared mental models and most effective ways to work have been discovered by group members as described in Neuman and Wright (1999) and Ehrhart and Naumann (2004).If a group successfully demonstrates group characteristics shown in Tuckman's (1965) model, a self-or supervisor-rated survey and/or in-depth interview then can be employed to obtain GOCB in the norming stage. 
To evaluate proposition 4, the presence of "functional role relatedness" (Tuckman, 1965: 387) is essential.Specifically, group members play and adapt roles that will enhance task outcomes (Bonebright, 2010).In other words, the key characteristics of a group in the performing stage are group members' abilities and willingness to engage in problem-solving and adaptive behaviors.To assess these characteristics, researcher can apply the survey methods discussed in Manners (1975), Hendrick (1979), Waller (1999), andJohnson, Hollenbeck, Humphrey, Ilgen, Jundt, andMeyer (2006).Once a group demonstrates the characteristics of the performing stage described in Tuckman's (1965) model, a self-or supervisor-reported survey and/or in-depth interview can be administered. So far, we have discussed how future empirical studies can be conducted to validate our propositions.The key determinant of testing the propositions is whether a group has successfully demonstrated characteristics in each stage described in Tuckman's (1965) model.As mentioned earlier, Tuckman and Jensen (1977) provided an updated model of his original small group development model by adding a fifth stage, "the adjourning stage".The adjourning stage describes the final stage of a group, separation and termination.It is suggested that the adjourning stage in group development is an important phenomenon to study as strong interpersonal feelings and emotions can still be exhibited (Tuckman & Jensen, 1977).Because of its importance, the characteristics in the adjourning (some studies use separation, termination, disengagement, or ending) stage has been discussed in various small group development models (e.g., Bratten, 1975;Gibbard & Hartman, 1973;Yalom, 1970).However, we do not discuss the degree of GOCB exhibited in the adjourning stage as it has limited relevance to the GOCB literature and practice. Implications for practice If validated by future empirical studies, our theoretical model could have important practical implications.First, understanding GOCB in the context of group development may provide insight into group performance and effectiveness as organizations and managers can employ organizational practices that facilitate group developmental process, which in turn fosters an environment where GOCB is maximized.For instance, Tuckman's (1965) model suggests that group members establish roles and norms in the norming stage.Meanwhile, we have argued earlier, higher levels of GOCB will be exhibited in the norming stage of a group.Thus, organizations and managers can facilitate and enforce the formation of norms in order to help a group move into the norming stage more quickly.As suggested by Feldman (1984), organizations and managers can foster the development of group norms by explicitly stating norms related to group survival.It is also suggested that powerful group members can facilitate norm development (Whyte, 1955).From this standpoint, managers can proactively and explicitly state group norms and/or can identify powerful group members to establish group norms. 
Second, by understanding the degree and the various dimensions (i.e., sportsmanship, helping behaviors to include courtesy and conscientiousness, altruism and civic virtue) of GOCB exhibited in each stage of group development, managers will be able to manage group performance more effectively.Specifically, it has been suggested that GOCB is positively associated with group performance (e.g., Nielsen, Hrivnak, & Shaw, 2009;Podsakoff et al., 1997).Thus, managers can encourage GOCB by implementing managerial strategies that are suitable to the developmental stage of a group.For example, in the storming stage, groups are characterized by intergroup conflicts that result from interpersonal issues and the focus of individuality (Tuckman, 1965); and as a result, GOCB is discouraged.Managers in the storming stage then can assign group tasks that are irrelevant to the group purposes but bring unity to the group, such as assessing outgroup products (Brown, Schmidt, & Collins, 1988).Moreover, by minimizing dyadic communication, group conflicts can be reduced (Swabb, Phillips, Diermeier, & Medvec, 2008).Besides the shifting of a group's focus away from interpersonal conflict towards a task orientation in the storming phase, manager's can also help better prepare groups for the storming phase by reinforcing the GOCB dimensions of courtesy and conscientiousness through training and the use of peer-based reward systems (Stewart, Courtright, & Barrick, 2009;Erez, Lepine, & Elms, 2002) in the forming stage. By exploring GOCB in each stage of group development, we provide another perspective to understand GOCB as groups generally go through various developmental stages.We recognize there are various theoretically relevant small group development models in the literature (e.g., Gersick, 1988;Heinen & Jacobson, 1976;Miller, 2003;Wanous et al., 1984).We use Tuckman's (1965) four-stage small group development model as it has been widely recognized in the literature and practice.However, this article is not without limitations.A first limitation is related to Tuckman's (1965) model itself. Specifically, Tuckman (1965) suggested that his model does not represent group development in all settings as his samples were drawn mainly from the therapy groups.Thus, the model developed by Tuckman (1965) might not be applicable to other types of work groups.However, we believe that the use of Tuckman's (1965) model is appropriate as it has been the most commonly referred to and the most widely recognized theory in the literature (Miller, 2003).In addition, the model has been proven to be useful for practice (Bonebright, 2010) and for theory development (Rickards & Moger, 2000). 
When examining group process and development, previous research has shown that factors outside a group could have a great impact on the group. For instance, leadership styles and leader-follower relations have been demonstrated as having a great impact on GOCB (e.g., Boerner, Dutschke, & Wied, 2008; Bowler, Halbesleben, & Paul, 2010; Ehrhart, 2004; Euwerna et al., 2007). Moreover, it has been shown that organizational factors such as organizational justice (e.g., Niehoff & Moorman, 1993), organizational commitment (e.g., Lavelle et al., 2009), and organizational learning (e.g., Somech & Drach-Zahavy, 2004) could affect GOCB significantly. Thus, a second limitation of this article is that it does not account for those factors. However, our main objective is to introduce a new perspective when investigating GOCB. Future research that includes group- and organizational-level factors, therefore, is needed to validate and strengthen our theoretical model. A final limitation is that our theoretical model focuses primarily on GOCB as the dependent variable, with limited discussion of the dimensions of GOCB, which include altruism, helping, courtesy, conscientiousness, sportsmanship, and civic virtue (Organ, 1988; Organ, Podsakoff, & MacKenzie, 2006). Because groups in different developmental stages are likely to exhibit certain GOCB dimensions, future theoretical and empirical research is needed to assess the distribution of these dimensions throughout the various group developmental stages. Despite the potential limitations, this article provides important implications for empirical research and practice.

Conclusion

We have sought to develop a theoretical model that explains GOCB by incorporating Tuckman's (1965) small group development model. This emphasis has been neglected in the GOCB literature. Thus, we believe that GOCB can be conceptually better understood when group developmental transitions and time are included. We provide the theoretical model and the propositions that guide future theoretical and empirical research. In addition, we offer managers and organizations suggestions and recommendations on how the proposed theoretical model and propositions can be used to enhance group performance through encouraging high levels of GOCB.

Yalom, I. (1970). The theory and practice of group psychotherapy. New York, NY: Basic Books.

Figure 1. Proposed Theoretical Model
2017-09-09T01:43:47.829Z
2011-09-29T00:00:00.000
{ "year": 2011, "sha1": "c508473487c9267a4792fef8c15899d4b78e823c", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/ijbm/article/download/10238/8767", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "c508473487c9267a4792fef8c15899d4b78e823c", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Economics" ] }
219983459
pes2o/s2orc
v3-fos-license
Analysis of Red Blood Cell Parameters in Dogs with Various Stages of Degenerative Mitral Valve Disease Abstract Introduction Although peripheral blood analysis has become increasingly automated, microscopy is the only available method for the diagnosis of anisocytosis and poikilocytosis. The aims of the study were to compare RBC volume data obtained with two different analysers and by manual assessment of smears and to compare this data between dogs in various stages of heart failure secondary to degenerative mitral valvular (DMV) disease. The impact of diuretic administration on RBC morphology was also assessed. Material and Methods Sixty-eight dogs, 56 in different stages of DMV disease and 12 as healthy controls, were studied. Impedance and flow cytometry haematological analyses were performed for each animal. Additionally, two smears were prepared for manual analysis. RBC structure, staining, and size differences were recorded. Results There were no significant differences between the blood morphological parameters assessed using haematological analysers nor between dogs receiving diuretic treatment and those not treated. Based on the manual smear, significantly higher erythrocyte anisocytosis was observed in the dogs with symptomatic DMV disease than in the control group. Conclusion Haematological analysers based on impedance and flow cytometry provide reliable and comparable morphological results in dogs with heart failure. However, microscopic assessment of blood smears is a more reliable tool to detect erythrocyte anisocytosis. Introduction Although peripheral blood analysis has become increasingly automatised, microscopic blood analysis remains an important tool in haematological diagnostics and is the only available method for the diagnosis of anisocytosis and poikilocytosis. Many morphological abnormalities can only be detected by the human eye. Hence, morphological blood analysis is offered by most laboratories, as it is a fast, cheap, and widely available technique. The assessment of red blood cell morphology provides valuable information on the functioning of many organs (such as the liver, spleen, and kidneys) and the causes of anaemia. Changes in erythrocyte morphology may be caused by dietary deficiencies or protein dysfunctions as well as disorders in the lipid composition of cell membranes (1,5,14,20). Mechanical factors may damage the cell membrane, leading to erythrocyte fragmentation (20). Red blood cell indicators, such as the mean corpuscular volume (MCV), mean corpuscular haemoglobin, mean cell haemoglobin (MCH), mean corpuscular haemoglobin concentration (MCHC), and red blood cell distribution width (RDW) provide information concerning the size and shape of erythrocytes. MCV is particularly useful in differentiation of anaemia. In dogs, the MCV ranges from 60 to 75 fL (17). An increased MCV value may suggest the presence of large erythrocytes (macrocytes), while a decreased MCV value suggests the presence of small red blood cells (microcytes). The presence of microcytes as well as macrocytes may occur with a normal MCV as this parameter describes the mean erythrocyte volume. MCH and MCHC determine the erythrocyte mass and haemoglobin concentration, respectively. Decreased values indicate hypochromasia, which is visible in a manual blood smear as central pallor within an erythrocyte. Increased MCH and MCHC values are observed in the presence of intra-erythrocyte Heinz bodies as well as in lipaemia and haemolysis. 
The RDW is a parameter that is routinely assessed by a haematological analyser and is a marker of anisocytosis, i.e. the concomitant occurrence of normocytes, macrocytes, and microcytes in a sample. The RDW is usually increased if there are iron, vitamin B12, or other nutritional deficiencies. It may also be greater following blood transfusions in inflammatory or renal disease or when excessive erythrocyte lysis occurs (11). In humans, RDW is an important diagnostic marker of heart disease (3,4,7) and correlates with patient morbidity and mortality (7). It is also considered a useful prognostic marker for predicting mortality in patients undergoing high-risk gastrointestinal surgery (15). Increased RDW has been observed in dogs with pulmonary hypertension (13). In dogs, there are breed differences in the erythrocyte volume. Greyhounds have a physiologically higher MCV compared to other breeds (16). Macrocytosis is often observed in miniature dog breeds and standard poodles (19), while microcytosis is seen in Asian dog breeds, such as the Akita or Shiba Inu (22). An increase in anisocytosis is observed in regenerative and non-regenerative anaemia caused by dyserithropoiesis (9). Normal erythrocytes are the shape of a doubleconcave disc, the so-called discocyte, and are characterised by little variation in shape and size. Because of their size, the surface-to-volume ratio is greater than in spheroidal cells. This causes more effective gas exchange between cells and their surroundings and ensures high erythrocyte deformability. In effect, this allows erythrocytes to pass through narrow blood vessels and return to their correct shape. Erythrocytes adopt various shapes depending on the disease, these shapes being known as poikilocytes. They may be divided according to shape into echinocytes, acanthocytes, schistocytes, dacrocytes, codocytes, keratocytes, ovalocytes, stomatocytes, and spherocytes. Echinocytes contain numerous irregular protrusions on their cell surface. They are present in glomerulonephritis, uraemia, lymphomas, alkalosis, and following the administration of furosemide and doxorubicin. They may appear as artefacts if the collected blood volume is too low in relation to the EDTA content in the vial (5). Acanthocytes contain from two to ten characteristic projections, do not contain central pallor and are a non-specific sign of disorders in lipid metabolism. Their presence is usually associated with liver disorders, as well as DIC, angiosarcoma, and nephritis in dogs (5,10). Schistocytes, which are fragments of erythrocytes, are characteristic of disseminated intravascular coagulation syndrome (DIC) and are observed in iron deficiency, chronic kidney failure, vasculitis, and heart valvular disease (5,23). Dacrocytes, which are drop-shaped erythrocytes, occur in myeloproliferation and splenomegaly. They are also present in ruminants with iron deficiency (6). Codocytes, also known as target cells, have an excessive cell membrane compared to the cell haemoglobin content. They occur in association with liver damage, kidney disease, and bile duct and spleen disorders. They may be present in iron deficiency anaemia (5,23). Keratocytes are characterised by the presence of pseudovacuoles in the periphery of the cell, which may form a cavity within the erythrocyte. They form as a result of cell membrane damage following doxorubicin administration and are also observed in iron deficiency anaemia and liver disease (6,9). 
Ovalocytes are oval erythrocytes that may appear in the blood in the presence of kidney and liver diseases (6). The same shape usually describes the central pallor in stomatocytes, which may appear in increased erythropoiesis and a hereditary disease known as stomatocytosis (occurring in Alaskan dwarf Malamutes and miniature schnauzers) (6). Spherocytes, which are small cells without a central pallor, are characteristic of hereditary spherocytosis and autoimmune haemolytic anaemia (9). Although manual smears and their analysis are frequently performed in dogs, the changes in the shape of erythrocytes in various stages of heart failure and caused by drugs such as diuretics have not been previously assessed. The primary aim of the study was to compare the morphological data on erythrocyte volume obtained with two different analysers and manual assessment of smears. The secondary aim of the study was to compare the erythrocyte volume between dogs in various stages of heart failure in the course of chronic mitral valve disease. In addition, the impact of diuretic administration on the erythrocyte morphology was assessed. Material and Methods The study was performed on 68 dogs (43 males and 25 females) with a mean age of 10.8 ± 2.5 years. Following the cardiac examination that consisted of learning the patient history and carrying out a physical examination, chest radiography, and transthoracic echocardiography, the dogs were assigned to study groups. Group A included healthy controls and comprised 12 dogs with a mean age of 8.2 ± 3.0 years. Two mixed-breed dogs, five Beagles, two Border collies, one Nova Scotia duck tolling retriever, one miniature schnauzer, and one Cavalier King Charles spaniel were the group's members. Dogs with different stages of degenerative mitral valvular (DMV) disease were assigned according to the American College of Veterinary Internal Medicine (ACVIM) consensus guidelines classification for DMV disease into respective groups. Group B1 included asymptomatic dogs with no echocardiographic evidence of cardiac enlargement secondary to DMV disease as well as those with minimal cardiac remodelling not severe enough to qualify them for group B2. This group was made up of eight dogs and one bitch. Group B2 consisted of asymptomatic dogs with echocardiographic findings of heart enlargement: left atrium-to-aorta ratio of ≥1.6 and left ventricular internal diameter in diastole ≥1.7 normalised for body weight. The sex distribution was six dogs and four bitches. Overall for group B, the mean age was 10.9 ± 2.6 years and the breeds represented were mixed (six animals), dachshund (four), miniature schnauzer (three), shih-tzu (two), fox terrier, cocker spaniel, Yorkshire terrier, and Cavalier King Charles spaniel (one of each breed). Group C included dogs with past clinical signs of heart failure secondary to DMV disease and its size was 27 animals, these being 18 dogs and 9 bitches aged 11.4 ± 1.9 years. The group combined 12 mixed breed dogs, four miniature schnauzers, three Cavalier King Charles spaniels, three shih-tzu, two dachshunds, one medium poodle, one Pekingese, and one bull terrier. Group D included 10 canine patients with end-stage DMV disease with clinical signs of heart failure refractory to standard pharmacological treatment. Aged 11.8 ± 1.9 years as the mean, the animals had breed allegiances of mixed (four), Cavalier King Charles spaniel, bull terrier, Yorkshire terrier, miniature poodle, dachshund, and Pekingese (one of each). 
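The group assignment described above follows the ACVIM staging scheme as it is summarised in this study. The Python sketch below is a simplified, hypothetical encoding of that assignment for illustration only: it uses just the echocardiographic thresholds and clinical-sign distinctions stated in the text and omits other elements of the full ACVIM consensus (such as murmur characteristics), so it should not be read as the complete staging algorithm.

def acvim_stage(la_ao, lviddn, current_or_past_chf_signs, refractory_to_standard_treatment):
    """Simplified ACVIM-style staging for dogs already diagnosed with DMV disease.
    la_ao  : left atrium-to-aorta ratio from echocardiography
    lviddn : left ventricular internal diameter in diastole, normalised for body weight
    """
    if current_or_past_chf_signs:
        # Stage D is reserved for end-stage disease refractory to standard therapy.
        return "D" if refractory_to_standard_treatment else "C"
    # Asymptomatic dogs: stage B2 requires BOTH remodelling criteria used in this study.
    if la_ao >= 1.6 and lviddn >= 1.7:
        return "B2"
    return "B1"

# Hypothetical examples
print(acvim_stage(1.8, 1.9, False, False))  # -> "B2"
print(acvim_stage(1.4, 1.5, True, True))    # -> "D"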
Dogs with acute heart failure were not included in this study (2,12). Venous blood was collected from all the patients for a haematological examination. It was collected into two sample tubes with dipotassium ethylenediaminetetraacetic acid (K2EDTA) maintaining the correct anticoagulantto-blood ratio. Next, the blood was gently mixed using a haematological mixer. Two haematological analyses on two different apparatuses were performed within 30 min of blood collection. The first analyser (Scil Vet Animal Blood Counter (ABC), Horiba ABX, Montpellier, France) exploits blood impedance and the second (LaserCyte Dx, IDEXX Laboratories, Westbrook, MN, USA) is based on flow cytometry. Two manual smears were prepared on glass slides immediately after the haematological analysis. They were dried for 12 h at room temperature in a dust-free room, and then stained using the Pappenheim method and the May-Grunwald and Giemsa stains. First, the samples were immersed in 1mL of an undiluted May-Grunwald solution for 5 min. Then, they were left in 1mL of pH 7.2 buffer for 3 min. Next, they were immersed in 1:9 diluted Giemsa solution and stained for 20 min. The dye was then rinsed off with water and the samples were left to dry at room temperature. The samples were initially examined under low magnification (10×) to determine the quality of the smear, the distribution of erythrocytes and leukocytes, the presence of cell aggregations, and free spaces between cells. The size, shape, and staining of the erythrocytes were assessed under high magnification (40×). Then, a single erythrocyte layer was examined under very high magnification (100×), where half of the cells were in contact with one another. The leukocytes were counted according to the Schilling differential cell count. The erythrocyte structure and staining and size differences were recorded. Anisocytosis and poikilocytosis were examined using a 4-grade scale proposed by John W. Harvey (10,24) (Table 1). This scale provides a means of grading anisocytosis and poikilocytosis severity into four levels. A score of 1+ denotes finding the smallest number of cells with changed morphology and a score of 4+ signifies finding the largest number of these cells according to the stepped ranges presented in Table 1. If echinocytes were present, the smear was repeated and dried using two techniques. The first technique was immediate smear drying following its preparation. When the second technique was used, the smear was left to dry freely, which eliminated the possibility of incorrect smear preparation as a cause for echinocyte formation. The data underwent statistical analysis using the GraphPad Prism 5.0 package (GraphPad, San Diego, CA, USA). The Shapiro-Wilk normality test was performed to assess data normality. Two groups of related variables were assessed using the paired t-test, and any correlations between groups of variables were determined using the Pearson correlation coefficient. P ≤ 0.05 was considered statistically significant. Results Full blood and blood serum from the dogs were analysed. The results of complete blood counts in individual groups are presented in Table 2, and those of the semi-quantitative microscopic erythrocyte analysis are presented in Table 3. There were no statistically significant differences in the morphological parameters between the studied groups ( Table 2). 
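The statistical workflow described in the Material and Methods above (normality testing, paired comparison of the two analysers, and correlation between their results) can be reproduced with standard scientific Python libraries. The snippet below is a minimal sketch of that workflow; the paired MCV values are hypothetical placeholders, and the original analysis was performed in GraphPad Prism rather than Python.

import numpy as np
from scipy import stats

# Hypothetical paired MCV readings (fL) from the two analysers for the same dogs
mcv_impedance = np.array([68.0, 71.5, 65.2, 70.1, 66.8, 72.3])
mcv_flow_cyt  = np.array([67.4, 72.0, 64.9, 69.5, 67.2, 71.8])

# 1. Shapiro-Wilk test of normality on the paired differences
w_stat, w_p = stats.shapiro(mcv_impedance - mcv_flow_cyt)

# 2. Paired t-test comparing the two related sets of measurements
t_stat, t_p = stats.ttest_rel(mcv_impedance, mcv_flow_cyt)

# 3. Pearson correlation between the two analysers
r, r_p = stats.pearsonr(mcv_impedance, mcv_flow_cyt)

alpha = 0.05  # P <= 0.05 considered statistically significant, as in the study
print(f"Shapiro-Wilk P = {w_p:.3f}; paired t-test P = {t_p:.3f}; Pearson r = {r:.3f} (P = {r_p:.4f})")
print("Significant analyser difference" if t_p <= alpha else "No significant analyser difference")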
In addition, there were no significant differences between the values of the blood morphological parameters assessed using the LaserCyte Dx (IDEXX) and the Scil Vet ABC analyser. The MCV in dogs is much smaller than in humans, which may render changes in the RDW less evident (10). For this reason, it can be assumed that this marker has less diagnostic value in dogs. A positive correlation of the erythrocyte results obtained with both types of analysers for all the patients (r = 0.865, P < 0.0001), only healthy patients (r = 0.904, P < 0.0001), and only patients with heart failure (r = 0.841, P < 0.0001) was observed. There were no differences in the number of erythrocytes (P = 0.86) or RDW (P = 0.95) between dogs receiving furosemide and those with no diuretic treatment (P = 0.12). Based on the manual smear, significantly higher erythrocyte anisocytosis was observed in dogs with symptomatic DMV disease (groups C and D) compared to the control group (P < 0.0001) (Table 3). In the control group, a 1+ degree of anisocytosis was observed in two out of eight specimens. In group C, 15/26 of the specimens were found to have a 1+ degree of anisocytosis, while 5/26 had a 2+ degree (Fig. 1). In group D, three out of eight dogs had 1+ anisocytosis and one dog had 2+. Macrocytosis was observed in smears from groups B2, C, and D. In group C, the presence of macrocytes was seen in 7/26 smears and was rated as 1+ in 6/26 cases and as 2+ in 1 case. No macrocytes were found in groups A (control) or B1. Echinocytes were present in all groups, their highest percentage occurring in group C, where 7/26 specimens were classified as 2+ (Fig. 2). Codocytes were present in one out of eight dogs from the control group, and their presence was assessed as 3+. In the remaining groups B1, B2, C, and D, codocytes were observed in more than one smear (Fig. 3). Codocytes were found in 9/26 smears from group C to an extent scored as 1+. Various amounts of dacrocytes were found in groups B2, C, and D (Fig. 4). A 2+ intensity of dacrocytes was observed in 5/26 smears in group C. No dacrocytes were found in the control group or in group B1. Keratocytes were found in all groups of dogs with DMV disease (B, C, D) but not in the control group. The largest percentage content of keratocytes was observed in groups C (4/26 specimens) and D (2/8 specimens). Schistocytes, ovalocytes, and stomatocytes (Fig. 5) were only detected in groups C and D. Acanthocytes were present in one smear from group C. A larger number of erythrocytes with crystallised haemoglobin was found in animals with a high degree of heart failure than in the dogs without heart failure. Crystallised haemoglobin was present in 17% of the specimens from the control group, 22% and 20% of the specimens in groups B1 and B2, respectively, 33% of the specimens in group C, and 37% of the specimens in group D (Fig. 6). Values are presented as mean ± standard deviation (SD). Discussion DMV disease predominantly affects miniature and small breed dogs over eight years of age and is found to occur more frequently in males than females, which is why males were in the majority in the present study. All the dogs were treated according to the 2019 ACVIM guidelines (12). In the present study, two analysers using different measurement methods were used to assess blood morphology. The Scil Vet ABC uses the impedance method for analysis, which means that it counts most erythrocyte components and platelets based on their volumes and a change in current conductivity.
The second analyser, the LaserCyte Dx flow cytometer, uses a combination of optical detection and flow fluorescence to perform measurements. This technique assesses not only cell size, but also the intracellular structure, i.e., the size and shape of the nucleus and its segmentation and granulation, which enables more accurate cell identification and improves result reliability. The results of the morphological analyses performed by the two analysers did not differ significantly. The RDW results are interesting. RDW is a parameter that is routinely assessed by haematological analysers and is a marker of anisocytosis, i.e., the simultaneous occurrence of normocytes, macrocytes, and microcytes. In humans, the mean erythrocyte volume ranges from 80 to 96 fL and is much larger than in dogs (60-75 fL). The presence of anisocytosis in humans may be associated with larger differences in erythrocyte size than in dogs. Hence, it may be assumed that commercial analysers accurately calculate RDW and differences in cell sizes in humans. This may explain why RDW in humans is of material importance in heart disease, while it remains a controversial parameter in dogs (3,4,7,8,13,21). Some authors observed changes in the RDW in dogs with DMV disease without pulmonary hypertension, DMV disease with postcapillary pulmonary hypertension and DMV disease with precapillary pulmonary hypertension. They found that the RDW was increased in dogs with pre- and post-capillary pulmonary hypertension as well as severe pulmonary hypertension compared to the control groups of healthy dogs (13,21). Due to the discrepancies concerning the diagnostic value of an automatically determined RDW, the authors chose to manually assess the blood smears, which provided additional information. While the RDW results obtained from both analysers were within the reference range for dogs, the manual microscopic analysis of the blood smears revealed the presence of erythrocyte anisocytosis. In the dogs with advanced heart failure, the degree of poikilocytosis was significantly greater. Codocytes and dacrocytes were observed in the dogs with heart failure. Moreover, schistocytes, ovalocytes, and stomatocytes were found in the manual blood smears. In addition, crystallised haemoglobin was discovered in the dogs with advanced heart failure. Echinocytes may appear following furosemide administration. However, in the present study, no association was found between poikilocytosis and the use of diuretics. The presence of echinocytes in all studied groups is most likely associated with the presence of the EDTA anticoagulant, which may cause cell contraction in vitro (10). In healthy animals, mainly mature and well-formed erythrocytes called normocytes enter the bloodstream. However, in certain pathological conditions, bone marrow may release cells with abnormal morphology. The presence of poikilocytosis and anisocytosis in dogs with heart failure may therefore become a helpful diagnostic marker. The assessment of the manual smear can be performed quickly and cost-effectively, which should also prompt the routine performance of this examination when heart failure is suspected. It is important to note that a complete blood count with the use of haematological analysers and microscopic evaluation has its limitations. For proper evaluation of blood smears it is necessary to prepare them within an hour of collection.
The result of smear evaluation is influenced by four components: the quality of the smear, staining, the quality of the microscope, and the experience of the examiner. In the case of haematological analysers, the result is affected by haemolysis and lipaemia, which distorts the results for haemoglobin, MCH, and MCHC. Platelet aggregates affect proper analysis of the platelet count, falsely increasing the mean corpuscular volume of erythrocytes. An accurate MCV determination is crucial for RDW measurement, as it is calculated from MCV. For this reason, regardless of the method used for automatic measurement of complete blood count, verification of the result should be based on manual evaluation of the blood smear. In conclusion, haematological blood analysers based on impedance and flow cytometry provide reliable and comparable results of blood morphology in dogs with heart failure. The red blood cell distribution width measured automatically did not differ between the dogs with mitral valve regurgitation and healthy controls, while erythrocyte poikilocytosis increased with the severity of heart failure. Hence, the microscopic assessment of blood smears is a more reliable tool to detect erythrocyte anisocytosis. Conflict of Interests Statement: The authors declare that there is no conflict of interests regarding the publication of this article.
2020-06-24T13:43:31.481Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "baa0842bb7a8cf8bd805f2438bc6129234bee704", "oa_license": "CCBYNCND", "oa_url": "https://content.sciendo.com/downloadpdf/journals/jvetres/64/2/article-p325.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "baa0842bb7a8cf8bd805f2438bc6129234bee704", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
49334232
pes2o/s2orc
v3-fos-license
The effectiveness of synthetic glucocorticoids on the disease course, treatment, and outcome of severe sepsis and septic shock Due to the high morbidity and mortality rates, severe sepsis and septic shock present a major global public health problem. The treatment and management of severe sepsis and septic shock is based on a set of international guidelines called the Surviving sepsis campaign (SSC). SSC guidelines suggest the use of hydrocortisone at a dose of 200 mg IV in adult patients with septic shock refractory to fluid resuscitation and vasopressors. During the last century, the importance of cortisol as an essential hormone at times of stress, such as septic shock, has been recognized. Therefore, the therapeutic use of synthetic glucocorticoids has been the subject of many studies during the last few decades but their results are not consistent. Two of the largest studies (CORTICUS and French study) showed different effects of synthetic glucocorticoids on mortality rates, but they both showed a significantly shorter duration of septic shock. There is an ongoing large, multicenter, double-blind, randomized study (ADRENAL study) and its primary goal is to assess the impact of hydrocortisone therapy on 90-day survival rate. It is expected that the results of this study might provide answers to the questions raised by previously conducted studies. This review will summarize the pathophysiology of sepsis and septic shock and the role of cortisol in its development, and will highlight the most important clinical studies pertaining to synthetic glucocorticoid administration in sepsis and septic shock. Introduction The incidence of severe sepsis and septic shock is estimated to be between 56 and 91 per 100,000 population per year. Due to the high morbidity and mortality rates, sepsis and severe sepsis present a major global public health problem [1]. In their study in 2001, Rivers et al. [2] found that early goal-directed therapy (EGDT), a specific treatment in patients with sepsis, reduced mortality, and this was incorporated into the Surviving sepsis campaign (SSC), the international guidelines for the management of sepsis, severe sepsis, and septic shock [3]. These guidelines include specific procedures, monitoring, and treatment, which among other things include synthetic glucocorticoid therapy [3,4], as seen in Table 1. The SSC recommends synthetic glucocorticoid therapy only in patients with septic shock in whom sepsis-induced hypotension persists despite adequate fluid resuscitation and vasopressor treatment (recommendation level 2C) [3]. The recommended dosage of intravenous hydrocortisone is 200 mg per day, and it is suggested that clinicians taper the dose of steroid therapy when vasopressors are no longer required. Synthetic glucocorticoid therapy and its effect on the course of septic shock has been the subject of many clinical studies. So far, studies have yielded inconclusive results, and their use in this clinical setting remains controversial.
The clinical course of sepsis Sepsis is a clinical syndrome characterized by a systemic response to an infection. It is defined as the presence (probable or documented) of infection together with systemic manifestations of infection [4]. Criteria for sepsis diagnosis are: confirmed or suspected infection, and the presence of two or more criteria for the systemic inflammatory response syndrome (SIRS) that represents the response to an insult from either an infectious or noninfectious origin. Clinically, SIRS includes at least two of the following criteria: a) body temperature >38°C or <36°C, b) heart rate >90/min, c) respiratory rate >20/min or PaCO2 <32 mmHg, d) leukocyte count >12,000 or <4,000 or >10% non-segmented leukocytes (Table 2). Sepsis that results in tissue hypoperfusion or dysfunction of one or more organs is defined as severe sepsis [4]. Further worsening leads to septic shock, defined as sepsis-induced persistent hypotension (systolic blood pressure <90 mmHg, mean arterial pressure <70 mmHg or reduction of systolic blood pressure >40 mmHg from baseline) despite adequate rehydration (Table 3, Table 4) [4]. Multi-organ dysfunction syndrome (MODS) is the most common cause of death in patients with severe sepsis, and it is defined as the presence of altered organ functions requiring medical intervention [5]. MODS develops due to the release and activation of pro- and anti-inflammatory cytokines, activation of the coagulation system, and the release of acute phase proteins and hormonal and neuronal mediators [6][7][8]. The immune response imbalance leads to leukocyte accumulation, disseminated intravascular coagulation, and the formation of intra- and extravascular fibrin clots. Coagulation factor consumption and platelet dysfunction lead to hemorrhagic diathesis, bleeding, microcirculatory dysfunction, and cellular hypoxic damage, resulting in necrosis of parenchymal cells and consequently to MODS (Table 4) [3,7,[9][10][11][12][13]. Respiratory dysfunction manifests as acute lung injury (ALI) and acute respiratory distress syndrome (ARDS), defined as the presence of acute non-cardiogenic pulmonary edema, absence of left atrium hypertension (PACP <18 mmHg) and a hypoxemia ratio of PaO2/FiO2 ≤300 (ALI) or ≤200 (ARDS). Acute tubular necrosis and the sudden and rapid decline in glomerular filtration lead to renal dysfunction and acute renal injury (ARI). Gastrointestinal tract dysfunction leads to bacterial translocation and translocation of digestive endotoxins and proteases into the systemic circulation that can cause paralytic ileus, stress ulcers, and bowel ischemia. Liver damage manifests as the reduced synthesis of acute phase proteins and coagulation cascade proteins, reticuloendothelial system dysfunction, increased liver enzymes, bilirubin, and ammonia, as well as hypotension, leading to central nervous system dysfunction, which can be manifested by quantitative and qualitative disorders of consciousness.
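The SIRS and sepsis definitions listed above reduce to a simple counting rule: documented or suspected infection plus at least two SIRS criteria. A minimal Python sketch of that rule is shown below; the function arguments and thresholds mirror the criteria quoted in the text, and the example patient values are hypothetical.

def sirs_criteria_met(temp_c, heart_rate, resp_rate, paco2_mmhg, wbc_per_ul, band_pct):
    """Count how many SIRS criteria are satisfied (0-4)."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,                                   # temperature
        heart_rate > 90,                                                  # tachycardia
        resp_rate > 20 or paco2_mmhg < 32.0,                              # tachypnoea / hypocapnia
        wbc_per_ul > 12_000 or wbc_per_ul < 4_000 or band_pct > 10.0,     # leukocyte criterion
    ]
    return sum(criteria)

def has_sepsis(infection_present_or_suspected, n_sirs):
    """Sepsis = (probable or documented) infection + at least two SIRS criteria."""
    return infection_present_or_suspected and n_sirs >= 2

# Hypothetical patient: fever, tachycardia, normal respiration, leukocytosis, suspected pneumonia
n = sirs_criteria_met(38.9, 112, 18, 40.0, 15_500, 5.0)
print(n, has_sepsis(True, n))  # -> 3 True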
Cortisol in septic shock The hypothalamic-pituitary-adrenal gland axis (HPA axis) is the main coordinator in the stress response. Corticotropin-releasing hormone (CRH) is released from the hypothalamus, which signals adrenocorticotropic hormone (ACTH) release from the pituitary with subsequent synthesis of cortisol from the adrenal cortex [14]. Cortisol is a hormone that is essential at times of stress, such as septic shock. Cortisol affects the synthesis and release of pro- and anti-inflammatory mediators, modifies monocytes in inflammatory conditions, raises the blood pressure by increasing the reactivity of catecholamine receptors, and, by favoring the extracellular space, affects the redistribution of body fluids [15,16]. During physiological stress, the HPA axis is continuously activated, causing the loss of diurnal cortisol secretion and resulting in elevated cortisol levels [17][18][19][20]. In addition, the increase in blood cortisol levels during septic shock is a consequence of reduced cortisol metabolism, renal failure, increased peripheral resistance to cortisol, and reduced albumin and cortisol binding globulin levels, leading to higher free cortisol levels, the active form of the hormone [20][21][22][23]. Clinical significance of serum cortisol analysis Many studies have been conducted trying to resolve why there is a big difference between serum cortisol levels in patients with septic shock, and whether there is a link between cortisol levels and mortality [24][25][26][27][28][29][30]. However, studies have shown conflicting results. While some studies found that lower serum cortisol levels were associated with increased mortality, others found that higher cortisol levels were associated with increased mortality, and some studies found no association [26,28,[31][32][33][34][35][36]. In one of these studies, a new theory of relative adrenal insufficiency was presented. It was based on an inadequate response or an inadequate increase in serum cortisol levels after a synthetic ACTH (cosyntropin) stimulation test, and it was thought that these patients could benefit from hydrocortisone therapy [32]. However, the SSC does not recommend adrenal function assessment with ACTH stimulation tests in order to assess the need for synthetic glucocorticoids in the case of relative adrenal insufficiency. This recommendation is based on recent studies that showed no statistically significant potential link between the use of cortisol and adrenal function, nor a difference in mortality in patients with or without relative adrenal insufficiency that were treated with hydrocortisone [37,38].
As mentioned before, decreased cortisol binding globulin levels increase the level of free, active forms of cortisol. Therefore, free cortisol levels might better represent the activation of the hypothalamic-pituitary-adrenal axis in patients with hypoproteinemia. This was shown in a study of critically ill patients that had 2-3 times higher total serum cortisol levels in comparison to healthy individuals, and 7-10 times higher free cortisol levels [39]. However, in the cited study, patients with septic shock were not included and the level of hemodynamic instability at the time of sampling was not specified, and therefore the connection between cortisol levels and disease severity cannot be evaluated [40]. A subsequent study showed that patients with septic shock had higher free cortisol levels than patients with sepsis and healthy individuals, and free cortisol levels better correlated with disease severity than total cortisol levels [41]. The clinical application of these studies is questionable because analysis of free cortisol levels is not available in most centers around the world. Synthetic glucocorticoid therapy in the past Throughout history, a number of studies have been performed on the therapeutic use of synthetic glucocorticoids in septic shock. Even in the first half of the 20th century, the importance of the adrenal glands in the protection of the body against bacterial infections was recognized, and later, with the discovery of cortisone, the clinical use of synthetic glucocorticoids in the treatment of infectious conditions began [42][43][44][45][46][47]. In 1976, Schumer presented a study that showed reduced mortality in patients with septic shock treated with methylprednisolone [48]. After this study, methylprednisolone at a dose of 30 mg/kg became a standard part of septic shock treatment. However, studies conducted in the eighties investigating the use of high-dose synthetic glucocorticoid therapy did not show lower mortality rates in patients treated with methylprednisolone, but instead they showed an increase in morbidity due to secondary infections [49][50][51]. During the nineties, several smaller studies compared the effects of placebo and lower doses of hydrocortisone (200-400 mg per day) in patients with septic shock. The results showed that patients treated with hydrocortisone had a higher survival rate and faster reversal of septic shock [52,53]. In 2002, a larger multicenter study (French study), involving 300 patients with septic shock, was conducted in 19 intensive care units in France [37]. The ACTH stimulation test was performed with 250 µg cosyntropin and, depending on the results, the patients were divided into two groups: patients with relative adrenal insufficiency (non-responders), defined as an increase of total cortisol ≤9 µg/dL after stimulation, and patients with normal adrenal function [37]. Comparing the placebo effect with the effect of hydrocortisone and fludrocortisone in lower doses, patients were randomly assigned to receive either hydrocortisone (50-mg intravenous bolus every 6 hours) and fludrocortisone (50 µg tablet once daily) (n = 151) or matching placebos (n = 149) for 7 days. Annane and colleagues came to the conclusion that the group of patients who received corticosteroids had lower mortality and a shortened need for vasopressor therapy (Figure 1) [37]. In addition, no significant difference in the rate of side effects was found.
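In the French study described above, responder status was defined purely by the increment in total cortisol after the short ACTH (cosyntropin) stimulation test. The following Python fragment is a hypothetical encoding of that single decision rule, using the ≤9 µg/dL threshold quoted in the text; it is an illustration of the definition, not a clinical tool.

def acth_stim_response(baseline_cortisol_ug_dl, post_stim_cortisol_ug_dl, threshold_ug_dl=9.0):
    """Classify the short ACTH (cosyntropin) stimulation test as in the French study:
    non-responders (relative adrenal insufficiency) show a rise in total cortisol of
    <= 9 ug/dL after stimulation; larger rises indicate adequate adrenal reserve."""
    delta = post_stim_cortisol_ug_dl - baseline_cortisol_ug_dl
    if delta <= threshold_ug_dl:
        return "non-responder (relative adrenal insufficiency)"
    return "responder (adequate adrenal reserve)"

# Hypothetical patients
print(acth_stim_response(15.0, 21.0))  # rise of 6 ug/dL  -> non-responder
print(acth_stim_response(12.0, 26.0))  # rise of 14 ug/dL -> responder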
The next large multicenter study was The Corticosteroid Therapy of Septic Shock study (CORTICUS), which included 499 patients with septic shock who received placebo or hydrocortisone in a total dose of 200 mg per day [38]. The drugs were administered as a 50-mg intravenous bolus every 6 hours for 5 days, then tapered to 50 mg intravenously every 12 hours for days 6 to 8, 50 mg every 24 hours for days 9 to 11, and then stopped. Patients were divided into two groups: patients with relative adrenal insufficiency and patients with normal adrenal function. In both groups, patients receiving hydrocortisone did not have a decrease in mortality rates compared to patients who received placebo. However, this study also demonstrated the beneficial role of hydrocortisone on shortening the duration of septic shock (Figure 2) [38]. The different results regarding mortality rates in these two studies could be explained by the difference in their methodology. Patients in the French study had higher Simplified Acute Physiology Score II (SAPS II) than those in the CORTICUS study. In the French study, septic shock was defined as a systolic blood pressure <90 mmHg for more than one hour despite adequate fluid resuscitation and vasopressor administration, and in CORTICUS as a systolic blood pressure <90 mmHg despite adequate fluid resuscitation, or the need for vasopressor administration for more than one hour. Furthermore, the duration of hydrocortisone treatment was shorter in the CORTICUS study (5 days vs. 7 days in the French study) and fludrocortisone was not used. Despite these differences, the results of both studies showed that hydrocortisone therapy reverses septic shock quicker, and subsequent analysis of data from the CORTICUS study showed that patients achieved more rapid hemodynamic stability and possibly faster recovery of renal function [54]. Since the publication of the above-mentioned studies, several meta-analyses, including studies published from January 1993 to December 2008, were performed. The results showed that the treatment of septic shock with synthetic glucocorticoids significantly shortens the duration of septic shock but does not reduce the 28-day mortality [55,56]. It should also be noted that the results were independent of adrenal gland function. Currently, there is an ongoing multicenter, double-blind, randomized study, Adjunctive Corticosteroid Treatment in Critically Ill Patients with Septic Shock (ADRENAL) [57]. The study began in June 2012 and its completion is expected in June 2016. It is being conducted in centers in Australia, New Zealand, Saudi Arabia, the United Kingdom, and Denmark. It includes 3800 patients with vasopressor dependent septic shock, and its primary goal is to assess the impact of hydrocortisone therapy at a daily dose of 200 mg on 90-day survival rate. Since it is a large multicenter study, it is expected that the results could provide answers to many questions raised with previously conducted studies.
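The dosing regimens compared above differ not only in duration but also in total exposure to hydrocortisone. As a small worked example (my own arithmetic from the published schedules, not figures reported by either trial), the Python lines below compute the cumulative dose implied by each regimen.

# French study: 50 mg IV every 6 h (4 doses/day) for 7 days (fludrocortisone given separately).
french_total_mg = 50 * 4 * 7                               # = 1400 mg hydrocortisone

# CORTICUS: 50 mg IV every 6 h for 5 days, every 12 h on days 6-8, every 24 h on days 9-11.
corticus_total_mg = 50 * 4 * 5 + 50 * 2 * 3 + 50 * 1 * 3   # = 1000 + 300 + 150 = 1450 mg

print(french_total_mg, corticus_total_mg)  # -> 1400 1450
# Despite the shorter full-dose phase, the CORTICUS taper yields a similar cumulative dose;
# the main differences between the trials therefore lie in patient severity, the definition
# of septic shock and the use of fludrocortisone, as discussed above.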
Conclusion Due to the high morbidity and mortality, sepsis, severe sepsis, and septic shock are an important global public health problem. Treatment recommendations have changed over the years, and one controversial issue that has remained is the use of synthetic glucocorticoids. Over the years, numerous studies have been conducted, investigating the role of synthetic glucocorticoids in septic shock; however, the results have been inconsistent. For now, the risk-benefit ratio of synthetic glucocorticoids remains unclear. In most of the world, the therapeutic approach for sepsis follows the SSC guidelines, which, for now, suggest the use of hydrocortisone at a dose of 200 mg IV in adult patients with septic shock refractory to fluid resuscitation and vasopressors. Although synthetic glucocorticoids may accelerate the recovery of unstable patients, one should always keep their severe side effects in mind. Therefore, the clinician must play a central role in the decision-making process and assess the advantages and disadvantages of synthetic glucocorticoids in each patient individually.
Table 1. Selected SSC recommendations (grade of recommendation in parentheses):
Blood cultures before antibiotic therapy (1C)
Imaging studies performed promptly to confirm potential source of infection (1C)
Administration of broad-spectrum antibiotic therapy within one hour of diagnosis of septic shock (1B) and severe sepsis without septic shock (1D)
Reassessment of antibiotic therapy with microbiology and clinical data to narrow coverage, when appropriate (1C)
The usual 7-10 days of antibiotic therapy guided by clinical response (1D)
Administration of either crystalloid or colloid fluid resuscitation (1B)
Fluid challenge to restore mean circulating filling pressure (1C)
Reduction in rate of fluid administration with rising filling pressures and no improvement in tissue perfusion (1D)
Vasopressor preference for norepinephrine or dopamine to maintain the initial target of MAP >65 mmHg (1C)
Dobutamine inotropic therapy when cardiac output remains low despite fluid resuscitation and combined inotropic/vasopressor therapy (1C)
Stress-dose steroid therapy given only in septic shock after blood pressure is identified to be poorly responsive to fluid and vasopressor therapy (2C)
Recombinant activated protein C in patients with severe sepsis and clinical assessment of high risk for death (2B, except 2C for postoperative patients)
In the absence of tissue hypoperfusion, coronary artery disease, or acute hemorrhage, target a hemoglobin of 70-90 g/L (1B)
A low tidal volume (1B) and limitation of inspiratory plateau pressure strategy (1C) for acute lung injury (ALI)/acute respiratory distress syndrome (ARDS)
Application of at least a minimal amount of PEEP in acute lung injury (1C)
Head of bed elevation in mechanically ventilated patients unless contraindicated (1B)
Grades are given according to the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) system. The GRADE system classifies recommendations as strong (Grade 1) or weak (Grade 2), and quality of evidence as high (Grade A), moderate (Grade B), low (Grade C) or very low (Grade D). The grade of strong or weak is considered of greater clinical importance than the difference in the letter level of quality of evidence. GRADE working group: Grading quality of evidence and strength of recommendations. Dellinger RP, Levy MM, Rhodes A, Annane D, Gerlach H, Opal SM, Sevransky JE, Sprung CL, Douglas IS, Jaeschke R, Osborn TM, Nunnally ME, Townsend SR, Reinhart K, Kleinpell RM, Angus DC, Deutschman CS, Machado FR, Rubenfeld GD, Webb SA, Beale RJ, Vincent JL, Moreno R: Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med 2013, 41:580-637.
Table 2. Diagnostic criteria for sepsis: infection, documented or suspected, and some of the following.
Plasma C-reactive protein more than two SD above the normal value; plasma procalcitonin more than two SD above the normal value
Hemodynamic variables: arterial hypotension (SBP <90 mmHg, MAP <70 mmHg, or an SBP decrease >40 mmHg in adults or less than two SD below normal for age)
Organ dysfunction variables: arterial hypoxemia (PaO2/FiO2 <300); acute oliguria (urine output <0.5 mL/kg/h for at least 2 h despite adequate fluid resuscitation); creatinine increase >0.5 mg/dL or 44.2 μmol/L; coagulation abnormalities (INR >1.5 or aPTT >60 s); ileus (absent bowel sounds); thrombocytopenia (platelet count <100,000 μL-1); hyperbilirubinemia (plasma total bilirubin >4 mg/dL or 70 μmol/L)
Tissue perfusion variables: hyperlactatemia (>1 mmol/L); decreased capillary refill or mottling
WBC = white blood cell; SBP = systolic blood pressure; MAP = mean arterial pressure; INR = international normalized ratio; aPTT = activated partial thromboplastin time. Adapted from Levy MM, Fink MP, Marshall JC, et al: 2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions Conference. Crit Care Med 2003; 31:1250-1256.
Figure 2. The CORTICUS study showed a decreased time to reversal of septic shock in patients receiving hydrocortisone. (Adapted from Sprung CL et al. The effects of high-dose corticosteroids in patients with septic shock. A prospective, controlled study. N Engl J Med 1984; 311:1137)
2018-06-22T14:34:23.038Z
2016-03-15T00:00:00.000
{ "year": 2016, "sha1": "05b97ec6357f88a459c9eb50a1e572520b7fde91", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.21040/eom/2016.2.8", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "1f2b56f0f8bb73cd3e0c1032b50634bfff4b7845", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233870281
pes2o/s2orc
v3-fos-license
Disproportionality Analysis of Safety Signals for a Wide Variety of Opioid-Related Adverse Events in Elderly Patients Using the Japanese Adverse Drug Event Report (JADER) Database Opioids are widely used for the treatment of moderate/severe pain in cancer and noncancer patients. In this study, we searched for safety signals for a wide variety of opioid-related adverse events (AEs) in elderly patients by disproportionality analysis using the Japanese Adverse Drug Event Report (JADER) database. Data from the JADER database from April 2004 to May 2018 were obtained from the Pharmaceuticals and Medical Devices Agency website. Safety signal detection of opioid-related AEs in elderly patients was defined using the relative elderly reporting odds ratio (ROR). Among the analyzed AEs, opioid-induced neurotoxicity (OIN) was assessed based on the time to onset using the Weibull shape parameter. The following safety signals were detected in elderly patients: respiratory depression, somnolence, hallucinations, akathisia and OIN. Fentanyl, tramadol, oxycodone and morphine exhibited a large relative elderly ROR for OIN. The median time to onset of OIN of transdermal fentanyl, oral tramadol, oral oxycodone and oral morphine was 13.5, 6, 9, and 6 d, respectively. These opioids were classified as early failure types using the Weibull distribution. Our results showed that elderly patients who are administered opioids should be closely monitored for AEs, such as respiratory depression, OIN and akathisia. INTRODUCTION Pain strongly affects the QOL of patients. Cancer pain has been reported in 30 to 60% of patients. 1) Opioids are widely used for the treatment of moderate/severe pain in cancer and noncancer patients. The most widely accepted algorithm for the treatment of cancer pain was developed by WHO. 2,3) It suggests that patients with cancer pain should be treated with acetaminophen, nonsteroidal anti-inflammatory drugs (NSAIDs) and opioids. Opioids are effective for treating cancer and noncancer pain; however, adverse events (AEs) frequently occur with opioid therapy. Opioid-induced AEs include constipation, nausea, respiratory depression, delirium and somnolence. 4) Furthermore, "opioid-induced neurotoxicity (OIN)" is a distressing symptom in palliative care patients receiving opioids. 5) Symptoms of OIN include delirium, drowsiness, hallucinations, myoclonus and seizures. 5) OIN was seen in 15% of cancer patients receiving opioids as part of inpatient palliative care. 5) However, there have been only a limited number of studies on OIN. AEs to prescribed medications among elderly patients are frequent causes of hospital admission and death. 6,7) AEs were observed in 6 to 15% of hospitalized elderly patients in Japan. 8) Polypharmacy and inappropriate prescriptions are known risk factors for AEs in elderly patients. To assess potential inappropriate prescriptions in elderly patients, the Screening Tool of Older People's Prescriptions (STOPP) and the Screening Tool to Alert to Right Treatment (START) represent important criteria. 9) For example, elderly people taking anticholinergic drugs are at increased risk for cognitive decline and dementia. 10) The use of benzodiazepines is associated with falls in elderly patients. 11) Thus, the possibility of AEs to prescription medications should be kept in mind when evaluating elderly patients. 
The Japanese Adverse Drug Event Report (JADER) database of the Pharmaceuticals and Medical Devices Agency (PMDA) is a large spontaneous reporting system that reflects the realities of clinical practice in Japan. The JADER database is utilized to analyze the safety signal detection of AEs. The reporting odds ratio (ROR) is the odds ratio used to analyze safety signals that are important for the detection of AEs. [12][13][14][15] Previous studies demonstrated that safety signals of respiratory depression were detected for opioids, such as fentanyl, morphine, oxycodone and pentazocine. 15) Furthermore, morphine showed a large ROR with statistical significance in elderly (≥70 years old) patients. 15) In a secondary analysis of a retrospective cohort study, the risk of respiratory depression increased substantially after 60 years of age. 16) In a randomized double-blind placebo-controlled study, older (≥65 years old) participants receiving opioid therapy were more likely than those younger than 65 to report constipation, fatigue and anorexia. 17) However, the risk of opioid-related AEs in elderly patients has not been fully investigated. In addition, data are lacking on the incidence of OIN. In this study, we searched for safety signals for the wide variety of opioid-related AEs in elderly patients by disproportionality analysis using the JADER database. Additionally, we analyzed the time to onset of OIN using the JADER database. Data Selection of Opioid-Related AEs The AE names were defined using the Medical Dictionary for Regulatory Activities/Japanese version 21.1 (MedDRA/J). As opioid-related AEs, we extracted 286 preferred terms (PTs) and 5 standardized MedDRA queries (SMQs) (Table 1). The analyzed PTs were presented in ≥6 reports among the total cases. Definition of Cancer Patients The primary diseases in the HIST tables are based on the MedDRA. In this study, and similar to previous reports, 15) Safety Signal Detection The ROR was calculated using two-by-two contingency tables as follows: a: the number of patients experiencing the AE of interest after using opioids, b: the number of patients experiencing all other AEs after using opioids, c: the number of patients experiencing the AE of interest after using other drugs, and d: the number of patients experiencing all other AEs after using other drugs. The ROR was calculated as ROR = (a/b)/(c/d). The safety signal was considered significant if the lower limit of the 95% confidence interval (CI), estimated as exp[ln(ROR) ± 1.96 × √(1/a + 1/b + 1/c + 1/d)], exceeded 1. Safety signal detection in elderly patients was defined using the relative elderly ROR reported by the European Medicines Agency. 20) The relative elderly ROR was calculated as: relative elderly ROR = ROR(elderly) / ROR(younger). When the lower limit of the 95% CI of the ROR(elderly) and the relative elderly ROR was greater than 1, it was defined as a safety signal in elderly patients. Data analyses were performed using JMP, version 5.0.1 (SAS Institute Inc., Cary, NC, U.S.A.). Time-to-Onset Analysis The median duration, quartiles and Weibull shape parameter were utilized in the evaluations of the time to onset of OIN. The Weibull shape parameter test is used for statistical analysis of time-to-onset data and can describe the nonconstant rate of the incidence of AEs. 15) The time to onset from the JADER database was calculated from the time of the patient's first prescription to the occurrence of the AEs. Patients with complete AE occurrence and prescription start date data were used for the time-to-onset analyses.
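To make the signal-detection arithmetic concrete, the following Python sketch computes the ROR, its 95% confidence interval and the relative elderly ROR exactly as defined above (the time-to-onset analysis is described further below). The counts in the example are hypothetical and are not taken from the JADER analysis.

import math

def ror_with_ci(a, b, c, d, z=1.96):
    """Reporting odds ratio from a 2x2 contingency table.
    a: target AE with the drug of interest   b: all other AEs with the drug
    c: target AE with all other drugs        d: all other AEs with all other drugs
    """
    ror = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - z * se_log)
    upper = math.exp(math.log(ror) + z * se_log)
    return ror, lower, upper

# Hypothetical counts for the elderly and younger strata
ror_eld, lo_eld, hi_eld = ror_with_ci(a=40, b=960, c=200, d=48_800)
ror_yng, lo_yng, hi_yng = ror_with_ci(a=15, b=985, c=180, d=49_820)

relative_elderly_ror = ror_eld / ror_yng
# A complete analysis would also require the lower CI bound of the relative elderly ROR
# itself to exceed 1, as stated in the definition above.
print(f"ROR elderly = {ror_eld:.2f} ({lo_eld:.2f}-{hi_eld:.2f}), "
      f"ROR younger = {ror_yng:.2f}, relative elderly ROR = {relative_elderly_ror:.2f}")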
When the onset time of the AEs was 365 d or longer after the initiation of administration, the calculation was performed using 365 d as the onset time. The scale parameter α of the Weibull distribution determines the scale of the distribution function. A larger scale value stretches the distribution. A smaller scale value shrinks the data distribution. The shape parameter β of the Weibull distribution indicates the hazard without a reference population. When β is equal to 1, the hazard is estimated to be constant over time. When β is greater than 1 and the 95% CI of β excludes 1, the hazard is considered to increase over time. When β is less than 1 and the 95% CI of β excludes 1, the hazard is considered to decrease over time. Time-to-onset analyses were performed using JMP, version 5.0.1 (SAS Institute Inc.). Time to Onset of OIN in Elderly Patients and Patient Background Many of the symptoms of OIN included "Delirium," "Somnolence" and "Hallucination" (Fig. 1a). The most common route of administration for fentanyl was transdermal. Tramadol, oxycodone and morphine were commonly administered orally (Fig. 1b). The time to onset of OIN was profiled using the Weibull distribution. For the time-to-onset analysis, we extracted combinations for which complete information regarding the date of treatment initiation and the date of AE onset were available. Younger patients had few cases of complete information and thus were not analyzed. The median time to onset of OIN for transdermal fentanyl, oral tramadol, oral oxycodone and oral morphine was 13.5, 6, 9, and 6 d, respectively ( Table 5). The β values of transdermal fentanyl, oral tramadol, oral oxycodone and oral morphine were 0.6 (95% CI: 0.5-0.8), 0.6 (95% CI: 0.5-0.7), 0.6 (95% CI: 0.5-0.7) and 0.6 (95% CI: 0.5-0.8), respectively. Figure 2 presents a histogram of the number of cases of OIN between 0 and 100 d. The peaks of reports for any opioids were all within 5 d. DISCUSSION In this study, we searched for safety signals of the wide variety of opioid-related AEs in elderly patients by disproportionality analysis using the JADER database. Additionally, we analyzed the time to onset of OIN using the JADER database. Our results suggest that several safety signals were detected for opioids in elderly patients. A safety signal of OIN was detected for total cases, fentanyl, tramadol, oxycodone and morphine in elderly patients. Among these drugs, a safety signal of OIN was detected for total cases, oxycodone and morphine in elderly cancer patients. Furthermore, we demonstrated that OIN tended to occur early after the initiation of opioid administration to elderly patients. In general, the risk of AEs is higher in elderly patients than in younger patients. 21) Previous reports showed that AEs are frequent in patients ≥65 years old. 22,23) Elderly patients exhibit reduced hepatic and renal function, which can affect drug pharmacokinetics and pharmacodynamics. 24) Furthermore, the risk of AEs in elderly patients increases with comorbidity, polypharmacy, and inappropriate prescribing. 21) The package inserts of opioids describe risks for respiratory depression in elderly patients, but information regarding other AEs is unclear. The risk of opioid-related AEs in elderly patients is not defined by the STOPP criteria. Opioids are needed to control moderate/severe pain in cancer and noncancer elderly patients. Consequently, our results have important implications for elderly patient management. 
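As a methodological aside to the time-to-onset results above, the Weibull profiling can be sketched with SciPy: a two-parameter Weibull distribution (location fixed at zero) is fitted to onset-day data and the shape parameter β is read off, where β below 1 indicates a hazard that decreases over time (the early failure pattern reported here). The onset data below are hypothetical, and the percentile bootstrap is only one simple way, assumed here for illustration, to check whether the 95% CI of β excludes 1.

import numpy as np
from scipy.stats import weibull_min

# Hypothetical days from first opioid prescription to onset of neurotoxicity
onset_days = np.array([1, 2, 2, 3, 4, 5, 6, 6, 8, 9, 13, 14, 21, 30, 45, 90])

# Fit shape (beta) and scale (alpha), with the location fixed at zero
beta, loc, alpha = weibull_min.fit(onset_days, floc=0)
print(f"shape beta = {beta:.2f}, scale alpha = {alpha:.1f} d, median = {np.median(onset_days):.1f} d")

# Simple percentile bootstrap for the shape parameter (illustrative only)
rng = np.random.default_rng(0)
boot = [weibull_min.fit(rng.choice(onset_days, size=onset_days.size, replace=True), floc=0)[0]
        for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for beta: {lo:.2f}-{hi:.2f}")
# If beta < 1 and the interval excludes 1, the hazard decreases with time, i.e. the AE is
# classified as an early failure type, as reported for OIN above.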
Recent reports have described the risk of AEs using the JADER database in elderly patients. Sugawara et al. reported that the safety signal of respiratory depression was detected for opioids in elderly patients. 15) Chisaki et al. reported 27 combinations of drugs and AEs with an increased risk in elderly patients over 70 years old compared with younger patients. 25) Hatahira et al. reported that several drugs (such as calcium channel blockers, benzodiazepines, and drugs for herpes zoster virus infection) and increased patient age are both associated with fall-related AEs. 26) This is the first study to detect a wide variety of safety signals for opioids in elderly patients using the JADER database and RORs. From a disproportionality analysis using the Food and Drug Administration Adverse Event Reporting System (FAERS) database, Andreaggi et al. reported that opioid related-depression and suicide self-injury were more likely to be reported for ages ≥65 compared to age group 18 to 64. 27) In our analysis using relative elderly ROR, the safety signal of suicide attempt was detected in the total cases, while depression was not detected. However, the ROR elderly and ROR younger values for depression were 2.8 (95% CI: 1.6-5.0) and 0.9 (95% CI: 0.4-2.5), respectively. Furthermore, the relative elderly ROR of depression was not significant but was high at 3.0 (95% CI: 0.96-9.4). Depression in older adults is associated with an increased risk of morbidity and suicide. 28) On the other hand, depression is common in patients with pain, especially cancer patients. It is unclear whether depression and suicide attempts are associated with AEs of opioids. However, we need to monitor the mental health of elderly patients who are administered opioids. Notably, several central nervous system (CNS) safety signals were detected, such as OIN and akathisia. Zedler et al. reported that a risk factor for serious opioid-related respiratory or CNS depression was age ≥55 years old. 29) Albrecht et al. reported increased sensitivity of the CNS to midazolam in elderly subjects. 30) Similarly, increased sensitivity of the CNS to opioids is also assumed in elderly patients. In this study, four safety signals were detected for morphine. Morphine is converted to the morphine-6-glucuronide (M6G) metabolite, which has analgesic activity. After conversion, M6G is excreted in the urine. Renal function is reduced with age, and M6G can accumulate to higher levels in elderly patients. 31) Three safety signals were detected for fentanyl. Previous studies demonstrated that serum fentanyl concentrations were higher in elderly patients than in younger patients. 32) Decreased fentanyl clearance in elderly patients might be a result of several factors, including decreases in hepatic blood flow or hepatic microsomal enzyme activity, or increases in drug protein binding. 32) Similarly, oxycodone and tramadol clearance decrease in elderly patients and patients with renal dysfunction. 31) In addition to pharmacokinetic changes, enhanced pharmacodynamic sensitivity is seen with opioids in elderly patients. 31) In this study, safety signals of an overdose in elderly patients were detected for total cases and fentanyl. Therefore, the use of these opioids for elderly patients may increase the risk of AE occurrence due to changes in their pharmacokinetics and pharmacodynamics. OIN is a distressing condition seen in palliative care patients receiving opioids. 
Symptoms of OIN include delirium, hallucinations, somnolence, hyperesthesia, epilepsy, seizures and myoclonus. Risk factors for OIN include escalating doses of opioids, dehydration, renal failure, end-stage disease and advanced age. 33) On the other hand, the risk of OIN in elderly patients has not previously been assessed in an observational or database study. The primary treatment for OIN is hydration, dose reduction or discontinuation of opioids and opioid rotation. Failure to appropriately diagnose and treat OIN will lead to poor symptom control and the potential for seizures. 34) We found that the safety signal of OIN was detected in elderly patients. The most common symptoms of OIN cases were delirium, somnolence and hallucination. This result is consistent with a previous report. 5) The most common routes of administration of fentanyl and the other three drugs (tramadol, morphine and oxycodone) in the OIN cases were transdermal and oral, respectively. This result was considered to reflect the typical dosage form of each drug used in Japan. Furthermore, we evaluated the time to onset of OIN using Weibull distribution parameters. The usefulness of the Weibull distribution for profiling the time to the onset of AEs has been reported. 12,15,35) Sugawara et al. reported that the time to onset of respiratory depression associated with opioids was classified as the early failure type. 15) Similarly, our results indicated that transdermal fentanyl, oral tramadol, oral oxycodone and oral morphine-induced neurotoxicity were early failure types. Opioid receptors (mu-, delta-, kappa-opioid receptors) exist throughout the central and peripheral nervous systems and are linked to a variety of neurotransmitters. Opioids have immediate clinical effects by directly stimulating those receptors. Similarly, opioid-related AEs, such as OIN, may develop almost immediately by directly stimulating those receptors. Oral tramadol and oral morphine-induced neurotoxicity were reported in 75% of patients within approximately 2 weeks after initiating their use. Transdermal fentanyl and oral oxycodone-induced neurotoxicity were reported in 75% of patients within approximately one month and exhibited delayed onset compared to oral tramadol and oral morphine neurotoxicity. Our results suggest that neurotoxicity, including delirium, hallucinations and somnolence, should be carefully monitored for the first month and especially the first 5 d, among patients who are administered opioids. Furthermore, we defined cancer patients based on MedDRA considering the confounding nature of classification by indi-cations. The results showed that a safety signal of OIN was detected in elderly cancer patients prescribed oxycodone and morphine. When these drugs are administered for the management of cancer pain in elderly patients, the potential for neurotoxicity should be carefully assessed. On the other hand, a safety signal of OIN was not detected in elderly cancer patients administered fentanyl and tramadol. The number of AEs for these two drugs was reduced based on an exclusive analysis of cancer patients. Subgroup analyses showed benefits in both sensitivity and precision over crude analyses for the larger databases; however, for the smaller databases, a gain in precision tended to result in some loss of sensitivity. 36) We believe that the fentanyl and tramadol results were affected by the decreased number of patients based on the exclusive analysis of cancer patients. 
Delirium is the most common neuropsychiatric complication observed in patients with cancer. 37) The influence of disease on OIN requires further investigation. Our study has several limitations. Spontaneous reporting systems, such as the JADER database, are associated with various biases, including overreporting, underreporting, missing data, and the lack of a denominator. [12][13][14][15] In addition, we did not evaluate drug-drug interactions and opioid dose. The time to onset of OIN in young patients has not been investigated and was not compared with elderly patients. In this study, "total cases" referred to a collection of 12 opioids. However, it is likely that each opioid has different patterns of AEs in elderly patients. We have not been able to consider why some safety signals were detected, such as anaphylactoid reaction and large intestine perforation. Finally, the ROR indicates an increased risk of AE reporting and not a risk of AE occurrence. 15) Further studies, such as observational studies, are needed to evaluate the risk of opioid-related AEs in elderly patients. In summary, this study was the first to evaluate the association between opioids and a wide variety of AEs in elderly patients using the JADER database and the relative elderly ROR. We demonstrated that opioid-related AEs, such as respiratory depression, OIN and akathisia, in patients ≥60 years old, are potentially increased compared to those in patients <60 years old. Our results showed that elderly patients who are administered opioids should be closely monitored for AEs.
2021-05-07T06:22:54.284Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b9d5ce69d5a5be01b314a1d06909025a4846cb5d", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/bpb/44/5/44_b20-00904/_pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1d913d356736ee53dc98d90351e7da2e1ca5ee80", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
229548193
pes2o/s2orc
v3-fos-license
Comparison of formaldehyde tropospheric columns in Australia and New Zealand using MAX-DOAS, FTIR and TROPOMI Abstract. South-eastern Australia has been identified by modelling studies as a hotspot of biogenic volatile organic compound (VOC) emissions; however, long-term observational VOC studies are lacking in this region. Here, 2.5 years of multi-axis differential optical absorption spectroscopy (MAX-DOAS) formaldehyde (HCHO) measurements in Australasia are presented, from Broadmeadows, in northern Melbourne, Australia, and from Lauder, a rural site in the South Island of New Zealand. Across the measurement period from December 2016 to November 2019, the mean formaldehyde columns measured by the MAX-DOAS were 2.50 ± 0.61 × 10 15 molec. cm −2 at Lauder and 5.40 ± 1.59 × 10 15 molec. cm −2 at Broadmeadows. In both locations, the seasonal cycle showed a pronounced peak in Austral summer (December-January-February) consistent with temperature-dependent formaldehyde production from biogenic precursor gases. The amplitude of the seasonal cycle was 0.7 × 10 15 molec.
cm −2 at Lauder, and it was 2.0 × 10 15 molec. cm −2 at Broadmeadows. The Lauder MAX-DOAS HCHO measurements are compared with 27 months of co-located Fourier transform infrared (FTIR) observations. The seasonal variation of Lauder MAX-DOAS HCHO, smoothed by the FTIR averaging kernels, showed good agreement with the FTIR measurements, with a linear regression slope of 1.03 and an R 2 of 0.66 for monthly averaged formaldehyde partial columns (0-4 km). In addition to ground-based observations, a clear way to address the VOC measurement gap in areas such as Australasia is with satellite measurements. Here, we demonstrate that the TROPOspheric Monitoring Instrument (TROPOMI) can be used to distinguish formaldehyde hotspots in forested and agricultural regions of south-eastern Australia. The MAX-DOAS measurements are also compared to TROPOMI HCHO vertical columns at Lauder and Melbourne; very strong monthly average agreement is found for Melbourne (regression slope of 0.61 and R 2 of 0.95) and a strong agreement is found at Lauder (regression slope of 0.73 and R 2 of 0.61) for MAX-DOAS vs. TROPOMI between May 2018 and November 2019. This study, the first long-term satellite comparison study using MAX-DOAS in the Southern Hemisphere, highlights the improvement offered by TROPOMI's high resolution over previous satellite products and provides the groundwork for future studies using ground-based and satellite DOAS for studying VOCs in Australasia. effective method for constraining VOC emissions and for studying the role of VOCs in atmospheric reactivity (see Kefauver et al., 2014, and references therein). Formaldehyde has atmospheric mixing ratios ranging from several hundred parts per trillion (ppt) in unpolluted marine air (Mahajan et al., 2010;Peters et al., 2012) to tens of parts per billion (ppb) in polluted urban air (e.g. Zhu et al., 2017). Primary sources of formaldehyde include direct emission from fossil fuel combustion and wild fires. The main secondary sources of HCHO are the oxidation of methane, isoprene and monoterpenes. Methane is considered to be the primary background HCHO source globally (Pfister et al., 2008), and because it is a potent greenhouse gas, studying background formaldehyde levels has important climate change implications. Isoprene and monoterpenes emitted from vegetation constitute the main source of biogenic carbon to the atmosphere (Guenther et al., 2012). While methane is considered the most important OH sink in background oceanic air, isoprene and monoterpenes constitute the largest OH reactivity over land; hence, these biogenic VOCs play a crucial role in determining oxidative capacity (Fuentes et al., 2000;Lelieveld et al., 2008). Isoprene and monoterpenes are also thought to play a strong role in the climate system through radiative forcing by secondary formation of organic aerosols (Henze et al., 2008). Photolysis and reaction with OH limit the lifetime of formaldehyde to several hours during the daytime which facilitates the comparison of colocated measurements and also means that spatially resolved HCHO measurements closely resemble the distribution of its VOC sources (Zhu et al., 2016). Biogenic VOC emissions in Australasia are among the highest in the world due to the abundance of Australian endemic eucalyptus trees, known to be high isoprene and monoterpene emitters (Winters et al., 2009;Guenther et al., 2012). 
Global-scale modelling has suggested that Australia has the highest isoprene-derived formaldehyde levels of any continent (Pfister et al., 2008); however, constraining biogenic VOC emissions has proven challenging in Australia to date. Formaldehyde measurements, such as those from satellites, are common proxies for biogenic VOC emissions, but the accuracy of these measurements under low-NO x conditions has not been observationally verified (Zhu et al., 2016;Wolfe et al., 2016), which is likely due to uncertainties in differentiating HCHO from different anthropogenic, isoprene and monoterpene sources. Emmerson et al. (2016, 2018) highlighted this by demonstrating that the Model of Emissions of Gases and Aerosols from Nature (MEGAN) biogenic emissions scheme, used in numerous global- and regional-scale chemistry and climate models, overestimates isoprene and underestimates monoterpenes in the thickly eucalyptus-forested south-east of Australia. Therefore, reliable, long-term biogenic VOC measurements are needed in the Australasian region. The multi-axis differential optical absorption spectroscopy (MAX-DOAS) technique, a passive spectroscopic method which uses scattered solar radiation, can facilitate this through measurement of formaldehyde. In the last decade HCHO MAX-DOAS measurements have been reported from many locations worldwide (Hoque et al., 2018a, b;Heckel et al., 2005;Pinardi et al., 2013;Peters et al., 2012;Vigouroux et al., 2009), but none have been reported in Australasia so far. Developments in satellite sensors and retrievals of atmospheric trace gases over the past 2 decades can offer new insights into air quality and composition (Martin, 2008). Validation by ground-based instrumentation is an important step in understanding the utility of such satellite data products. Because satellite instruments and MAX-DOAS share the same spectroscopic technique for retrieving UV and visible absorbing trace gases, MAX-DOAS is an ideal validation tool as demonstrated for HCHO in several previous papers (e.g. Chance et al., 2000;Thomas et al., 1998;Hoque et al., 2018b;De Smedt et al., 2015;Vigouroux et al., 2009;Lee et al., 2015;Kurosu et al., 2007). However, no such validation studies have been published for the Australasian region to date. Measurements in two locations are discussed in this paper: Broadmeadows, on the northern fringe of Melbourne in south-eastern Australia, and Lauder, a remote locality in the South Island of New Zealand, as shown in the map in Fig. 1. Australia's Bureau of Meteorology has operated an EnviMeS MAX-DOAS instrument on a laboratory roof at its training facility at Broadmeadows (37.690° S, 144.947° E; 110 m a.m.s.l.) since December 2016. This location is close to some significant pollution sources, including factories and major roadways. MAX-DOAS measurements of nitrogen dioxide and nitrous acid at the Broadmeadows site have been reported in Ryan et al. (2018). Lauder is located in Central Otago, New Zealand (45.038° S, 169.684° E; 370 m a.m.s.l.), surrounded by irrigated farmland, ringed by distant mountain ranges and lying approximately 30 km north-east of the nearest large town, Alexandra. An EnviMeS MAX-DOAS has been operational at Lauder since November 2016, allowing a significant period of overlap between the Lauder and Melbourne time series.
The National Institute of Water and Atmospheric Research (NIWA) EnviMeS MAX-DOAS demonstrated good performance at the CINDI-2 international comparison campaign held in the Netherlands in 2016 (Kreher et al., 2020). Both Broadmeadows and Lauder have regular co-located meteorological, aerosol, radiation and trace gas measurements; the Lauder site is part of numerous international atmospheric monitoring networks (Pollard et al., 2017;Tradowsky et al., 2018). In addition, formaldehyde vertical columns measured at Lauder using Fourier transform infrared (FTIR) spectroscopy (Vigouroux et al., 2018) are available for comparison with the MAX-DOAS measurements. The paper is structured as follows: Sect. 2 presents the MAX-DOAS and FTIR HCHO retrieval approach used in this work.
MAX-DOAS measurements
MAX-DOAS measurements at Broadmeadows were made with a 2D EnviMeS instrument pointing to a fixed azimuth direction of 208°. The measurement, completed over 12 min, consisted of the elevation angles 90, 30, 20, 10, 5, 3, 2 and 1°, as described in Ryan et al. (2018). At Lauder, a 1D EnviMeS instrument was used, pointed at a fixed azimuth of 30°, and the elevation angles used were 90, 40, 20, 10, 5, 3 and 2°. Dark current and offset corrections were made for each dataset using calibration spectra collected nightly, and initial wavelength and line-shape calibrations were facilitated by laboratory-measured mercury emission lamp spectra.
MAX-DOAS spectral analysis
The MAX-DOAS data analysis process consists of two parts: calculation of differential slant column densities (dSCDs) from the raw spectra and an inversion algorithm to retrieve vertical trace gas profiles from the dSCD information. The spectral retrieval was done in QDOAS (http://uv-vis.aeronomie.be/software/QDOAS/, last access: 10 June 2020). Cross sections used in the analysis were NO 2 at 220 and 298 K (Vandaele et al., 1998), O 4 at 298 K (Thalman and Volkamer, 2013), O 3 at 223 and 243 K (Serdyuchenko et al., 2014), HCHO at 297 K (Meller and Moortgat, 2000), BrO at 223 K (Fleischmann et al., 2004), HONO at 298 K (Stutz et al., 2000) and a Ring cross section at 250 K (Grainger and Ring, 1962). All cross sections were pre-convolved with the line shape of the instrument, and fifth-order polynomial and second-order offset terms were also included in QDOAS. Differential slant column densities (dSCDs) of O 4 , used in MAX-DOAS aerosol retrievals, were determined using the wavelength range from 338 to 370 nm, as in studies such as Ryan et al. (2018) and Kreher et al. (2020). A simple sensitivity study was run to determine the appropriate wavelength range for formaldehyde retrieval given that two wavelength ranges are common in previous papers: 324.5-359 and 336-359 nm. Absorption bands for formaldehyde are, in theory, measurable by the MAX-DOAS UV spectrometers used in this work down to 300 nm. Published research to date, however, tends to avoid fitting below 320 nm due to strong ozone absorption. Retrieval strategies in other work use a fitting range from 336 to 359 nm (e.g. Kreher et al., 2020;Heckel et al., 2005;Pinardi et al., 2013;Vigouroux et al., 2009) encompassing the three highest UV HCHO absorption features. Here a simple sensitivity study was run to determine if any benefit can be derived from additional absorption bands in the extended range (e.g. Chan et al., 2019;Johansson et al., 2009;Wang et al., 2017b;Franco et al., 2015).
Data for this test were chosen from a clear-sky autumn day at Broadmeadows with maximum HCHO dSCDs of ≈ 7.5 × 10 16 molec. cm −2 at a 3° elevation angle. The calculation of fit error in QDOAS depends on the linear fit parameters, the residuals and the information content of the retrieval, which, in turn, depends on the number of wavelengths in the fit. Neither the residual root mean square (RMS) (Fig. 2c) nor the magnitude of the dSCD (Fig. 2a) were substantially impacted by the choice of the wavelength range, suggesting that the improvement in fit error for the 324.5-359 nm range (Fig. 2b) results from increasing the information content of the retrieval. As a result of the increased information content and resulting lower fit errors, the 324.5-359 nm range was adopted in this paper for formaldehyde. An example HCHO DOAS fit is shown in Fig. 2d and demonstrates the convincing retrieval of formaldehyde dSCDs using the extended range.
MAX-DOAS profile retrievals
Formaldehyde vertical columns and profiles from Broadmeadows and Lauder were retrieved from dSCDs using the Heidelberg profile retrieval algorithm (HEIPRO; Frieß et al., 2006). HEIPRO has previously been used for NO 2 and HONO gas profile retrievals at Broadmeadows (Ryan et al., 2018). In an initial step, aerosol profiles were determined from dSCDs of the O 4 dimer. These were used as input information on the light path for calculating air mass factors and HCHO vertical column density (VCD) in the second retrieval step. Vertical profiles were retrieved on a 20-layer grid with a 200 m resolution from 0 to 4 km; aerosol retrievals were calculated at 360.8 nm and HCHO retrievals were calculated at 338.9 nm. A priori profiles used in the inversion were chosen to be exponentially decreasing functions of altitude, characterized by a set surface mixing ratio and scale height, which were 0.5 ppb and 1 km respectively for formaldehyde. HEIPRO was run in 15 min intervals ensuring that each measurement set contained a full set of elevation angles. MAX-DOAS retrievals were filtered for results with less than one independent piece of information and for the presence of clouds. At Broadmeadows this was determined using an empirical algorithm based on colour indices (e.g. Gielen et al., 2014;Wagner et al., 2014;Wagner et al., 2016), also described in Ryan et al. (2018), and at Lauder it was determined using the SkyNet AOD (aerosol optical depth) flag which is calculated using the method outlined in Khatri and Takamura (2009). The errors associated with the MAX-DOAS retrieval include systematic errors, which derive primarily from the HCHO cross section uncertainty of around 9 % (Vigouroux et al., 2009). Random errors include model parameter uncertainty (such as uncertainty in a priori parameters), which is estimated to be 10 % following the methodology outlined in Ryan et al. (2018), along with retrieval noise and smoothing errors, which were calculated in HEIPRO. An example MAX-DOAS HCHO retrieval from HEIPRO is shown in Fig. 3, including the model-measurement comparison, retrieved and a priori profile and averaging kernels. These example averaging kernels at Broadmeadows show the highest sensitivity at the surface as well as 3.4 degrees of freedom (DoFs) for signal.
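As a small illustration of the a priori described above, the sketch below builds an exponentially decreasing HCHO profile on the stated 20-layer, 200 m grid (0.5 ppb at the surface, 1 km scale height). This is only a minimal sketch based on the stated parameters; the variable names are illustrative and it is not the HEIPRO code itself.

```python
import numpy as np

# Minimal sketch (not HEIPRO itself): exponentially decreasing HCHO a priori
# profile on a 20-layer, 200 m grid from 0 to 4 km, as described above.
layer_mid_km = np.arange(20) * 0.2 + 0.1   # layer mid-points: 0.1, 0.3, ..., 3.9 km
surface_vmr_ppb = 0.5                      # stated a priori surface mixing ratio
scale_height_km = 1.0                      # stated a priori scale height

a_priori_ppb = surface_vmr_ppb * np.exp(-layer_mid_km / scale_height_km)
print(np.round(a_priori_ppb, 3))
```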
The Lauder retrievals consistently have reduced surface sensitivity and lower DoFs compared with Melbourne, which is likely related to the lower amounts of formaldehyde at Lauder and the fact that 2° is the lowest possible elevation angle for MAX-DOAS at Lauder due to proximate mountain ranges. Across the whole measurement period, the average DoFs value was 2.25 ± 0.34 (1σ) at Broadmeadows and 1.27 ± 0.11 (1σ) at Lauder. Detection limits for the MAX-DOAS vertical column densities at Lauder and Broadmeadows have been estimated using the method outlined in Peters et al. (2012), in which R avg is the average residual RMS, XS max is the maximum value of the cross section (1.32 × 10 −19 for HCHO) and A is the air mass factor taken here as 15 for low elevation angles. R avg was 4.5 × 10 −4 at Broadmeadows, giving DL VCD (HCHO) as 4.9 × 10 14 molec. cm −2 . The average residual RMS was lower at Lauder, 2.9 × 10 −4 , giving a calculated detection limit of 3.2 × 10 14 molec. cm −2 . Over the whole measurement period, the average vertical column was 2.50 ± 0.61 × 10 15 molec. cm −2 at Lauder and 5.40 ± 1.59 × 10 15 molec. cm −2 at Broadmeadows, meaning that HCHO VCDs were generally above the detection limit but measurements at Lauder were closer to the detection limit than at Broadmeadows.
FTIR retrieval
Solar FTIR measurements have been made since the early 1990s at Lauder as part of the Network for Detection of Atmospheric Composition Change (NDACC; Jones et al., 1994;De Mazière et al., 2018). Measurements are made on all possible clear-sky days, throughout the day, using Bruker high-resolution (0.0035 cm −1 ) spectrometers (https://www.bruker.com/, last access: 10 June 2020). Initial retrievals of HCHO from the Lauder 1992-2005 FTIR dataset are described in detail in Jones et al. (2009). The HCHO retrieval strategy (under the auspices of the NDACC infrared working group) was harmonized across the network as detailed in Vigouroux et al. (2018). Reprocessing of the Lauder HCHO spectra was part of this harmonization activity, and the harmonized retrieval strategy is used to provide the HCHO data in this study. The same HCHO dataset is also used in a TROPOMI comparison study comprising globally distributed ground-based FTIR measurements (Vigouroux et al., 2020). These studies show that HCHO abundances over Lauder exhibit a seasonal cycle peaking in the summer (DJF, December-January-February). Pertinent to this study, and paraphrasing details in Vigouroux et al. (2018), the Lauder FTIR retrievals are performed on a 48-layer atmosphere (0.37-100 km) of which 15 layers are between 0.37 and 10 km. The retrievals use a static a priori originating from WACCM_v4 (Whole Atmosphere Community Climate Model, version 4) climate-chemistry model simulations (Garcia et al., 2007), and the retrievals are constrained using Tikhonov regularization (L1, α = 100). Combined with a measurement signal-to-noise ratio of 400, the retrieval strategy has sensitivity over the altitude range from 0.37 to 26 km with an average total column DoFs of 1.4 ± 0.2 (1σ). The highest sensitivity is in the upper troposphere, peaking at 8 km with a full width at half maximum of 16-18 km. This differs from the MAX-DOAS measurements, which have maximum sensitivity in the boundary layer. An example Lauder FTIR formaldehyde retrieval from 8 January 2018 is shown in Fig. 4.
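The detection-limit expression referred to above (before the FTIR retrieval subsection) is not reproduced in this text. A plausible reconstruction, assuming the common two-times-residual convention of Peters et al. (2012), is

\[ \mathrm{DL}_{\mathrm{VCD}} \approx \frac{2\,R_{\mathrm{avg}}}{XS_{\max}\,A} , \]

which, with the values quoted above, gives a few times 10 14 molec. cm −2 , broadly consistent with the stated detection limits; the exact form and prefactor should be checked against Peters et al. (2012).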
Attributed uncertainty analysis of the total column measurement gives an estimate of ≈ 2 % and ≈ 12 % for random and systematic error respectively. The systematic error is dominated by spectroscopic line strength uncertainty, whereas the major component of the random error is measurement noise.
Satellite details
The TROPOspheric Monitoring Instrument (TROPOMI) is a nadir-viewing imaging spectrometer aboard the European Space Agency's Copernicus Sentinel 5 Precursor (S5P) satellite. S5P launched in October 2017 and is a low (afternoon) polar orbit (≈ 824 km) mission providing daily global coverage for a range of UV, visible and infrared absorbing trace gases (Veefkind et al., 2012). The S5P overpass time is 13:30 LT (local time), and the spatial resolution of TROPOMI is 3.6 × 7.2 km (before 6 August 2019) and 3.6 × 5.6 km (after 6 August 2019). Formaldehyde slant column densities (SCDs) are retrieved from the analysis of absorption features over the wavelength range from 328.5 to 359 nm. The SCDs are converted to vertical columns using air mass factors calculated at 340 nm with HCHO a priori vertical profiles simulated by the TM5-MP global chemistry transport model, as described in De Smedt et al. (2018). For this study, TROPOMI data were regridded to 0.1 × 0.1° (approximately 10 × 10 km). The recommended quality control (QC) filtering was applied, excluding retrieved values where the QC flag was less than 0.5 (on a scale of 0-1), which ensures that scenes with a cloud radiance fraction (at 340 nm) > 0.5 are excluded from the comparisons. Given that the satellite overpass was around 13:30 LT, MAX-DOAS results between 13:00 and 14:00 LT were averaged for the comparisons. The Ozone Monitoring Instrument (OMI) is also a UV-Vis nadir-viewing spectrometer providing near-global daily coverage, housed on the National Aeronautics and Space Administration's Earth Observing System Aura satellite (Levelt et al., 2006). The spatial resolution of OMI is 13 × 24 km, and the overpass time is also around 13:30 LT. Formaldehyde slant columns retrieved from OMI using a wavelength range of 327.5-356.5 nm (González Abad et al., 2015) are used along with GEOS-Chem simulated a priori profiles to calculate HCHO vertical columns (Bey et al., 2001). For comparison with the Broadmeadows MAX-DOAS dataset, OMI HCHO columns were regridded to 0.25 × 0.25°, meaning that columns approximately 25 km either side of the measurement site were used, and as with TROPOMI, cloudy scenes were excluded from the comparison.
Lauder vs. Melbourne HCHO
The time series of monthly formaldehyde vertical columns from Broadmeadows and Lauder MAX-DOAS measurements are presented in Fig. 5a. Following the example of Jones et al. (2009), the seasonal cycle of formaldehyde was fitted with a cosine function, in which C(t) is the formaldehyde vertical column as a function of time (in units of days since 1 January 2016), φ is the phase term (in units of day of the year) and K = 2π/365. Also fitted in the linear regression are a 2 (amplitude of the seasonal cycle), a 0 (the initial mean column amount) and a 1 (the magnitude of the linear trend in HCHO over time). At Lauder, the mean HCHO VCD was 2.5 × 10 15 molec. cm −2 , and the amplitude of the fitted seasonal cycle was 6.9 × 10 14 molec. cm −2 ; at Broadmeadows the average HCHO VCD was 5.4 × 10 15 molec. cm −2 , and the amplitude of the fitted seasonal cycle was 2.0 × 10 15 molec. cm −2 .
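The cosine fit referred to above is not reproduced in this text. A plausible form, consistent with the parameter definitions given above and with the approach of Jones et al. (2009), is

\[ C(t) = a_0 + a_1\,t + a_2 \cos\big(K\,(t - \phi)\big), \qquad K = 2\pi/365 , \]

with C(t) the formaldehyde vertical column, a 0 the mean column amount, a 1 the linear trend, a 2 the seasonal amplitude and φ the phase; this is a reconstruction from those definitions rather than a quotation of the original equation.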
A comparison of results from Broadmeadows and Lauder, including a breakdown of uncertainty components, is provided in Table 1. The HCHO seasonal cycle from Lauder MAX-DOAS measurements is consistent with that found from FTIR measurements at Lauder from July 2002 to July 2017 (Vigouroux et al., 2018). The fact that both the magnitude of the HCHO VCDs and the amplitude of the seasonal cycle are much smaller at Lauder than at Broadmeadows could be due to higher anthropogenic VOC precursors, as Melbourne is a large city, and/or due to higher biogenic VOC emissions from forests surrounding Melbourne. The seasonal cycle of formaldehyde shows a distinct austral summer peak in both locations. This would be expected from the biogenic production of formaldehyde (e.g. from isoprene), which depends strongly on temperature (Duncan et al., 2009;Palmer et al., 2006;Zhu et al., 2014). The phase of the cosine fit in each location is 31 d, indicating that the HCHO seasonal cycle peaks at the end of January. This is also consistent with the results for Lauder in Vigouroux et al. (2018) and suggests that the same background mechanisms may be responsible for summertime HCHO production at Lauder and Broadmeadows. Polar bivariate plots showing the relationship between formaldehyde and wind direction and speed at Broadmeadows and Lauder are given in Fig. 5b and c respectively. At Broadmeadows, HCHO concentrations are highest with wind from the northern and eastern sectors, aligning with the direction of rural and densely forested regions, suggesting an important role for biogenic HCHO sources at this location. The dominant source directions from forested and rural regions, along with the summertime peak, are also consistent with biomass burning being a source of formaldehyde in Melbourne. At Lauder, maximum column amounts correspond to moderate wind speeds from the east. While over the course of the MAX-DOAS dataset the wind came from this direction less than 10 % of the time, the same key source directions including the strong "easterly maximum" are observed in polar bivariate plots of the 2001-2019 FTIR dataset (not shown). There is a large variation in vegetation types across New Zealand's South Island, including temperate rainforest in the west, dryland agriculture in the Central Otago region, and intensive irrigated pasture in much of the east, south and south-east, which might be expected to produce different volatile organic emissions and formaldehyde amounts. The highest population density in the South Island, including the cities of Dunedin and Christchurch, lies along the east coast. Given that the lifetime of formaldehyde is of the order of hours, transport of the order of a hundred kilometres is possible, meaning that the different source directions can reasonably be compared. Based on the available evidence, it could be hypothesized that the agricultural and more densely populated eastern sector is a stronger source of formaldehyde to Lauder than the forested west coast.
MAX-DOAS vs. FTIR at Lauder
One previous study, carried out on the tropical Reunion Island, highlights a comparison between MAX-DOAS and FTIR formaldehyde columns (Vigouroux et al., 2009). In that paper, the comparison period was 4 months. In this work, co-located measurements over a period of 27 months are compared, from November 2016 to January 2019, allowing for the comparison of HCHO over two annual cycles. The comparison method used here has been adapted from Vigouroux et al. (2009) and Rodgers and Connor (2003).
Partial column amounts have been compared in the lowest 4 km of the atmosphere, which is the region of expected formaldehyde production and the region of highest sensitivity for MAX-DOAS measurements. Because the FTIR instrument is less sensitive to the HCHO partial column in the lowest 4 km (as is evident from the averaging kernels in Figs. 3a and 4), the MAX-DOAS partial columns have been smoothed by the FTIR total averaging kernel using the method outlined in Vigouroux et al. (2009). As in Vigouroux et al. (2009), the equation for the smoothing is simplified by the fact that the same a priori profile was used to retrieve MAX-DOAS and FTIR profiles, allowing the smoothed DOAS column to be given by the following equation: where A F is the FTIR total column averaging kernel matrix (from 0 to 4 km), which is unitless (calculated as mixing ratio/mixing ratio); C a is the common a priori column amount; x D is the original retrieved MAX-DOAS profile; x a is the common a priori profile; and C DOAS,smooth is the smoothed MAX-DOAS column amount. Only columns between 08:00 and 18:00 LT contributed to the monthly averages examined here. The time series of monthly averaged results is presented in Fig. 6a, showing that both measurements capture the same broad seasonal cycle at Lauder and that monthly average columns for both measurements were clearly above the calculated MAX-DOAS detection limit. The month-to-month variation in formaldehyde is in especially good temporal agreement for summer (DJF) 2017-2018, whereas both the timing and magnitude of HCHO in summer 2016-2017 and 2018-2019 are poorly replicated by the FTIR. Due to the higher sensitivity of the MAX-DOAS to the lower troposphere, this suggests that HCHO plumes were lower in 2016-2017 and 2018-2019; therefore, they were not captured as well by the FTIR in 2016-2017 and 2018-2019 as they were in the summer of 2017-2018. There is a clear offset between the MAX-DOAS and FTIR columns, with the FTIR consistently lower across the comparison period. Comparing the measurements by linear (Deming method, incorporating errors in both the x and y ordinates), the offset is found to be 2.92×10 15 molec. cm −2 and almost constant, as indicated by the regression slope (1.17, see Fig. 6b). The time series also shows that smoothing the DOAS partial columns brought them more into line with the FTIR columns, especially in the peak months (November-March). The R 2 value of 0.65 (n = 27) for the regression in Fig. 6b highlights the moderate temporal agreement. Considering daily averages, a slope of 1.31 and an R 2 of 0.42 (n = 810) were found, whereas the slope of the Deming regression was 1.19 with an R 2 = 0.47 (n = 116) for weekly averages. The weekly and daily average time series and scatter plots are shown in Fig. A1 in Appendix A. The differences and errors on the differences between MAX-DOAS and FTIR columns were calculated for the smoothed and original MAX-DOAS columns following the method outlined in Vigouroux et al. (2009). For the raw MAX-DOAS columns, the difference (MAX-DOAS -FTIR, ±1σ ) was 15.1 ± 26.3 %, whereas it was 10.1 ± 26.1 % for the smoothed comparison. These results and the breakdown of random and systematic errors on the differences are compiled in Table 2. The differences and standard deviations of the column comparisons are slightly larger here than for the results found in the Reunion Island comparison (Vigouroux et al., 2009), where no significant offset between measurements was observed. 
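For reference, the smoothing equation referred to near the start of this comparison is not reproduced in this text; the standard Rodgers and Connor (2003) form implied by the variable definitions given there is

\[ C_{\mathrm{DOAS,smooth}} = C_a + \mathbf{A}_F\,(\mathbf{x}_D - \mathbf{x}_a) , \]

where A F is the FTIR total column averaging kernel (0-4 km), C a the common a priori column amount, x D the retrieved MAX-DOAS profile and x a the common a priori profile. This is a reconstruction from those definitions rather than a quotation of the original equation.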
In contrast to their study, the smoothing was found to improve the mean difference between the columns in this work. The greater mean difference and standard deviations of the differences at Lauder compared with Vigouroux et al. (2009) likely reflect the much longer comparison period, incorporating variations across a much wider range of atmospheric conditions, and the fact that only the altitude range of 0-4 km is examined in this work rather than the 0-10 km range used in Vigouroux et al. (2009). In addition, differences in site characteristics may play a role in the greater offset observed at Lauder. Reunion Island, being a coastal site, is likely to be measuring marine background formaldehyde, as indicated by the fact that the 2007 measurements in Vigouroux et al. (2009) rarely exceeded 7.7 × 10 15 molec. cm −2 , with little local surface HCHO production. In comparison, the mean smoothed DOAS column across the 27-month comparison period was 7.7 × 10 15 molec. cm −2 , suggesting greater local production, which will occur at the surface where the MAX-DOAS sensitivity is greatest and the FTIR least sensitive.
MAX-DOAS vs. TROPOMI
In this section, MAX-DOAS formaldehyde columns are compared with satellite results. Firstly, Lauder HCHO MAX-DOAS columns are examined alongside results from TROPOMI. Following the example of MAX-DOAS vs. satellite formaldehyde comparisons in Hoque et al. (2018b) and De Smedt et al. (2015), vertical columns are compared rather than profiles. TROPOMI reports an uncertainty on the column amount; however, it was found that this uncertainty was highly correlated with the magnitude of the column amount. Therefore, we estimated the uncertainty on the satellite column retrievals from the number of retrievals contributing to the averaged column in the 0.1 × 0.1° grid box (number per cell, N pc ) and the standard deviation of those retrievals (SD T ). More measurements were available from TROPOMI over Broadmeadows than at Lauder, with an average N pc across the comparison period, considering TROPOMI pixels 0.1° either side of the ground-based station, of 1.18 in New Zealand and 2.76 in Melbourne. Because N pc was often below one for a 0.1° resolution, comparison with MAX-DOAS results was carried out at a 0.2° resolution. The final compared results filtered out pixels with N pc < 1, giving an average N pc of 1.84 for Lauder and 2.94 for Broadmeadows. The discrepancy in N pc could be due to more cloud over New Zealand than Victoria, or because HCHO columns over Lauder are low enough to be approaching the detection limit. TROPOMI results showed greater spatial variation over New Zealand than Victoria, as illustrated in the example map in Fig. 7a. This is reflected in the standard deviation (SD T ) of HCHO retrievals contributing to the Lauder and Broadmeadows average TROPOMI columns: the mean ±SD T was 1.66 × 10 15 ± 1.50 × 10 15 and 7.53 × 10 15 ± 1.10 × 10 15 molec. cm −2 for Lauder and Broadmeadows respectively. Overall, these factors combined to give a high mean percentage variance for Lauder TROPOMI columns of 129 %, whereas the mean percentage variance was only 9.7 % for Broadmeadows. Nevertheless, the average summer (DJF) 2018-2019 TROPOMI retrieval map for the central New Zealand South Island, shown in Fig. 7b, supports the conclusion (from the MAX-DOAS measurements) that the highest formaldehyde amounts are in the agricultural and more densely populated eastern parts of the island.
There are no standout HCHO hotspots in the thickly forested west coast or south-western Fiordland regions. The New Zealand Alps are highlighted in this figure by the lack of formaldehyde, possibly due to minimal vegetation in this region and because the satellite retrieval will not work over areas of high albedo (i.e. snow). The inference that formaldehyde is close to background levels is supported by the fact that the average summer column amounts over the Tasman Sea and Pacific Ocean off the coast of the South Island appear similar to those over land. In comparison, the average summer 2018-2019 map from Victoria highlights some clear features -especially high formaldehyde levels over the densely forested regions in the east of the state. The irrigated agricultural land north of Melbourne stands out compared with the drier grazing country in the west and north-west; these areas highlighted by TROPOMI correspond to the directions of highest measured HCHO at Broadmeadows in Fig. 5b. Formaldehyde columns from TROPOMI and MAX-DOAS at Broadmeadows and Lauder were compared over the course of 18 months (May 2018-November 2019). For the comparison, TROPOMI results (columns and associated a priori profiles and averaging kernels) were averaged 0.2 • either side of the Broadmeadows and Lauder MAX-DOAS locations. MAX-DOAS columns (along with averaging kernels) were averaged between 13:00 and 14:00 LT, around the time of the TROPOMI overpass. TROPOMI vertical profiles are not available for download; hence, in order to accurately compare tropospheric columns across the same altitude range, the MAX-DOAS retrievals for this comparison were run to 10 km rather than 4 km as in the FTIR-MAX-DOAS comparison in Sect. 3.2. For direct comparison of TROPOMI and MAX-DOAS formaldehyde vertical columns, accounting for the different instrumental a priori profiles and vertical sensitivities, the method outlined in Vigouroux et al. (2020) for comparing TROPOMI with FTIR was adapted. Firstly, to account for the fact that the two retrieval methods use different a priori profiles, the following equation was used to produce an adjusted MAX-DOAS profile x D : where x D is the original MAX-DOAS profile, A M is the MAX-DOAS averaging kernel matrix, I is the identity matrix, x D,a is the MAX-DOAS a priori profile and x T,a is the TROPOMI a priori profile expressed on the MAX-DOAS altitude grid. The integrated adjusted column gave an adjusted MAX-DOAS HCHO tropospheric column, which was then smoothed using the TROPOMI averaging kernels (expressed on the MAX-DOAS altitude grid) using the same method as for smoothing the FTIR columns in Sect. 3.2 (Rodgers and Connor, 2003): where C D,smooth is the smoothed MAX-DOAS tropospheric column, C T,a is the TROPOMI a priori tropospheric column and a T is the TROPOMI column total averaging kernel. The monthly average time series of HCHO tropospheric columns at Broadmeadows measured by MAX-DOAS and TROPOMI is shown in Fig. 8a. The seasonal variation in formaldehyde with its strong summer peak is clearly captured by TROPOMI, with all MAX-DOAS and TROPOMI data points above the calculated MAX-DOAS detection limit. The original MAX-DOAS retrieved columns agree well with the magnitude of the TROPOMI observations between October 2018 and June 2019, including over the summer peak, but they are greater than TROPOMI outside of these months. 
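The two equations referred to above (the a priori adjustment and the averaging-kernel smoothing) are not reproduced in this text. Writing the adjusted MAX-DOAS profile as x′ D to distinguish it from the original profile x D , a plausible reconstruction following Rodgers and Connor (2003) and Vigouroux et al. (2020) is

\[ \mathbf{x}'_D = \mathbf{x}_D + (\mathbf{I} - \mathbf{A}_M)\,(\mathbf{x}_{T,a} - \mathbf{x}_{D,a}) , \]
\[ C_{D,\mathrm{smooth}} = C_{T,a} + \mathbf{a}_T\,(\mathbf{x}'_D - \mathbf{x}_{T,a}) , \]

with the symbols as defined above; these are reconstructions from the stated definitions rather than quotations of the original equations.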
The MAX-DOAS columns adjusted for a priori differences and convolved with TROPOMI averaging kernels agree well with TROPOMI, within uncertainty, for all months except the height of the summer peak in January-February 2019. This discrepancy during times of peak HCHO production in the boundary layer highlights the much greater sensitivity of the MAX-DOAS to the lower atmosphere than TROPOMI. The average difference between TROPOMI and the smoothed and raw MAX-DOAS columns, along with the breakdown of random and systematic errors on the differences (calculated following the methodology outlined in Vigouroux et al., 2009) is presented in Table 2. Smoothed MAX-DOAS columns were on average 5 % higher than TROPOMI; however, for individual measurements, the difference was highly variable (standard deviation 94 %). This small average bias towards MAX-DOAS is consistent with the bias found between ground-based FTIR stations and TROPOMI for locations with comparable average HCHO column amounts in Vigouroux et al. (2020). Figure 8b shows the same as Fig. 8a but for Lauder. As for Broadmeadows, the broad seasonal variation is captured by TROPOMI, and all data points are above the calculated MAX-DOAS detection limit, although TROPOMI error bars are greater than at Broadmeadows and often extend below the MAX-DOAS detection limit, due to the lower number of available TROPOMI retrievals over Lauder. The convolved MAX-DOAS HCHO columns compare well within error for a majority of months. On average, TROPOMI was 29 % lower than MAX-DOAS raw columns and 22 % higher than smoothed MAX-DOAS columns; however, the smoothing process accentuated the largest differences resulting in a standard deviation for the smoothed comparison greater than 100 %. The average bias found for Lauder MAX-DOAS vs. TROPOMI is consistent within the uncertainty with the negative bias for TROPOMI vs. FTIR for Lauder in Vigouroux et al. (2020). The agreement between TROPOMI and MAX-DOAS is further examined using linear Deming regression analysis in Fig. 9. For Lauder, Fig. 9b shows the monthly average scatter plot with overall regression slope of 0.73 and R 2 = 0.61 (n = 18). The majority of data points lie within error of the 1 : 1 line. The regression values for the daily measurements at Lauder were slope = 0.40 and R 2 = 0.22 (n = 510), whereas weekly averages gave a slope of 0.66 and R 2 of 0.45 (n = 73). The resolution selection criterion did not have a large effect on the comparison, with a regression slope of 0.68 (monthly averages) for averaging TROPOMI 50 km either side of Lauder as opposed to 20 km. At Broadmeadows, data points lie along the 1 : 1 line within error except for the highest two values, which are January and February 2019 as highlighted in the time series, giving a regression slope of 0.61. This further highlights the finding, in line with Vigouroux et al. (2020), that the low bias of TROPOMI compared with ground-based measurements is accentuated at high HCHO levels. The very strong temporal consistency is highlighted by an R 2 of 0.95 (n = 18). Considering the individual daily measurements at Broadmeadows, the slope of the regression was 0.77 with R 2 = 0.69 (n = 506), whereas the slope was 0.66 with R 2 = 0.89 (n = 73) for weekly averages (plots for Lauder and Broadmeadows daily measurements and weekly averages are shown in Figs. A2 and A3 in Appendix A). 
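The regression statistics above are based on Deming fits that account for errors in both coordinates. Below is a minimal sketch of how such a fit could be reproduced with orthogonal distance regression (scipy.odr), which for a linear model behaves like a Deming-type fit; the arrays are placeholder values, not the measurements from this study.

```python
import numpy as np
from scipy import odr

# Placeholder monthly means (10^15 molec. cm^-2) and uncertainties;
# illustrative values only, not the data from this study.
maxdoas = np.array([2.1, 3.0, 4.5, 6.2, 5.1, 3.4])
tropomi = np.array([1.6, 2.3, 3.1, 4.0, 3.6, 2.5])
maxdoas_err = 0.2 * maxdoas
tropomi_err = 0.3 * tropomi

def linear(beta, x):
    # beta[0] = slope, beta[1] = intercept
    return beta[0] * x + beta[1]

data = odr.RealData(maxdoas, tropomi, sx=maxdoas_err, sy=tropomi_err)
fit = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0]).run()
slope, intercept = fit.beta
r2 = np.corrcoef(maxdoas, tropomi)[0, 1] ** 2
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, R^2 = {r2:.2f}")
```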
Considering TROPOMI sampled 10 and 50 km either side of Broadmeadows, regression slopes were 0.56 and 0.65 respectively, with the low bias of TROPOMI compared with MAX-DOAS at high HCHO consistent across sampling resolution. The success of this comparison study for formaldehyde with TROPOMI, especially at Broadmeadows, is highlighted by a comparison (2017-2019) at the same Broadmeadows location between OMI and the MAX-DOAS. As shown in Fig. A4, OMI does not clearly capture any of the seasonal formaldehyde variation in Melbourne; as such, it fails to replicate the MAX-DOAS values. The error bars shown in this figure are the quoted uncertainty on the OMI columns, and they represent 67 % of the total column on average, perhaps due to the poorer resolution of OMI compared with TROPOMI, making observation of the seasonal cycle difficult in these data. Monthly OMI HCHO columns are on average 200 % higher than the MAX-DOAS (see Table A1 in Appendix A), which is far greater than any discrepancy reported in the literature for a MAX-DOAS vs. satellite retrieval. One possibility for the disparity is the fact that OMI is sampled 25 km either side of the measurement location compared with approximately 20 km for MAX-DOAS, thereby taking in more of the background. However, this could not explain why no seasonality is evident in the OMI results. Given that both OMI and TROPOMI retrievals rely on a priori formaldehyde profiles calculated using the same chemical transport model (TM5, De Smedt et al., 2018), a priori differences cannot explain the difference in the comparison. However, previous studies (e.g. De Smedt et al., 2015;Wang et al., 2017a) found that agreement between OMI and MAX-DOAS measurements improved when using the MAX-DOAS a priori profiles to retrieve satellite columns; it would be interesting in future work to do the same for HCHO satellite-based retrievals over Australasia. Examining the influence of a priori profiles calculated by chemical transport models on formaldehyde retrievals is also of particular interest in south-eastern Australia given that biogenic VOC emissions have been shown to be poorly simulated in this region (Emmerson et al., 2016, 2018).
Conclusions
This paper presents comparison studies of MAX-DOAS formaldehyde measurements in two distinctly different environments: the remote Central Otago region in New Zealand and the suburban fringe area of Broadmeadows in Victoria. This work is the first long-term comparison and validation study undertaken using MAX-DOAS measurements in the Southern Hemisphere. For MAX-DOAS measurements between December 2016 and November 2019, the mean formaldehyde column measured by the MAX-DOAS at Broadmeadows was 5.40 ± 1.59 × 10 15 molec. cm −2 compared with 2.50 ± 0.61 × 10 15 molec. cm −2 at Lauder. The amplitude of the seasonal cycle was also greater at Broadmeadows than at Lauder: 2.0 × 10 15 molec. cm −2 compared with 0.7 × 10 15 molec. cm −2 . The seasonal cycles at Lauder and Broadmeadows could be described by a periodic function peaking at the end of January, i.e. at the height of the austral summer, consistent with biogenic temperature-dependent formaldehyde production. At Lauder, 27 months of MAX-DOAS measurements were compared with FTIR formaldehyde partial columns between 0 and 4 km. Smoothing of the MAX-DOAS columns using the FTIR averaging kernels, to account for the different vertical sensitivities, was carried out according to the methodology outlined in Rodgers and Connor (2003) and Vigouroux et al. (2009).
The seasonal cycle of formaldehyde at Lauder, with a pronounced summer peak, was clearly replicated by both sets of observations, and the smoothed MAX-DOAS columns correlated more strongly with the FTIR results than the original columns did. The timing of the HCHO seasonal cycle peak was very similar between Broadmeadows and Lauder, suggesting similar HCHO sources; however, the source strength at Lauder seems to be weaker, with a lower seasonal cycle amplitude. In the first TROPOMI-MAX-DOAS comparison study in the Southern Hemisphere, TROPOMI performed especially well compared to the Broadmeadows monthly average columns in terms of temporal variation and magnitude (R 2 = 0.95, slope = 0.61). This result is a significant improvement over the comparison with OMI, both at this location and in previous literature reports. Higher spatial variability and lower absolute amounts of HCHO made the comparison more difficult at Lauder; however, the linear regression analysis also indicated moderate temporal agreement in most months of the comparison (R 2 = 0.61, slope = 0.73). Using maps of average TROPOMI HCHO retrievals, this study also demonstrates the utility of the satellite product to identify hotspot regions of biogenic VOCs, which will be a critical tool in addressing the current gap in the understanding of isoprene and monoterpene chemistry in south-eastern Australia. This TROPOMI comparison study, especially over Melbourne, raises many exciting possibilities for future work. This study shows the importance of long-term time series MAX-DOAS measurements for satellite validation, and it could contribute to international validation efforts. This research could also be extended to consider not only formaldehyde validation but also NO 2 , HONO and glyoxal. This would continue to address the lack of Southern Hemisphere satellite validation studies using ground-based remote sensing. This work also shows the utility of the MAX-DOAS technique for studying formaldehyde in the VOC hotspot of south-eastern Australia, and it would be interesting in future studies to deploy MAX-DOAS instruments into the forested areas highlighted in TROPOMI as large formaldehyde source regions. Moreover, this work has shown that improvements in satellite technology, culminating (at this point in time) in TROPOMI, mean that space-based HCHO measurements will also be of great benefit in constraining the temporal and spatial distribution of VOC emissions in this region. With such assurance, related tropospheric oxidation and ozone chemistry, with their associated air quality and climate implications, can be studied on a much grander scale.
Appendix A
Table A1. Results from this and previous literature studies comparing formaldehyde vertical columns from MAX-DOAS and satellite retrievals. Note that "Diff." represents MAX-DOAS − satellite. Slope is the gradient (m) of the linear regression for Satellite = m × MAX-DOAS + C.
2020-07-09T09:04:31.952Z
2020-07-03T00:00:00.000
{ "year": 2020, "sha1": "6fed301c1c8313d1bf6e952abe27654533ae0ff8", "oa_license": "CCBY", "oa_url": "https://amt.copernicus.org/articles/13/6501/2020/amt-13-6501-2020.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e800cbcec7e75b03bb93a639d80a9b9d34961b78", "s2fieldsofstudy": [ "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Environmental Science" ] }
25318047
pes2o/s2orc
v3-fos-license
Recurrence risk of ictal asystole in epilepsy Objective: To determine the recurrence risk of ictal asystole (IA) and its determining factors in people with epilepsy. Methods: We performed a systematic review of published cases with IA in 3 databases and additionally searched our local database for patients with multiple seizures simultaneously recorded with ECG and EEG and at least one IA. IA recurrence risk was estimated by including all seizures without knowledge of the chronological order. Various clinical features were assessed by an individual patient data meta-analysis. A random mixed effect logistic regression model was applied to estimate the average recurrence risk of IA. Plausibility of the calculated IA recurrence risk was checked by analyzing the local dataset with available information in chronological order. Results: Eighty patients with 182 IA in 537 seizures were included. Recurrence risk of IA amounted to 40% (95% confidence interval [CI] 32%–50%). None of the clinical factors (age, sex, type and duration of epilepsy, hemispheric lateralization, duration of IA per patient) appeared to have a significant effect on the short-term recurrence risk of IA. When considering the local dataset only, IA recurrence risk was estimated to 30% (95% CI 14%–53%). Information whether IA coincided with symptoms (i.e., syncope) or not was given in 60 patients: 100 out of 142 IAs were symptomatic. Conclusion: Our data suggest that in case of clinically suspected IA, the recording of 1 or 2 seizures is not sufficient to rule out IA. Furthermore, the high short-term recurrence risk favors aggressive treatment, including pacemaker implantation if seizure freedom cannot be achieved. Recurrent ictal asystole Are we doing enough to prevent and treat it? Mortality rates are considerably higher in people with epilepsy. 1,2 Sudden unexpected death in epilepsy (SUDEP) is the most frightening consequence of uncontrolled epileptic seizures. 3 The underlying pathophysiology of SUDEP is unclear; however, the best-documented mechanism is the effect of seizures on the heart, either directly or secondary to apnea. 3 Since increased frequency of generalized tonic-clonic seizures is the most important risk factor for SUDEP, the current strategy to prevent SUDEP is to achieve seizure control. 4 Although cardiac rhythm abnormalities occur in people with epilepsy, they are underrecognized and often missed, 1 because a small proportion of patients have video-EEG monitoring, and even then, the simultaneous ECG recording is not being evaluated with enough detail. Studies with long-term implantable heart rhythm monitors (implantable loop recorder) in epilepsy patients suggest that the frequency of cardiac rhythm abnormalities might be higher than usually reported. 5 Implantable ECG loop recorder in a group of 16 patients showed that 4 of them (21%) had bradycardia or periods of asystole; 3 of these had potentially fatal ictal asystole (IA), lasting up to 14 seconds, and underwent permanent pacemaker placement. 5 IA is hypothesized as one of the several potential mechanisms of SUDEP. 6 In the SUDEP era, it is a challenge to detect IA and to decide the best approach to treat and prevent this problem. We need more data to define when, why, and how to treat IA. In this issue of Neurology ® , Hampel et al. 7 performed a systematic review of published cases with IA and added data from the authors' local database to perform the analysis. 
IA was defined as an RR interval longer than 3 seconds and at least twice as long as the previous RR interval. It is important to highlight the fact that this study reports mainly self-limiting IA and does not focus on SUDEP, although the authors included a near-SUDEP case with asystole lasting 96 seconds and successful resuscitation. Another study described the clinical features associated with IA, and suggested that a sudden collapse in the midst of a seizure, especially in patients with temporal lobe epilepsy, might be an indication of possible IA. 8 However, there is a lack of data about the frequency of IA recurrence and how many recorded seizures are needed to rule out IA once it is clinically suspected. Hampel et al. 7 reviewed a good number of IA (182 IA from 80 patients) and estimated a short-term recurrence risk of IA amounting to 40.4%. They also showed that IA does not occur during every seizure and may go unnoticed during short-term monitoring. Thus, for patients in whom IA is suspected, recording 1 or 2 seizures might not be sufficient to rule out IA, 7 making it unrealistic to investigate IA with video-EEG monitoring in clinical practice. Perhaps the best alternative is the use of long-term implantable heart rhythm devices for determining the indication for a permanent cardiac pacemaker. However, there is limited information to guide management of IA, which is further complicated by the controversy about the effectiveness of cardiac pacemakers in this condition. 5 Although invasive, the implantation of a cardiac pacemaker should be considered in patients with recurrent episodes of IA and ictal syncope, in particular in those with pharmacoresistant seizures who are not good candidates for epilepsy surgery. 6,9 Cardiac pacemakers reduce falls and injuries due to seizure-induced syncope 9 ; however, whether their use would prevent SUDEP is unclear. Another systematic review on cardiac arrhythmias related to seizures showed that all IA (n = 103) were self-limiting (except for one labeled as a near-SUDEP case), while postictal asystoles (n = 13) were more associated with (near-) SUDEP. 9 That study suggested that, since no fatal cases of IA had been reported in the 103 cases reviewed, IA should not be considered a SUDEP pathomechanism. Thus, more prospective studies are needed to confirm this hypothesis. Considering the high risk of short-term recurrence of IA reported by Hampel et al., 7 aggressive treatment including cardiac pacemaker implantation should be considered more often. None of the clinical factors (age, sex, type and duration of epilepsy, hemispheric lateralization, and duration of IA) appeared to influence the short-term recurrence risk of IA. 7 However, clinical data and the definition of IA were not always available. The lack of available data was an important limitation of this study. This systematic review highlights the importance of this treatable and underestimated problem, calling attention to the fact that there are not enough data to guide our therapeutic management. Prospective and collaborative studies, with more homogeneous data and detailed information, would help us address this serious issue. It would be interesting to try to follow these 80 patients to evaluate their long-term outcome, in particular for tracking cases of SUDEP. This also could be an excellent opportunity to search for biomarkers, including genetic polymorphisms that could, for example, predict the occurrence of IA or other epilepsy-related cardiac events.
STUDY FUNDING No targeted funding reported. DISCLOSURE Dr. Morita has received fees as a speaker or consultant from UCB Pharma. Dr. Cendes serves on the editorial boards of Neurology, Epilepsia, Epilepsy Research, and Epileptic Disorders; receives research support from Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and Conselho Nacional de Pesquisa (CNPq) Brazil; and acted as consultant to UCB Pharma. Go to Neurology.org for full disclosures.
2017-08-28T14:13:41.615Z
2017-08-22T00:00:00.000
{ "year": 2017, "sha1": "9f6141ecde25732b7e40afbfbe61a3a2bb7d8acd", "oa_license": "CCBYNCND", "oa_url": "https://n.neurology.org/content/neurology/89/8/785.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "0252acdc38d62b483cb4a9182ef897daeb5c04bf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
251866784
pes2o/s2orc
v3-fos-license
WITHDRAWN: A review of the role of graphene-based nanomaterials in tackling challenges posed by the COVID-19 pandemic
This article has been withdrawn at the request of the author(s) and/or editor. The Publisher apologizes for any inconvenience this may cause. The full Elsevier Policy on Article Withdrawal can be found at https://www.elsevier.com/about/ourbusiness/policies/article-withdrawal.
The COVID-19 infection caused by the novel SARS-CoV-2 virus was first reported in Wuhan city, China [1][2][3]. The COVID-19 disease has affected the human population globally in different ways [4][5][6]. Unfortunately, the virus spreads from one person to another, or from place to place, through respiratory droplets produced by physical contact, sneezing, and coughing. These droplets come in a range of sizes and are, in general, smaller than 5 µm. It is important to note that droplets larger than 5 µm cannot travel long distances and settle within 1 m to 2 m under gravity [7], while smaller droplets remain floating in the air for long periods of time, which increases the spread of the virus. Globally, the spread of COVID-19 has resulted in rising contagion and deaths, with infections arriving in waves [8]. Governments face great challenges in meeting the high demands of protecting both the economy and the healthcare system. Furthermore, significant efforts have been devoted by researchers around the world to developing suitable and effective strategies to reduce hospitalization, contain the pandemic and develop a variety of vaccines [9]. At the initial stage, surgical masks were employed to provide a physical barrier and prevent exposure to respiratory droplets. In addition, several safety measures were taken to disinfect and decontaminate people and public places, namely offices, shopping centres, airports, parks, and hospitals, and various types of sanitization solutions and systems were developed and implemented accordingly. In the past years, two-dimensional materials such as graphene and its related materials have attracted significant attention due to their remarkable physical and chemical properties, which render them powerful components for various applications. Graphene-related materials comprise various two-dimensional carbon materials, each with a specific structure and nomenclature [10,11]. As illustrated in Figure 1, graphene, GO, and rGO are the most promising of the different graphene-related materials [10]. For instance, graphene consists of a honeycomb sp2 carbon lattice, which gives this material superior strength and flexibility, excellent electrical conductivity, and high lipophilicity. GO is an oxidized form of graphene containing numerous oxygenated groups and some sp3 carbons, which make this material highly hydrophilic and water dispersible, although with a decrease in mechanical and electrical performance [11]. GO can be further reduced to form reduced graphene oxide (rGO), which partially restores the electrical conductivity while retaining the water dispersibility to a great extent. Interestingly, graphene-based materials exhibit both antiviral and antimicrobial efficacy. For instance, among all graphene derivatives, several studies have revealed that GO has the highest negative charge and also has a high affinity for viruses that are positively charged [7].
In other words, via hydrogen bonding and electrostatic interaction, the lipid bi-layer of feline COVID-19 can be adsorbed on the GO surface. Furthermore, the viral membrane that were destroyed by the binding of the GO confirmed the efficiency of GO against the viruses. In short, the versatility of the graphene related materials makes this material to be a promising candidate in a wide variety of applications including different device prototypes. Therefore, this review article critically discusses the importance of graphene-based materials for diagnosis, detection, decontamination, and protection against COVID-19 followed by the most relevant applications of graphene-based materials that can be foreseen to battle viral pandemics. The key challenges and future directives for fundamental design and development of technologies based on graphene and its related materials are discussed and lastly, our personal opinions on the appropriate approaches to improve these technologies. In the near future, we strongly believe that the graphene-based materials along with their fascinating properties will pave the way to fight the fatal of SARS-CoV-2. Interaction of graphene and its derivatives-based materials with contagious virus The interaction of graphene and viruses constitutes a distinct addition to the potential successful applications of COVID-19, where Di et al. prepared graphene and antibodycoupled panels respectively that can rapidly detect target virus proteins and thus can be a very useful population screening group [12]. In 2012, the effect of graphene on viruses was reported, whereby it revealed that graphene had a good ability to inhibit viruses. In addition, rGO is a material with very thin sheets and a very large surface area with high conductivity. Akhavan et al. synthesized GO via the modified Hammer method, giving hydrothermal reduction of GO and rGO by exploiting thin particles of tungsten oxide with rGO for photoactivation of stamens under visible light irradiation [13]. Derivatives of graphene have been used in the pharmaceutical industry to obtain antiviral compounds such as grapheneconjugated reverse transcriptase inhibitors for the treatment of HIV and in the treatment against COVID-19 [14][15][16]. The two-dimensional graphene material has remarkable electronic properties and promising applications, including bacterial infection control or detection. Graphene also interacts with light: a single layer of graphene can absorb 2.3% of incident visible light [17]. This property is used to generate heat and sterilize materials, since GO contains oxygen groups, its surface is similar to rGO and graphene. Surface oxygen has the advantage of providing reaction sites for uptake or activation by proteins, enzymes, and nucleic acids, through selective chemical activation, due to the presence of surface oxygen, and GO can also participate in the fight against SARS-CoV-2. Graphene-based technology can be used to treat the COVID-19 disease that works both outside and inside the cells of host cells. Mohamed et al. explored GO, the most widely used two-dimensional material in biomedical applications, as a nano-platform for interaction with SARS-CoV-2 [18]. The molecular docking analyses of GO sheets were performed upon interaction with three different structures: SARS-CoV-2 GO viral spike. 
Here, the results exhibited a high affinity for the surface of all three structures (6M0J, 6VYB and 6VXX); when the binding affinities and the respective bonding types were compared, GO interacted more strongly with the spike or ACE2 than with 6M0J. Infection experiments were conducted using four types of infectious virus particles for validation purposes. The results showed that the thin GO sheets were able to significantly reduce the transcription of three different viral strains. Moreover, the results showed the ability of GO sheets to interact with the surface components of SARS-CoV-2 and inactivate infection even in the presence of mutations on the viral spike. In addition, the disruption of the SARS-CoV-2 infection pathway using graphene oxide (GO) was investigated by Masahiro et al., based on the antiviral effect of GO sheets against three strains of SARS-CoV-2. As quantified by RT-PCR, about 50% and 98% of the virus in the supernatant could be removed after incubation with 100 μg/ml GO for 1 and 60 min, respectively [16,19] (a rough kinetic reading of these removal figures is sketched below). Here, the results further confirmed a two-step virus inactivation mechanism: (1) adsorption of the positively charged SARS-CoV-2 spike onto the negatively charged GO surface and (2) inactivation of SARS-CoV-2 on the GO surface through viral proteolysis. This is due to the damage to the S protein by GO; since the interaction of the S protein with human angiotensin-converting enzyme 2 (ACE2) is required for SARS-CoV-2 entry into human cells, GO is a potential candidate for contributing to the inhibition of the global spread of SARS-CoV-2. Moreover, graphene and its derivatives are characterized by high antimicrobial activity, arising from both physical and chemical damage mechanisms. They are ideal materials for coating fabrics such as personal protective equipment, face masks and gloves to control the transmission of SARS-CoV-2 more effectively. Amrit et al. fabricated graphene-based biosensors that can effectively detect the virus with high accuracy and sensitivity, providing rapid quantification [20]. In other words, accurate, highly sensitive and cost-effective diagnostic tools have been developed using graphene to efficiently monitor and control the spread of COVID-19 and other airborne viruses. The interaction of graphene materials with viruses is schematically illustrated in Figure 2. For the interaction of the SARS-CoV-2 spike S1 protein receptor binding domain with heparin, Palmieri developed a therapeutic line by repurposing heparin and antiviral sulfur derivatives of GO. The light absorption property of graphene can also be used to destroy viral particles. Sulfonated magnetic nanoparticles combined with rGO have been successfully used to capture and eliminate herpes simplex virus type 1 (HSV-1) photothermally using near-infrared (NIR) light [8]. The results demonstrate how GO-based capture may be combined with NIR treatments for the lungs. In fact, highly absorbent materials such as graphene and GO absorb within the NIR tissue transparency window, allowing the incident light to penetrate deep into the body; this technique is used in the treatment of lung metastases. Interestingly, graphene has unique properties in that its large surface provides a high contact area for negatively charged sulfate adsorption bonds. Figure 4 shows a schematic model of the viral restriction process by graphene- and graphene-derivative-based coatings in repelling numerous viruses, including COVID-19 [30].
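The removal figures just quoted (roughly 50% of virus removed after 1 min and 98% after 60 min of incubation with 100 μg/ml GO) can be given a rough kinetic reading. The Python sketch below is purely illustrative: it assumes ideal first-order inactivation and uses only the two time points reported in the text; it is not part of the cited study's analysis.

```python
import math

# Reported removal fractions from the text (incubation with 100 ug/ml GO),
# treated here as two independent (time, fraction removed) observations.
observations = [(1.0, 0.50), (60.0, 0.98)]  # (minutes, fraction removed)

for t, removed in observations:
    surviving = 1.0 - removed
    # Apparent first-order rate constant k from N(t)/N0 = exp(-k * t)
    k = -math.log(surviving) / t
    print(f"t = {t:5.1f} min: surviving fraction = {surviving:.2f}, apparent k = {k:.3f} per min")

# The apparent k differs sharply between the two time points (~0.69 vs ~0.065 per min),
# so the process is clearly not a single exponential; this would be consistent with fast
# initial adsorption followed by slower inactivation, but it is only a back-of-envelope reading.
```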
In short, the materials discussed above play a vital role in limiting how long the virus endures on different coated surfaces and in catching and destabilizing the virus structures [37]. One study synthesized graphene-layer-based, super-hydrophobic, low-melting-temperature non-woven surgical masks with superior photo-thermal and self-cleaning properties [38]. In addition, these functional masks are reusable after sunlight sterilization, because the surface temperature can rapidly increase up to 80 °C under sunlight, and their usable lifetime increased significantly as they showed a tendency to reject salt. On the other hand, efficient protection against viruses has been demonstrated by super-hydrophobic surgical masks fabricated via a roll-to-roll method. As illustrated in Figure 5, SEM images of the pristine surgical mask revealed the smooth surface of the melt-blown fibers of about 20 µm, which do not exhibit super-hydrophobic properties [38]. Moreover, the wetting tendency of this mask, measured via the static contact angle (CA), was approximately 110°, confirming surface hydrophobicity. However, droplets adhering to the surface were one of the drawbacks observed with this mask. Interestingly, nanostructured flakes, with sizes varying from 100 nm to a few mm, were observed on the mask surface specifically after employing the dual-mode laser-induced forward transfer technique, which increased the CA up to 141° (Figure 5) [38]. The use of graphene-based face masks is a promising approach to reduce the spread of respiratory diseases such as COVID-19 in affected areas [39][40][41][42][43][44]. As part of a non-pharmaceutical intervention, the face mask is capable of reducing the transmission of respiratory pathogens by creating a physical barrier. Furthermore, better respiration is ensured by graphene, particularly when embedded in the air-filtering membranes of respirators, owing to graphene's two-dimensional honeycomb lattice structure in which the sp²-hybridized carbons are arranged in a hexagonal form [45,46]. In addition, the large surface area of graphene is another key feature that makes it appropriate for interfacial interactions [10,47,48]. In short, the attributes of graphene such as large surface area, hydrophobic nature, high electrical conductivity, and photocatalytic activity have attracted remarkable attention among researchers globally in designing high-quality respirators [49]. Furthermore, because graphene has an extremely hydrophobic nature and a microporous structure, water droplets, particles, pathogens, and aerosols are restricted from entering and thus remain in the outer layer of the respirators for long periods. Interestingly, compared to the virus size (0.05-0.14 µm), the pore size of the graphene membrane (5.7-25.2 Å) is much smaller, allowing it to serve as a selectively permeable membrane that separates out the pernicious SARS-CoV-2 [50,51]. Anti-viral surfaces and coatings In the wake of the COVID-19 outbreak, many countries and individuals across the globe have joined hands to bring back pre-COVID freedoms. Alongside the many attempts to stop or curb the spread of the disease, such as social distancing and face masks, many believe that researching and developing anti-viral surfaces and coatings will help prevent the spread of the disease.
This is mainly due to the way viruses spread and their dependence on the cells of their hosts for survival and replication. For many, anti-viral surfaces can be seen as a very effective method of preventing COVID-19 and many other viruses, which can survive for long durations even outside of hosts. For example, the COVID-19 virus is effectively blocked from reaching an individual by surgical face masks but is able to stay alive for a whole day on the surface of N95 face masks, which are widely used worldwide to combat COVID-19 [52,53]. During this period, the virus can latch on to other surfaces, or touching the mask for disposal can transport the virus. For this reason, anti-viral surfaces and coatings have been researched with mass production in mind. Recently, graphene and its derivatives, such as graphene oxide (GO), reduced graphene oxide (rGO), and graphene-based nanoparticles (GBNP), have come under the spotlight for their unique antimicrobial and antiviral characteristics. Graphene and its derivatives have received particular focus due to their lack of toxicity towards humans and the environment. For example, the nanowalls of GO form sharp edges, which are most effective at the smallest number of nanowall layers; these edges inhibit cell replication by physically disabling the viral function of the virus, and combining graphene oxide nanoparticles (GONP) with other metal nanoparticles further boosts the antiviral activity [54]. Graphene oxide has shown promise in many key areas to fend off viruses such as H9N2, pseudorabies, and enteric EV71 due to its ability to grab onto viruses and disable them [12]. As an explanation of this catching capability, GBNPs have sharp edges at which viruses get caught and are dismantled. Furthermore, they are also capable of shattering the cellular lipid bilayers of viruses, disrupting protein folding, and driving the conversion of α-helix segments into β-sheets, which causes the structure to become inactive fibrillar agglomerates [10]. Another form of graphene that can help in the battle against SARS-CoV-2 is rGO, which is obtained by reducing the oxygen content of GO through chemical reactions and thermal methods. It was demonstrated that rGO, when paired with carbon dots and the polyphenol curcumin, can effectively halt the spread of COVID-19 by adding the method into existing production lines of personal protection equipment (PPE) [55]. Prevention of the spread of the SARS-CoV-2 virus should be a top priority of governments and individuals across the globe, but another prime priority is early detection of the virus in individuals. With early detection, the authorities can rightfully segregate and quarantine any close contacts. This is vital, as close contacts can be contacted and quarantined before carrying the virus to another person. One implementation of rGO in COVID-19 detection showed that three-dimensional (3D) printed gold coated on rGO was capable of capturing and detecting CoV-2 spike S1 antigens, which give the COVID-19 virus its viral properties [56]. When compared to two-dimensional (2D) printed gold-coated rGO, there was a significant improvement in detection levels, by 2.5 times. This is also vital in combating COVID-19, as freshly infected individuals will have far fewer CoV-2 spike S1 antigens in their bodies compared to already infected individuals. Existing detection products may not be able to detect this level of antigen, as they do not work at nanoscopic levels like rGO does.
These graphene derivatives can be used as coatings and surfaces in many places and products, such as food packaging, which passes from the chef or baker to the food delivery person and then to the customer [57]. In this process chain alone, the virus can infect an entire state within a relatively short time when there are no safety precautions; even with precautions, a sliver of the virus may break free of this chain and start a local outbreak. An even more important application is PPE [58]. As humans are hugely affected by the COVID-19 virus, PPE, when used properly, can help mitigate or even block the spread of the virus from an already infected person to a non-infected person [59]. At the end of the virus pathway is always a host, specifically a human. Coating every contactable surface with antiviral material is an extremely difficult task, and maintaining a specific coating thickness everywhere is virtually impossible. However, by targeting regularly used surfaces, such as handrails on staircases or holding rails in trains and buses, a huge percentage of the surfaces on which the virus can survive can be eliminated. This can be done by using GO or rGO coated with copper nanoparticles [60]. As illustrated in Figure 6, this graphene design can destroy the virus through its van der Waals interactions with the virus membrane as well as electrostatic interactions [61]. Table 1 lists the numerous types of graphene and its derivatives for anti-viral surfaces and coatings. Among the reported findings: GO inhibited PRV and PEDV by destroying the virus before its entry into the cell, and even at low concentration the best anti-viral effect was exhibited by GO, GO-PDDA and GO-PVP; polylaminate (multi-layered) GO exhibited a much weaker inhibitory effect than mono-layered GO and rGO, while graphite, which lacks the nanosheet structure, showed no antiviral activity, revealing that the nanosheet structure is essential for the antiviral activity [32]; and an effective concentration of GSCC was found to be non-harmful to the host cells, with the viral titer showing a four-log decrease when GSCC was used [34]. Graphene cytotoxicity The cytotoxicity of a material plays a crucial role when it is utilized in medical sectors, as it indicates the toxicity level of the material towards living cells and reflects its biological reaction. In the context of carbon-based materials, particularly graphene, the cytotoxicity level is low while antiviral activity is retained, allowing the incorporation of graphene into specific antiviral treatments [55]. For instance, Xiao et al. reported inhibition of viral invasion through disruption of the viral envelope, with significantly low toxicity towards human cells [64]. Generally, materials with more than nine carbon atoms (long aliphatic chains) are highly destructive to eukaryotic cells due to their increased toxicity; however, from the above research it was perceived that long-aliphatic graphene-polyglycerol sulfate (containing 11 carbon atoms) is efficient in combating strong viruses such as FCoV and SARS-CoV-2 without exhibiting significant cytotoxicity towards eukaryotic cells [64], with cell viability remaining above 90%, indicating that it is not toxic to eukaryotic cells [65]. According to Angel et al., the concentration of these materials, the oxidation degree, cell type, lateral size, and exposure should also be taken into account, besides the structural differences, during cytotoxicity analysis of graphene-based materials, as the cytotoxicity of the materials differs accordingly [66].
In terms of concentration, a rise in the concentration of the material used in a treatment increases its cytotoxicity. Non-cytotoxic concentrations of graphene are efficient in combating milder viruses such as porcine epidemic diarrhea virus (PEDV) and PRRSV at less than 6 µg/mL and 4 µg/mL, respectively, whereas a higher concentration window is required to combat stronger viruses like SARS-CoV-2 (50 µg/mL) and FCoV (10 µg/mL) [66]. In short, even though a higher graphene concentration is used for SARS-CoV-2 treatment, the toxicity level is still too low to cause a significant drop in cell viability, making it safe for humans. However, it is worth noting that the above factors for graphene and graphene-based materials should be taken into account when preparing drugs or treatments for SARS-CoV-2, because they are closely related to the cytotoxicity of the material and how it reacts with the human body. Sensors With the advent of the coronavirus, smart materials and their integration with sensor platforms have been used as tools within the scope of COVID-19 diagnosis. Graphene-based biosensing systems have been developed thanks to the advantages of the various sensing mechanisms available to them, such as field-effect (FET), optical, and electrochemical transduction. Chauhan et al. were able to modify graphene surfaces using chemical methods, such as aminosilane, carboxyl, and diazonium chemistry, as well as plasma chemistry, enabling the immobilization of the biological recognition element and leading to the identification of different types of pathogens [14]. Graphene provides strong and flexible mechanical, electrical and thermal properties as well as bio-tolerance and other antiviral properties [16,67]. Graphene is also used with nanoparticles to detect viruses. A multilayer (sandwich) immunoassay was developed by Anik et al. for the detection of avian influenza virus H7 (AIV H7), composed of graphene coated with silver nanoparticles, with a limit of detection (LOD) of 1.6 pg/ml [68]. The same group developed an electrochemical influenza A biosensor consisting of a graphene-gold (Au) hybrid electrode. By measuring the activity of neuraminidase (N), the biosensor was tested to examine the analytical properties of influenza A; the results showed a linear range between 10⁻⁸ U mL⁻¹ and 10⁻¹ U mL⁻¹ with a relative standard deviation of 3.23% (for 10⁻⁵ U mL⁻¹ of N, n = 3) and a detection limit of 10⁻⁸ U mL⁻¹ of N. In addition, it was further used to detect real influenza A (H9N2) virus with successful results [68]. Moreover, Akanksha et al. developed a supersensitive graphene biosensor to detect Japanese encephalitis virus (JEV) [69]. The sensors were tested in the range of 1 fM to 1 μM for both JEV and AIV antigens and showed limits of detection (LOD) of 1 fM and 10 fM for JEV and AIV, respectively. The GFET biosensor structure was then modified with a sugar chain by Matsumoto et al. and used to detect both human and avian influenza viruses [70]. Electrochemical biosensors Biosensors are analytical devices consisting of a biological identification system and a physicochemical transducer [73,77]. Biosensors have distinctive properties due to the ability to tailor the specific reaction of compounds by immobilizing biological recognition elements on the sensor; the basic components of biosensors are bio-receptors, a signal transducer, and an amplifier [78]. Point-of-care (POC) biosensors based on carbon and graphene have been developed.
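Figures of merit such as the linear ranges and detection limits quoted above are usually extracted from a calibration curve. The sketch below is a generic, hypothetical illustration: the concentrations, signal values and blank statistics are invented for demonstration and are not taken from the cited biosensor studies. It simply shows how a log-linear calibration and a 3σ-style limit of detection are commonly estimated.

```python
import numpy as np

# Hypothetical calibration data: analyte concentration (U/mL) vs sensor current (uA).
conc = np.array([1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1])
signal = np.array([2.1, 3.0, 4.2, 5.1, 6.0, 7.2, 8.1, 9.0])  # invented readings

# Fit signal = slope * log10(conc) + intercept (a common log-linear calibration model).
slope, intercept = np.polyfit(np.log10(conc), signal, 1)

# Limit of detection via the 3-sigma criterion: the concentration whose predicted signal
# equals the blank mean plus three blank standard deviations (both values assumed here).
blank_mean, blank_sd = 1.9, 0.05
lod_log10 = (blank_mean + 3 * blank_sd - intercept) / slope

print(f"slope = {slope:.3f} uA per decade, intercept = {intercept:.3f} uA")
print(f"estimated LOD ~ 10^{lod_log10:.2f} U/mL")
```

With these invented numbers the fit lands near 10⁻⁸ U/mL, of the same order as the detection limit quoted above, but the point here is the procedure rather than the value.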
Graphene is expected to play a distinctive role against COVID-19: given the low cost of this material, it can be used to detect viruses, and its sensitivity and selectivity can be improved by modifying its hybrid structure, allowing its optical and electrical properties to be tuned [79]. For instance, graphene sensors that can be used to detect SARS-CoV-2 include photoluminescence sensors, colorimetric sensors, and SPR biosensors. In 2020, Siu et al. coated the graphene sheets of a FET with a specific antibody against the spike protein of the virus, contributing to the development of a field-effect transistor (FET)-based biosensor that detects SARS-CoV-2 at a concentration of 1 fg/ml in phosphate-buffered saline medium [80]. It has been used to detect the virus in clinical samples, where it showed a LOD of 2.42 × 10² copies/ml; as a result, this device is considered a promising alternative for immunological diagnosis of diseases [81]. The available approaches to battle the COVID-19 virus via graphene-based sensors are illustrated in Figure 9. Interestingly, different materials are frequently utilized to amplify the sensitivity and signal of electrochemical biosensors. In electrochemical biosensors, any significant electrochemical change at the interface between an electrolyte and electrodes, arising from a conformational change produced by biomolecular recognition between antigen and antibody, is measured. In addition, due to its stable electrochemical and optical attributes, superior mechanical and thermal properties and high electrocatalytic activity, graphene has been widely explored to design highly efficient biosensors [82]. For instance, graphene-based platforms have been employed to immobilize biomolecules in order to create biosensors [85]. It was observed that the developed biosensor exhibited efficacy with a low detection limit of 1.6 pg/mL. In other words, graphene-based electrochemical biosensors have proven to be significantly effective at detecting biomolecules, specifically viruses, suggesting that these types of biosensors have the highest potential to effectively detect SARS-CoV-2. However, a significant amount of high-end research and testing is still required to develop reliable diagnostic devices. Table 2 summarizes graphene and graphene-related electrochemical biosensors for detecting a variety of viruses. Interestingly, besides the fabrication of electrochemical biosensors for the recognition of the SARS-CoV-2 virus, graphene and graphene-based materials are also widely utilized in other electrochemical biosensing applications, such as reliable point-of-care (POC) tests. Moreover, owing to their highly conducting nature, many other graphene- and graphene-derivative-based biosensors are highlighted in Table 3. Graphene-based gene-editing technology Current advancements on the potential approaches to protect against COVID-19 A person who contracts COVID-19 cannot yet be given a standard treatment, as no standard treatments for COVID-19 exist. Thus, it is important to adopt effective prevention approaches to avoid virus infection and transmission [106]. The COVID-19 protection approaches are: (1) PPE kits and masks, (2) disinfectants and sanitizers, (3) air filtration and purification, and (4) vaccination. In the following sections, the current advancements in these potential approaches are discussed in detail.
PPE Kits and Masks Considering the diverse modes of COVID-19 transmission, one of the effective prevention strategies is the application of personal protective equipment (PPE). PPE is usually used by healthcare frontliners [107]. Different types of PPE, such as aprons, gloves, eye protection, surgical face masks, gowns, non-powered filtering facepiece respirators (FFRs), and powered air-purifying respirators, act as physical protection from the infectious particles present in human fluids [108,109]. The objective of applying PPE is to provide complete protection for frontliners (doctors, nurses, and other healthcare workers) from the deadly spread of this virus [108]. The size of the COVID-19 virus is between 70 and 90 nm in diameter. Nevertheless, coughing and sneezing usually produce Flügge droplets that are below 5 μm in size, and the virus can travel up to 4.5 m. The WHO has suggested that when handling any aerosol-generating procedures (AGP) on an identified COVID-19-positive patient, workers should use an American-standard N95 or European-standard FFP2 mask [110,111]. Both masks work well as PPE [6]. Public Health England (PHE), on the other hand, suggested using an FFP3 mask in similar cases [110]. Surgical masks are good at preventing the spray and inhalation of droplets in the 5 μm range, but their ability to filter submicron-sized droplets is restricted [111]. N95/FFP2 masks have a minimum efficiency of 95% for particles with a size of 0.1 to 0.3 μm, and 99.5% or higher efficiency for particles of 750 nm or larger [110]. Table 4 and Table 5 illustrate, respectively, the comparison between two types of masks, the surgical mask and the N95 respirator, and the latest graphene and graphene-based fabrics with efficient properties for smart and PPE cloth fabrication. Among the fabrics summarized in Table 5: one graphene-based fabric offering excellent electrical conductivity for smart fabrics [120]; a PET/rGO/polypyrrole fabric, in which polypyrrole was coated on PET/rGO fabric via in-situ synthesis, the PET was combined with GO using a chemical reduction method together with a dip-and-dry method, and the composite was finally hot pressed into a film, providing favourable surface chemistry and extraordinary soaking ability for shielding clothing [121]; and a wool/rGO/TiO₂ fabric, in which GO/TiO₂ was immersed with wool, with hydrolysis of titanium isopropoxide and chemical conversion assisted by sodium hydrosulfite forming a graphene/TiO₂ nanocomposite that was then dried and condensed by a chemical route, providing excellent antimicrobial activity and self-cleaning ability for smart fabrics and medical care cloths [119]. During the pandemic, new technologies and innovations were demonstrated. Graphene technology has the potential to produce new kinds of personal protection equipment and new medical solutions [122]. Graphene and its derivatives are utilized to create and evolve multifunctional materials to prevent and control coronavirus [107]. However, this review focuses on graphene technology to improve existing COVID-19 protection kits. Recent developments in the manufacture of two-dimensional graphene and graphene oxide (GO) can produce highly useful PPE and masks to protect against SARS-CoV-2 infection [123]. Applying an antiviral graphene coating to PPE can improve its protective qualities [107]. Thus, advanced graphene and GO technology has become promising for highly functional PPE [123,124]. Graphene is commonly employed as a coating material for woven and non-woven PPE due to its fire-resistant, UV-resistant, lightweight, and conductive properties.
These features are particularly useful in developing PPE materials [124]. The ability of GO to interact with microorganisms has also led to the design and development of PPE textiles used to restrict the spread of COVID-19 as well as to inactivate the virus [123]. The oxygen groups in GO enhance surface hydrophilicity and initiate interactions with organic molecules. The application of hydrophobic graphene in PPE against SARS-CoV-2 can be seen in Figure 10. The graphene derivatives can also destroy the SARS-CoV-2 membrane by adsorbing charged lipids [125]. Based on scanning electron microscopy (SEM) visualization of the graphene and GO functionalization on both cotton and polyurethane, the researchers believe these are highly applicable PPE materials. The integration of graphene and GO into masks with cotton, non-woven polyurethane (PU), and polypropylene non-woven materials [124] has resulted in effective virus filtration and an almost complete elimination of SARS-CoV-2 infectivity. The graphene-functionalized masks have also shown antibacterial effects [123]. There is also a development of a potential virucidal, graphene-based composite ink that is utilized in fabrics such as N95 face masks and other PPE. The graphene ink was started by the UK-based Graphene Composites Ltd (GC) to fight COVID-19 by increasing the protection of fabrics used in N95 face masks and PPE; this is also believed to provide safety for frontline workers and hospital staff [7]. Ramaiah et al. presented the implementation of graphene-based materials to create unique masks and germ-trap technologies. For example, Nanene and Polygreen are graphene-integrated face masks that provide useful antiviral and antibacterial characteristics. These four-layer fabric masks (graphene outer protective layer, polypropylene non-woven fabric, melt-blown (MB) fabric outer shell, and cotton filtering layer) could reach 95% filtration efficiency and can provide the wearer with protection against viral particles and airborne bacteria [124]. The development in mask technology also includes 3D printing in mask fabrication. An example is a 3D-printed distinctive film known as the 'Maya sticker', which is attached to a face mask to improve its protective capabilities. Maya stickers possess nanoscale fibers coated with disinfectants, which enhance the neutralization and trapping of nanoscale particles, as can be seen in Figure 11 [111]. There are also studies on applying 3D printing to customize face masks, such as the Copper 3D NanoHack mask [7,126]. In the future, studies and research on graphene could produce graphene-based 3D-printed add-ons for face masks. Functionalized graphene (fG)-coated filters have also been explored: the fG-coated filter's efficiency was evaluated against SARS-CoV-2 viral particles, and it has been shown that viral transmission is completely stopped at the fG-coated layer. Figure 12 shows the four layers of a stacked air filter. Stacking multiple layers improved the blocking of droplet penetration (a simple estimate of how stacking layers compounds filtration efficiency is sketched below). The fG-coated filter in the middle is used for protection against micro-organisms and small particulates by holding them within the mask [46]. Graphene interacts with viruses directly, primarily through hydrogen bonds, electrostatic interactions, and redox reactions. Different types of masks, namely surgical masks, N95 respirators, FFP2 respirators and 3D-printed masks, are illustrated in Figure 13.
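As a rough illustration of why stacking filter layers helps, as referenced above for the fG-coated stack, one common idealization treats each layer as capturing particles independently, so that the penetrations multiply. The per-layer efficiencies below are assumed values chosen only for illustration; they are not measured figures for the masks discussed here.

```python
# Idealized stacked-filter model: if the layers act independently,
# the overall penetration is the product of the per-layer penetrations.
layer_efficiencies = [0.60, 0.60, 0.95, 0.60]  # assumed capture efficiency of each layer

penetration = 1.0
for eff in layer_efficiencies:
    penetration *= (1.0 - eff)

overall_efficiency = 1.0 - penetration
print(f"overall penetration ~ {penetration:.4%}")                    # ~0.32% with these assumptions
print(f"overall filtration efficiency ~ {overall_efficiency:.2%}")   # ~99.68%

# Three individually modest layers plus one good layer already push the stack past 99%;
# real masks deviate from the independence assumption, so treat this as indicative only.
```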
Figure 14 depicts the filtering mechanisms of graphene-based face masks in terms of electrostatic influence, electrothermal effect, nano-porous membranes and photothermal activity [127]. Lastly, Table 6 lists commercially available graphene and graphene-based facemasks along with their superior properties. Figure 12: PP cloth stacking order for fG-coated mask fabrication [128]. Sanitizers and Disinfectants Disinfectants are widely used to sterilize areas and surfaces. The tendency of a disinfectant to kill microbes is determined by the chemical's mode of action, the pathogen's surface molecular structure and its intracellular vulnerability [129]. The SARS-CoV-2 virus can live on different types of surfaces for periods ranging from a few hours to a few days, and contaminated surfaces have been found to be the virus's second most common mode of transmission. Disinfectants and sanitizers are therefore important protective alternatives against coronavirus [130]. The U.S. Environmental Protection Agency (EPA) has updated the list of disinfectants used for COVID-19 prevention. The hypochlorite-based products include sodium hypochlorite liquid and calcium hypochlorite solid or powder formulations. Dissolving these formulations in water forms a dilute aqueous chlorine solution containing undissociated hypochlorous acid (HOCl), the active antimicrobial compound. At various concentrations, hypochlorite shows broad antimicrobial activity and is efficient against a variety of pathogens. A conservative hypochlorite concentration of 0.1% (1000 ppm) is suggested in the COVID-19 context to stop the virus and many other pathogens in the healthcare setting. COVID-19 disinfectants are more effective, and their use is more encouraged, when applied with fabric or wipes that have been immersed in disinfectant, compared to spraying or fogging; spraying disinfectants can even be harmful to human health [131]. Common surface disinfectants only give a temporary result and are not advised for long-term use because of their irritation and toxicity and because they evaporate quickly [107,131]. However, in the current pandemic situation, developing effective anti-SARS-CoV-2 coatings and surface disinfectants plays an important role in controlling the viral spread of COVID-19 [54,131]. For example, disinfectant sprays based on virucidal chemicals have unfavorable effects and toxic properties. Sodium hypochlorite and hypochlorous acid, on the other hand, leave residue and are corrosive to certain metallic surfaces. The SARS-CoV-2 virus has a structure that is rich in -COOH functionalities and a short lifetime on copper surfaces. Copper surfaces are reported to have anti-pathogenic performance [54,132]. No living COVID-19 could be observed on copper alloy surfaces after four hours of exposure [63]. Thus, GO/rGO-SO₃ coatings refined with copper nanoparticles or copper ions are promising for developing anti-SARS-CoV-2 surfaces that inactivate SARS-CoV-2 viruses [37,129]. Other GO/rGO-SO₃-based nanohybrids doped with Ti, Ag, and Au can also be suggested for the fabrication of antiviral coatings. Such materials ideally capture and disrupt the viral particles and reduce their chance of survival on the antiviral coatings [132]. Recently, nano-graphene oxide (nanoGO) is one of the graphene derivatives that has attracted great interest in this field. Chung et al.
have studied the antiviral properties of GO towards coronaviruses (Alphacoronavirus and Betacoronavirus) in the presence of a high organic load (5% FBS). This study demonstrated nanoGO's antiviral activity in a setting that partially mimics biological fluids [133]. There is also a study on disinfecting masks for reuse to overcome the global shortage of PPE masks. Exposure to hydrogen peroxide vapor was used as a technique to disinfect N95 masks and powered air-purifying respirator (PAPR) masks. The study found that the N95 respirators retained their filtering effectiveness even after 50 disinfection sessions [111]. A similar approach from the smart-tech hygiene company Cleanbox Technology, with its product Clean Defense, was used for the thorough decontamination of N95, cloth, and other layered masks. The patented technology utilizes UVC light to decontaminate the masks. Clean Defense has been studied on SARS-CoV-2, removing 99.999% of the COVID-19 virus on all three layers of the mask within seconds [111,134]. Air filtration and purification Air filters have the ability to remove air pollutants and enhance indoor air quality. In filters, fiber, superfine glass fiber, film composites, and electret materials are normally utilized as high-efficiency filter materials, while glass fiber and activated carbon fiber are utilized as the primary and mid-efficiency filter materials [135]. Air purifiers, on the other hand, normally contain multiple filters and a fan. The fan draws in ambient air, sends it through the filters, and then circulates the clean air back into the room [136]. Besides removing air pollutants, a suitable air purifier can also be useful in preventing COVID-19 virus transmission. For a virus that can be transmitted through airborne droplets and aerosols [137], the Centers for Disease Control and Prevention (CDC) has suggested good room ventilation supplemented by air purifiers. This can reduce the quantity of infectious particles in the air and provide good protection from indoor SARS-CoV-2 transmission between persons. One type of air purifier that can help prevent COVID-19 transmission applies a high-efficiency particulate air (HEPA) filter/purifier [137,138]. Common HEPA filters are constructed by folding microfiber glass or other fibrous media made from several layers of randomly oriented fibers with diameters from 2 nm to 500 nm [47,139]. HEPA media are normally under 0.508 mm thick and made up of a few hundred fiber layers to restrict the penetration of particles [140]. They are utilized to filter airborne particulates, pollen, dust, mold and bacteria [141]. Van der Waals forces, electrostatic attraction, and capillary action can all contribute to adhesion on the filter fibers. Figure 15 illustrates HEPA capture and filtration of airborne particles (from 0.01 micron to 10 microns) [140]. The capture processes depend on particle size: large particles cannot penetrate the openings between fibers and are trapped directly, while smaller particles are captured by one of three mechanisms: impaction, interception and diffusion [141]. A high-quality HEPA-based air purifier could remove 99.97% of contaminants from the air, including the coronavirus-spreading respiratory droplets (particles both larger and smaller than 0.15 microns); a simple room-level illustration of what such a single-pass efficiency implies is sketched below.
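To put a single-pass filter efficiency such as the 99.97% figure above into a room-level perspective, a simple well-mixed room model is often used: a recirculating purifier delivers clean air at a rate equal to its airflow times its single-pass efficiency, and the aerosol concentration then decays roughly exponentially with the resulting equivalent air changes per hour. The airflow and room volume below are assumptions made only for illustration, not claims about any specific device.

```python
import math

# Assumed example values (not taken from the cited studies).
airflow_m3_per_h = 300.0          # purifier airflow
single_pass_efficiency = 0.9997   # HEPA-class single-pass capture efficiency
room_volume_m3 = 50.0             # room volume

# Equivalent air changes per hour contributed by the purifier
# (well-mixed room, no other sources or sinks: a deliberately simplified model).
eq_ach = airflow_m3_per_h * single_pass_efficiency / room_volume_m3

for t_h in (0.25, 0.5, 1.0):
    remaining = math.exp(-eq_ach * t_h)
    print(f"after {t_h:.2f} h: ~{remaining:.1%} of the initial aerosol concentration remains")
```

Under these assumptions the purifier contributes about six equivalent air changes per hour, so roughly 95% of the initial aerosol is cleared within half an hour; the practical message is that the room-level benefit is dominated by airflow and mixing rather than by pushing the filter efficiency from 99% to 99.97%.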
The particle size of SARS-CoV-2, reported to be 0.06 to 0.14 microns, falls within the capture range of HEPA filters. Even though this has not yet been fully tested, the CDC has suggested using HEPA filters in powered air-purifying respirators for highly efficient filtration of SARS-CoV-2, based on SARS-CoV-1 studies [139]. HEPA filters are well known for their application in medical and industrial buildings. They are utilized to filter airborne particulates and trap airborne pathogens, and they are believed to be capable of filtering SARS-CoV-2, whose particle size falls within the HEPA capture range of 0.01 micron and above. A high-quality HEPA-based air purifier could remove 99% of contaminants from the air, including the coronavirus-spreading respiratory droplets. Some purifiers can also trap the virus by using a high voltage to produce and release negatively charged ions into the air; the negative ions stick to the virus and inactivate it [136]. Interestingly, carbon-based materials such as GO have been employed broadly in the fabrication of air filters due to their extraordinary features, namely unique construction, superior antimicrobial performance, and excellent mechanical and chemical stability. Table 7 summarizes the roles of GO in the fabrication of effective air filters [52][53][54][56][57]. For example, bactericidal properties, adsorption capacity and mechanical stability were observed to be remarkably enhanced by the presence of silver nanoparticles and GO fillers [144], and an ultra-thin GO/polydopamine hybrid on the surface of polypropylene yielded a filter with significantly improved protection and antimicrobial performance against airborne pathogens [49]. Challenges and future perspectives Globally, the outbreak of the current pandemic crisis has urged scientists to think outside the box in order to develop novel strategies and to collaborate strongly across laboratories with different backgrounds, mainly to battle the spread of COVID-19. Furthermore, since unexpected issues require unexpected solutions, the current best weapon to combat COVID-19 is the development of research and technology. Therefore, in this review article, we propose that graphene-based nanomaterials are promising candidates, since they offer innovative solutions ranging from antiviral activities to biomedical research thanks to their fascinating physical and chemical properties. In other words, graphene-based tools should be employed for decontamination, diagnosis, detection and prevention of this disease. As discussed above, numerous graphene-based products have been developed to fight this virus. For instance, graphene can be formulated as a decontaminating agent, either in the form of a lotion or a gel, to kill these viruses. Alternatively, graphene-coated wipes can potentially be used to disinfect object surfaces, and the mist sprays generally utilized to sanitize the human body can also be used to clean surfaces. Interestingly, the conjugation between graphene-based nanodrugs and antivirals could be a successful and effective formulation. On the other hand, in order to restrict aerosol transmission in various infected areas, including hospitals, graphene-coated PPE is essentially required.
Also, to efficiently filter out SARS-CoV-2 and various other respiratory germs present in the air, it is vital to prepare air-conditioning and air-purification machines that have built-in multi-layered graphene-based filters with modified positive-charge layers. Furthermore, by tuning their interlayer spacing and microstructural properties, it is possible to separate the COVID-19 virus from water using graphene-based filter membranes. Moreover, graphene-based photocatalysts can also be fabricated to directly degrade and inactivate the COVID-19 virus in water. In this regard, several investigations related to the photocatalytic inactivation of microorganisms have been reported; for instance, it has been revealed that photocatalysts are suitable candidates as good disinfectants against pathogens, not only destroying them but also serving as sterilization agents [7]. In addition, the significant results exhibited by graphene-based nanomaterials as sensors to detect COVID-19 point towards a promising future. However, it is essential to exploit approaches that combine graphene with other two-dimensional materials, which will offer a wider choice of physical and chemical properties for designing new sensors. Previously, various proposed strategies involved the use of noncovalent chemistry to immobilize large biomolecules on the flat surface of two-dimensional materials. At the same time, before introducing these sensors to the market, the reproducibility of large-scale production and the stability of the devices over time are some of the key aspects that need to be evaluated. Moreover, it is vital to ensure the versatility of the sensing platforms so that they can adapt to possible mutations of these viruses. On the other hand, in drug delivery systems, it is important to consider the long-term perspective of the applications of graphene-based nanomaterials and other common hard nanomaterials. Therefore, there is a need for a deeper understanding of their antiviral efficacy and multifunctional strategies. Furthermore, there are many open questions related to the biocompatibility and long-term toxicity of two-dimensional materials; for instance, despite the various studies reported previously, the toxicological risks of graphene-related materials remain unclear [10]. To date, several types of hard nanomaterials have been clinically approved to treat humans, and we hope that in the future some graphene-related nanomaterials will also be used for human treatment [10]. Even though graphene-based nanomaterials have shown excellent results, these materials have only been marginally explored in viral infections, and their full capability has not yet been unleashed in terms of their potential to inhibit viral replication, block cell uptake and alert the immune system. In short, much more work is required to address these specific requirements concerning several exciting and challenging applications in antiviral, drug delivery and biomedical sciences.
Currently, despite the fact that graphene-based nanomaterials are promising candidates to combat COVID-19, in our opinion it is equally vital to explore other types of two-dimensional materials, namely covalent organic frameworks, metal-organic frameworks, transition metal dichalcogenides, metal carbides (MXenes), etc. In other words, due to their fascinating properties such as large surface area, mechanical robustness, high affinity for guest materials, good conductivity, layered structure and flexibility, the versatile family of two-dimensional materials could perhaps be an ideal platform against the SARS-CoV-2 virus. Last but not least, in the process of battling the current COVID-19 pandemic and future pandemics, we can foresee a bright future for graphene-based nanomaterials, provided that more significant efforts are devoted to theoretically and experimentally modulating such unique materials via controlled functionalization and to exploring the most suitable combinations of graphene with other two-dimensional materials to obtain ideal properties. Despite their advantages, nanotechnology-based systems also face various obstacles before they can be safely introduced to the market. Therefore, before being widely adopted in the healthcare system, it is vital to address some bottlenecks associated with nanotechnology applications. For instance, it is essential to ensure the safety of the nanomaterial via in-vitro studies of its biocompatibility, whereby, due to the formation of a protein corona, the fate of the nanomaterials can change in the body, specifically when they travel through the bloodstream [145]. Hence, in order to obtain a detailed understanding of nanoparticle toxicity in the body, in-vivo studies need to be executed very carefully [146]. Because of these limitations, generic protocols have been introduced and implemented at an early stage of research for categorization and development, which will minimize the chances of failure associated with the clinical translation of nanotechnology-based therapy [147]. On the other hand, it is essential that regulatory agencies, toxicologists, and scientific experts in materials science and pharmacology collaborate closely in order to overcome other related limitations. Furthermore, as far as the toxicity of nanoparticles is concerned, it is related to their distribution in the lymph and bloodstream and their ability to penetrate almost all tissues, organs and cells, as well as to interact with different macromolecules. The functioning and structure of organs can be altered by the toxicity of the nanoparticles. Also, it is important to understand that the body's defence system does not recognize certain types of nanoparticles, which results in the accumulation of nanoparticles in tissues and organs and then leads to high lethality or toxicity. In other words, rather than using the currently available traditional nanoparticles, the appropriate solution is to design nanoparticles with lower toxicity. Therefore, there is a need to develop more advanced approaches and research to investigate the toxicity of nanoparticles and also to analyse the different pathways and mechanisms of toxicity at the molecular level [148]. In support of this, Campos et al. investigated the design of nanoparticles that had very small or no negative effects.
Here, it was revealed that it is practically impossible to investigate such nanoparticles unless all quantitative and qualitative physical and chemical properties of the nanoparticles are systematically taken into consideration and a relevant experimental model to estimate their influence on biological systems is available [149]. On the other hand, even though significant research has been done on nanotechnology-based tools to combat COVID-19, various challenges remain to be addressed. These include combination therapy using nanoparticles as a delivery system, nanomaterial-based disinfectant agents to kill pathogens, the potential use of nanomaterials to avoid the conventional limitations associated with antiviral drugs, the development of early, rapid, exceedingly sensitive, portable and affordable diagnostic kits, and the development of nanoparticle-based vaccines to battle SARS-CoV-2 and other pathogens. In addition, as mentioned above, cell toxicity, immunotoxicity, genotoxicity, fibrosis, oxidative stress and inflammation are some of the drawbacks associated with nanoparticles that need to be solved before they are implemented. Hence, in our opinion, through the utilization of nanotechnology-based strategies, many advances will soon be accomplished in the diagnosis, therapy and treatment of the COVID-19 virus. For instance, nanotechnology-based tools can be used to treat this virus and emerging pathogens, whereby, through nanotechnology-based therapeutic antibodies and vaccines (e.g., mRNA or protein-based vaccines), active drugs can be specifically delivered to targeted host organs, also enabling rapid detection of the viruses present in a human body. Last but not least, the ultimate challenge for the near future that requires a solution is to identify approaches to transfer nanomaterial technology to actual clinical applications, as well as the feasibility of production on a large scale. Demand aspects of graphene-based products Globally, the outbreak of the SARS-CoV-2 virus has opened a vast gateway for research, leading to the invention of new materials and techniques, along with advancements in technology, to battle the pandemic. Interestingly, the mother of all carbon allotropes, graphene, and its derivatives have emerged as wonder materials to battle the virus. Due to their unique features, such as excellent properties, small size and high antimicrobial efficacy, graphene and its derivatives have gained much of the spotlight for exploration in the detection, prevention, and treatment of the SARS-CoV-2 virus [155,150]. Furthermore, the commercialization of graphene advancements in the fields of semiconductors, electronics, aeronautics, etc., has been widely reported in various studies, whereby more than 50% of the advancements involve graphene and graphene-based materials. In addition, the outbreak of the SARS-CoV-2 virus has drastically increased the demand for graphene in the healthcare sector; it has been estimated that the graphene market will experience a 39% annual growth rate and reach a market size of USD 2864 million by 2027 [150] (a simple compound-growth illustration of this projection is sketched below).
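The market projection just quoted (about 39% annual growth reaching roughly USD 2864 million by 2027) can be unpacked with simple compound-growth arithmetic. The base year chosen below is an assumption made only to illustrate the calculation; the source does not state it.

```python
# Compound annual growth: value_end = value_start * (1 + rate) ** years
cagr = 0.39
value_2027_musd = 2864.0
assumed_base_year = 2020  # assumption for illustration; not stated in the text
years = 2027 - assumed_base_year

implied_start = value_2027_musd / (1 + cagr) ** years
print(f"implied {assumed_base_year} market size ~ USD {implied_start:.0f} million")

# Year-by-year trajectory under the same assumptions.
value = implied_start
for year in range(assumed_base_year, 2028):
    print(year, f"~ USD {value:.0f} million")
    value *= 1 + cagr
```

With a 2020 base year, the 2027 figure implies a market of roughly USD 290 million at the start of the period; shifting the assumed base year changes that number substantially, which is exactly why such projections should be read with care.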
In short, by using graphene-based materials, a variety of facemasks, biosensors, prophylactics, SARS-CoV-2 diagnostic kits, etc., have been developed by many top-tier companies globally; however, a significant amount of research is still required for mature market handling. On the other hand, besides the urgent demand for vaccines to battle COVID-19, the market monitoring of graphene-based materials should proceed carefully. For instance, before release into the market, limitations such as the instability or quick aggregation of graphene need to be properly addressed. Since the graphene market is expected to grow due to the COVID-19 outbreak, it is mandatory that companies frequently check and consider the risks and opportunities of graphene in the market [150]. Moreover, it is important to understand that the financial system can be affected by the impact of SARS-CoV-2 on prices, commodities, supply chains, etc.; hence, various alternatives and backup strategies should be planned and prepared by corporations to address future issues. In addition, in the field of electronics, a loss of market was observed as COVID-19 affected the transportation of and demand for graphene [151]. However, globally, sales and demand will eventually increase rapidly with the increase in the productivity of graphene-based materials, resulting in a significant increase in the overall import and export of graphene and its derivatives. Finally, in the pandemic era, the increase in graphene and graphene-based material stocks will benefit a country's economy. Conclusion and Outlooks Worldwide, the COVID-19 pandemic has resulted in an unprecedented loss of human lives and economic damage. Therefore, there is an urgent need for collaboration among the medical industry, engineers, scientists, and physicians to battle the global threat of the SARS-CoV-2 virus and to build a strong response to tackle and overcome such phenomena in the future. Moreover, global commitments from all countries, interdisciplinary collaborations, government organizations, the WHO, etc., are essential to provide support, raise sufficient funds and strengthen research and development in science and technology. Interestingly, for future advancements, graphene-based technology has proven to be a promising tool to diagnose, prevent and treat numerous emerging and re-emerging diseases. Sensors that can be used to detect SARS-CoV-2 include photoluminescence, colorimetric and SPR biosensors. Moreover, the WHO has continuously highlighted the importance of PPE supplies for frontline health workers, strongly recommending the utilization of graphene-coated facemasks to reduce the risk of transmission. GO has been employed broadly in the fabrication of air filters due to its extraordinary features, namely its unique construction, superior antimicrobial performance, and excellent mechanical and chemical stability. In short, this review article summarizes the current state of knowledge of graphene and its related materials, and the importance of the associated research and technology development to battle the COVID-19 pandemic. Acknowledgements The authors would like to thank the University of Malaya for providing the research facilities and SATU Joint Grants ST022-2021 and ST031-2021.
2022-08-28T13:21:25.004Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "d407cf71f1b0d0004d9eff4dfff3603f8184391c", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.cartre.2022.100208", "oa_status": "GOLD", "pdf_src": "ElsevierCorona", "pdf_hash": "d407cf71f1b0d0004d9eff4dfff3603f8184391c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
247152276
pes2o/s2orc
v3-fos-license
Characteristics of COVID-19 and Research Progresses on Genetic Engineering Vaccine Based on Big Data Big data platforms can effectively analyze data and maximize its value by mining text, numerical, video, and image data across various industries. The combination of big data with various industries has brought great changes to their development, and providing data on demand saves time and promotes industry growth. SARS-CoV-2 (COVID-19) is sweeping across the world and has spread to many countries and regions, with human infections reported all around the world. Due to the unique characteristics of COVID-19, no specific medicine was available to cure patients before the successful research and development of vaccines. Hence, it is of great significance to research and develop vaccines. Guided by the biological characteristics of COVID-19 and the philosophy of synthetic biology, this study reviews the genetic engineering vaccines developed to date. Background In November 2019, patients with novel coronavirus pneumonia (COVID-19) were identified in Wuhan, and subsequently, patients with COVID-19 were reported in many provinces in China and abroad. In March 2020, the World Health Organization (WHO) declared coronavirus disease 2019 a global pandemic, and many countries adopted strict lockdown measures. As of June 2020, the number of patients with COVID-19 was close to 8 million. According to clinical data, COVID-19 is initially asymptomatic, and patients with positive nucleic acid tests often have no early fever. Some patients have early temperature fluctuations between 36.5°C and 38°C, but the body temperature remains lower than 39°C [1]. Studies have shown that even patients without overt symptoms have an extremely high capacity for virus transmission. Only a small number of mildly ill patients show extensive infection shadows on lung CT after hospitalization, and most patients have good lung findings. However, RNA of COVID-19 was found in the laryngeal tissue of both severely and mildly ill patients. COVID-19 belongs to the RNA viruses and contains four main structural proteins, namely the spike (S glycoprotein), which forms a polymer; the M glycoprotein, which wraps the RNA and internal proteins; and the phosphorylated N and E proteins [2]. The outbreak virus belongs to the beta-coronaviruses, and β-coronaviruses with similar genome fragments have been identified in bats through evolutionary trees. Therefore, it is presumed that the main transmission route of COVID-19 is associated with bats. Vaccines are known to be the simplest and most direct means of stopping epidemics. However, because COVID-19 mutates so strongly, it is difficult to develop a vaccine. Currently, vaccine development is mainly based on the structural features of the viral S protein. The S protein mediates the binding of the virus to the host cell and is therefore the key to vaccine development. Currently, many bioinformatics methods are used for vaccine structure design. Since the infection characteristics of COVID-19 are similar to those of the historical SARS and MERS outbreaks, this study provides a comparative analysis of these three viruses with the aim of gaining experience for the development of specific vaccines. Relations of COVID-19 with SARS and MERS It is essential to study the relationship of new coronaviruses with other coronaviruses in human history.
By analyzing previous viruses and their relationship to COVID-19, vaccine development can be made less difficult and shorter. Severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) are two coronavirus outbreaks associated with COVID-19. SARS was an outbreak 17 years earlier and has a high structural similarity to COVID-19, so the two are closely related. Middle East respiratory syndrome was the most recent coronavirus outbreak prior to COVID-19. Based on this understanding, it is tentatively suggested that COVID-19 may have some connection with MERS and SARS. Analyzing the similarities between the new coronavirus and MERS and SARS will help researchers further determine the direction of new coronavirus vaccine development. The structural proteins of viruses are the key to determining the type of virus. Therefore, determining the relationship between these viruses must start from the similarities and differences of the related structural proteins. First, the genes encoding the structural proteins of the viruses need to be analyzed, and the degree of gene sequence similarity can generally help researchers determine their relationship to COVID-19. Although MERS occurred closest in time to the outbreak of COVID-19, the structural protein gene sequences of COVID-19 were more than 90% similar to those of SARS (a minimal illustration of how such percent-identity figures are computed is sketched below). Based on bioinformatics analysis, COVID-19 is closer to SARS in the evolutionary tree. Therefore, studying the available SARS data can yield more background data related to COVID-19 [3]. Since the S and N proteins of viruses are usually conserved, vaccines prepared based on the structure of the S and N proteins have a long validity period. ACE2 Is the Cellular Receptor of COVID-19 By analyzing the S proteins of SARS and COVID-19, scientists found 76% similarity between the S proteins of the two viruses, implying that similar proteins on cells may be used as receptors when SARS and COVID-19 invade them. ACE2 is an important entry channel for SARS viruses, and SARS-CoV S-MLV and SARS-CoV-2 S-MLV have the same ability to enter cells. In these experiments, ACE2 was found to be a receptor used by multiple coronaviruses to enter cells. Based on further sequence analysis of the SARS glycoprotein, ACE2 is a receptor for COVID-19 that binds the virus and mediates its entry into cells [4]. Subsequently, the cellular infection behavior of COVID-19 was analyzed, again using cellular infection by SARS for comparison. The researchers analyzed the similarities and differences between SARS and COVID-19 based on the way they bind to ACE2. According to previous studies, the glycoprotein of SARS has 14 key binding sites to ACE2. The analysis of COVID-19 binding to ACE2 revealed that 8 of the 14 sites are extremely conserved and the remaining 6 are semi-conserved. This study explains the similarity of SARS and COVID-19 [4]. Based on the abovementioned analysis, scientists used cryo-electron microscopy to observe the specific binding sites of the S proteins to ACE2. By observing the trimers formed by the S proteins (S1 subunit and S2 subunit) with ACE2, it was found that the glycoprotein S2 subunit binds to the ACE2 protein in a very similar form in both viruses. Epitopes on T Cells and B Cells Under all conditions, the targets recognized by T cells are peptide fragments derived from exogenous proteins. These peptide fragments are captured by specific molecules of the host cell and presented to the cell surface.
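Returning briefly to the sequence comparison above: percent-identity figures such as the more than 90% similarity quoted for the structural protein genes are, at their simplest, the fraction of matching positions in an alignment of two sequences. The short sketch below illustrates the idea on tiny made-up fragments; they are placeholders, not real SARS-CoV or SARS-CoV-2 sequences, and the sequences are assumed to have already been aligned to equal length.

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percentage of identical positions in two pre-aligned, equal-length sequences.
    Gap characters ('-') are counted as mismatches in this simple version."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(1 for a, b in zip(aligned_a, aligned_b) if a == b and a != "-")
    return 100.0 * matches / len(aligned_a)

# Toy aligned fragments (placeholders, not real viral sequences).
seq1 = "ATGGTTAACCTGCGT"
seq2 = "ATGGTCAACCTACGT"
print(f"identity = {percent_identity(seq1, seq2):.1f}%")  # ~86.7% for this toy pair
```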
Epitopes on T Cells and B Cells Under all conditions, the targets recognized by T cells are peptide fragments derived from exogenous proteins. These peptide fragments are captured by specific molecules of the host cell and presented on the cell surface. The molecules that present antigenic peptides to T cells are cell-membrane glycoproteins encoded by a complex set of major histocompatibility complex (MHC) genes. Researchers can design experiments to study the response of T cells and B cells to a given antigen. It has been reported that 27 of 115 epitopes on the surface of T cells are involved in COVID-19 responses, all of which target the S and N proteins of COVID-19 [5]. However, very few of them correspond to counterpart genes in the novel coronavirus. Similar to T cells, B cells have some epitopes that produce binding responses to the structural proteins of COVID-19. Although only some individuals possess MHC genes capable of producing responses to COVID-19 structural proteins, generating epitopes for binding responses is essential for vaccine studies [4]. Immune cells can respond to the structural proteins of COVID-19, thus helping researchers to discover the binding sites of these proteins. Different epitopes on immune cells can generate coupling reactions with different structural proteins of COVID-19, which helps researchers to identify the binding sites of the COVID-19 structural proteins [6]. By integrating these coupling reactions, the true key proteins can be screened from the many binding sites of the structural proteins, thus selecting the most critical protein combinations that recognize COVID-19. By analyzing the sequence and spatial structure of these key proteins and their conjugates, a COVID-19 vaccine can be further developed. An Effective Vaccine Is Needed to Resist COVID-19 There are no specific anti-coronavirus drugs available, and all patients who are cured have the virus cleared from their bodies through medical care and their own immune responses. Therefore, there is a need to develop a vaccine to stop the further spread of the new coronavirus. Vaccinating the public develops general immunity to the new coronavirus and weakens the ability of COVID-19 to infect and spread. Currently, many countries around the world are giving sufficient attention to vaccine development. Many companies in the medical industry worldwide are investing considerable manpower and capital in the development of new coronavirus vaccines. According to incomplete statistics, pharmaceutical companies are working on five main technological lines, including the development of inactivated vaccines, recombinant protein vaccines, nucleic acid vaccines, adenovirus vector vaccines, and attenuated influenza virus vaccines. The research and development of laboratory vaccines is based on the infectious behavior of the virus. That is, virus samples need to be obtained before a vaccine can be studied, but in practice, the virus is generally not collected directly from patients. Collecting virus directly from patients is a very complex operation, and the limitations of the direct sampling method are even more pronounced when the laboratory is far from the infected area; for example, it may lead to virus leakage. Therefore, the traditional method of studying viruses in the laboratory is to query the genetic sequence of the virus in international databases (e.g., NCBI) and then synthesize a clone of the currently prevalent virus artificially in the laboratory. Finally, these clones are used as the basis for research.
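As noted above, the starting point is usually a published genome sequence pulled from a public database rather than a clinical isolate. A minimal sketch of retrieving a nucleotide record in FASTA format from NCBI's E-utilities is given below; the accession number is only an illustrative example, and in practice one would follow NCBI's usage guidelines (API key, rate limits), which are not handled here.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# NCBI E-utilities efetch endpoint (public REST interface).
EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_fasta(accession: str) -> str:
    """Download a nucleotide record in FASTA format from GenBank."""
    query = urlencode({
        "db": "nucleotide",
        "id": accession,
        "rettype": "fasta",
        "retmode": "text",
    })
    with urlopen(f"{EFETCH}?{query}") as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    # Illustrative accession (SARS-CoV-2 reference genome, Wuhan-Hu-1).
    record = fetch_fasta("NC_045512.2")
    print(record.splitlines()[0])   # FASTA header line
```

The downloaded sequence would then feed into alignment, cloning design, or reverse-genetics work as described in the surrounding text.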
In the early stages of a viral epidemic, research teams are often organized in the country of origin to investigate the infected individuals. Several suspected pathogens are extracted from the patients. Later, each of these suspected pathogens is analyzed according to Koch's postulates, and the pathogen of the outbreak is finally identified. The viral pathogen then undergoes structural analysis and genetic sequence analysis to determine the virus type, and the genetic sequence of the virus is published to the world. Afterwards, research departments around the world use the published gene sequences as the basis for studying the virus by various means. When a virus is synthesized in the laboratory, a reverse genetics approach must be used. In other words, the specific functions of the different genes must be understood in the context of the known viral gene sequence. Synthetic viruses are assembled in engineered bacteria. Because the virus is synthesized in engineered bacteria, the cDNA of the virus must first be obtained. The cDNA of the virus is inserted into a recombinant plasmid, which is then introduced into the engineered bacterium. Finally, the virus is assembled in the engineered bacterium [7]. In the plasmid expression system, the cDNA of the virus is inserted into the gap between the RNA polymerase I promoter and terminator. At the same time, the entire RNA polymerase I transcription unit is flanked by the promoter and terminator of RNA polymerase II. This structure is known as the Pol I–Pol II structure. In this structure, RNA polymerase I and RNA polymerase II use two different DNA strands as templates, so transcription from the two template strands produces two complementary RNA strands. Thus, this structure ensures that both the antisense RNA strand and the sense RNA strand are obtained from a single cDNA. Meanwhile, after the antisense and sense RNA strands are expressed in the host cell, the sense RNA serves as mRNA and is translated into viral proteins. As in the infection of cells by ordinary viruses, virus assembly and cell lysis then proceed sequentially [8]. Some viruses have segmented genomes. In engineered bacteria, the different gene fragments of these viruses must all be introduced into plasmids to express the complete viral genome. It has been shown that viral fragments can be assembled automatically after expression in engineered bacteria. In addition, each newly generated viral particle contains all fragment genes of the virus, without duplication. The virus is fully expressed in the engineered bacteria and can be detected in the cell culture after cell lysis. After lysis, the highest virus concentration is in the supernatant, so a large number of viruses can be collected. Viral Vector Vaccine. Viral vector vaccines are designed to amplify the COVID-19 antigen in the human body by inserting the viral antigen gene into a viral vector, thereby triggering an immune response to the COVID-19 antigen in the body. The virus usually used as a vector is adenovirus, because adenovirus stimulates the body to produce strong humoral and cellular immunity. In addition, adenoviruses are highly capable of infecting the respiratory and intestinal tracts, causing rapid dispersion of infected cells and producing an even stronger immune response in the body. Adenoviruses therefore offer several advantages and have become a desirable solution for viral vector vaccines. As a viral vector, adenovirus is a genetic mutant of the original adenovirus.
Adenovirus can trigger a strong immune response in the body even after deletion of the E1 or E3 genes; therefore, adenoviruses must be purified before they can be safely used. Such a virus is known as a first-generation vector. When E2 or E4 is also deleted, the immune response to adenovirus infection is reduced and fewer viral genes are packaged, but the safety is increased; this virus is known as a second-generation vector. When all or most of the adenovirus genes are deleted, the virus is called a third-generation vector, and the immune response to it is very low. Considering the efficiency of the immune response, the first generation is often used as a vaccine vector, which accelerates the onset of the immune response. In summary, the S protein of COVID-19 is strongly antigenic; therefore, expression of the S protein of COVID-19 in vivo elicits an effective epitope response. The cDNA expressing the S protein was integrated into the genome of a first-generation adenovirus. Isolated human respiratory cells were then infected with the adenovirus, and the products of the infected cells were examined by western blot to determine whether the S protein was produced. The next experiment was started after this test. Recombinant Protein Vaccine. According to the above analysis, the S protein is essential for the human body to produce an immune response to COVID-19. For recombinant protein vaccines, the S protein is synthesized in large amounts and made into a vaccine that is injected into the human body. A large amount of S protein then appears in the human body and is recognized as an antigen by the immune system, leading the organism to produce antibodies against the S protein. These antibodies can also bind to COVID-19 and trigger an immune response. The production of the S protein depends mainly on engineered bacteria. The mRNA required for the translation of the S protein can be obtained by sequencing and structural characterization of the S protein. cDNA encoding the S protein is obtained from this mRNA and inserted into a plasmid to obtain a recombinant plasmid expressing the S protein. The recombinant plasmid was introduced into engineered bacteria such as Escherichia coli and Bacillus subtilis. The S protein was detected by western blot, and the engineered bacteria that successfully expressed the S protein were selected for further screening. Bacteria with high S protein expression can be screened by a directed evolution strategy, and mass production of the bacteria can then be realized. Highly pure S protein can be isolated and extracted from the bacterial products for use as a vaccine. mRNA Vaccines. mRNA binds directly to ribosomes inside the cell and is translated into a peptide chain. Messenger RNA vaccines are prepared by selecting messenger RNAs capable of encoding viral antigenic proteins and injecting them into recipients. Mass production of the antigenic proteins is achieved through the recipient's own protein-synthesis machinery, and antibodies specific for the antigenic protein are formed by the immune system's response to that protein. The mRNA encoding the antigenic protein is not the only component of an mRNA vaccine. Because of the instability of RNA, RNA hydrolases are abundant in living organisms and in the environment, making it impossible for ordinary RNA to survive stably in the body for a long time. Therefore, the development of mRNA vaccines requires modification of the mRNA to prolong its effectiveness.
To prolong the life of mRNA, it is processed into either non-self-amplifying mRNA or self-amplifying mRNA. Self-amplifying mRNA vaccines: there is an arbovirus belonging to the alphaviruses (class A viruses), an RNA virus that can replicate independently in the human body and produce a strong immune response. Its structural genes and virulence-related genes can be knocked out by biotechnological means so that only its replication function is retained [9]. Subsequently, mRNAs encoding the S protein are integrated into this RNA virus, giving the mRNA the ability to self-replicate. Non-self-amplifying mRNA vaccines: this approach mainly modifies the mRNA to increase its stability in cells. A UTR structure is added to the 5′ end of the mRNA, and a poly(A) tail is added to the 3′ end. Such modified RNA binds RNA hydrolases poorly, which prolongs the residence time of the mRNA in the cell. Unlike self-amplifying mRNAs, non-self-amplifying mRNAs are not viruses and do not have the ability to self-infect or self-replicate. Therefore, non-self-amplifying mRNA vaccines cannot enter cells on their own and must be delivered accurately into the recipient cells to produce the target S protein. Conclusions and Prospects The development of vaccines and specific drugs takes a long time, ranging from tens of months to 5 years. The theoretical concepts of synthetic biology have gained popularity within a short period of time and provide a very powerful motivation for research. Unlike traditional biomedicine, the concept of synthetic biology allows the integration of multiple disciplines and the multifaceted analysis of novel coronaviruses, which can significantly facilitate vaccine development. However, COVID-19 will coexist with humans for a considerable period of time in the foreseeable future. Until a vaccine is developed, humans can only reduce the chance of virus transmission through social appeals and executive orders. Data Availability The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Conflicts of Interest The author declares no potential conflicts of interest.
2022-02-28T16:12:18.959Z
2022-02-26T00:00:00.000
{ "year": 2022, "sha1": "7f7cad303650f039b0a5b9f91e7d400c7be7d797", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jhe/2022/9311052.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "de2cabeec3af864423c8da58851a54b781fcbe04", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
119120752
pes2o/s2orc
v3-fos-license
Remarks on Higgs Mechanism for Gravitons We construct two kinds of model exhibiting Higgs mechanism for gravitons in potentials of scalar fields. One class of the model is based on a potential which is a generic function of the induced internal metric $H^{AB}$, and the other involves a potential which is a generic function of the usual metric tensor $g_{\mu\nu}$ and the induced curved metric $Y_{\mu\nu}$. In the both models, we derive conditions on the scalar potential in such a way that gravitons acquire mass in a flat Minkowski space-time without non-unitary propagating modes in the process of spontaneous symmetry breaking of diffeomorphisms through the condensation of scalar fields. We solve the conditions and find a general solution for the potential. As an interesting specific solution, we present a simple potential for which the Higgs mechanism for gravitons holds in any value of cosmological constant. Introduction Since the advent of the pioneering work of Fierz and Pauli [1], it has turned out that the theoretical problems involved for constructing a complete massive gravity theory are very subtle, challenging and even call for consideration beyond perturbation theory [2]. For instance, it is nowadays well known that there is no smooth massless limit in perturbation theory of massive gravity in the sense that the massless limit in massive gravity does exist but does not agree with the result predicted by Einstein's general relativity which describes massless gravitons. This mass discontinuity, which is dubbed as van Dam-Veltman-Zakharov (vDVZ) discontinuity [3,4], is a fatal defect in massive gravity theory since it produces a different value for the bending of light by the sun, that is, the value obtained from massive gravity is 3 4 of that from Einstein's general relativity and experiments, which implies that gravitons must be strictly massless in nature. The reason why there is no smooth massless limit is that there is a discrepancy of the number of dynamical degrees of freedom between massive and massless gravitational theories. Actually, in four space-time dimensions, we have five states of spins ±2, ±1 and 0 for massive gravitons whereas we have only two states of helicity ±2 for massless gravitons, so in the massless limit of massive gravitons, an extra helicity 0 state shows up in the spectrum, thereby breaking a smooth limit to massless ones. (Two states of spins ±1 decouples smoothly because of the current conservation.) The same problem is known to exist in non-abelian gauge theories though there is no such a problem in case of abelian gauge theories by the current conservation [2]. It is worthwhile to notice that this observation suggests us one resolution for the vDVZ discontinuity, namely a possible way out of this might be to match the number of the degrees of freedom in the both theories. Indeed, in case of massive non-abelian gauge theories, we can take a smooth massless limit by incorporating extra scalar fields and triggering vacuum condensation of them, which is nothing but the Higgs mechanism. Thus, it is very natural to pursue the analogy of massive gauge theories and then ask ourselves whether it is possible to construct the Higgs mechanism for gravitons in order to obviate the problem relevant to the absence of massless limit in massive gravity theories. Recently, interests on the construction of massive gravity theories have revived from different physical motivations [5]- [13]. 
One motivation comes from the astonishing observational fact that our universe is not just expanding but is at present in an epoch of undergoing an accelerating expansion [14,15,16]. Massive gravity theories might shed some light on this problem in the sense that they could modify Einstein's general relativity at large cosmological scales and might lead to the present accelerated expansion of the universe without assuming the existence of mysterious dark matter and dark energy. The other motivation for attempting to construct massive gravity theories is conceptual and is related to the noncritical string theory applied to quantum chromodynamics (QCD) [10]. For instance, as inspired in AdS/CFT duality, if we wish to apply a bosonic string theory to the gluonic sector in QCD, massless fields such as spin 2 graviton in string theory, must either become massive or be removed somehow by an ingenious dynamical mechanism since such the massless fields do not appear in QCD. A few years ago, 't Hooft proposed a new Higgs mechanism for gravitons where the massless gravitons 'eat' four real scalar fields and consequently become massive [10]. In his model, vacuum expectation values (VEV's) of the scalar fields are taken to be the four space-time coordinates by gauge-fixing diffeomorphisms, so the whole diffeomorphisms are broken spontaneously. Afterward, a topological term was added to the 't Hooft model where an 'alternative' metric tensor is naturally derived and the topological meaning of the gauge conditions was clarified [17] 2 . One serious drawback in the 't Hooft model is that a scalar field appearing after the SSB is a non-unitary propagating field so that in order to keep the unitarity the non-unitary mode must be removed from the physical Hibert space in terms of some procedure. This problem was solved by including higher-derivative terms in scalar fields and tuning appropriately the cosmological constant to be a negative value in Ref. [11] 3 . More recently, Chamseddine and Mukhanov have presented a new Higgs mechanism for gravitons also by adding a specific form of higher-derivative terms in scalar fields to the Einstein-Hilbert action [20]. One advantage of their model is that we do not have to restrict the cosmological constant to be negative, namely zero or positive cosmological constant is also allowed to trigger the gravitational Higgs mechanism. This model was later examined in [21] from the viewpoint of general models with an arbitrary potential of the induced internal metric. The aims of this article are the following: First, we construct two kinds of massive gravity models based on either the induced internal metric H AB or the usual metric tensor g µν and the induced curved metric Y µν . Although it seems that there are two distinct formulations of massive gravity models, the two formulations are in essence equivalent through identities of the metrics. Next, we derive conditions on the scalar potential in such a way that gravitons acquire mass in a flat Minkowski space-time without non-unitary propagating modes in the process of spontaneous symmetry breaking of diffeomorphisms. We solve the conditions and find a general solution for the potential. As an interesting specific solution, we present a simple potential for which the Higgs mechanism for gravitons holds in any value of the cosmological constant. This paper is organized as follows: In section 2, we construct a model of the Higgs mechanism for gravitons by using the induced internal metric H AB . 
In section 3, we also construct such a model by using the usual metric tensor g µν and the induced curved metric Y µν . In section 4, we present some concrete models satisfying the conditions derived so far. In particular, we present a simple model showing the Higgs mechanism for gravitons in any value of the cosmological constant in four dimensions. The final section is devoted to conclusions and discussion. Model based on the induced internal metric H AB In this section, we wish to construct a rather general model of Higgs mechanism for gravitons on the basis of the induced internal metric H AB in a general D-dimensional space-time. We start with the following action [21] 4 : Here G is the D-dimensional Newton's constant and the induced internal metric H AB is defined as where φ A are D real scalar fields with A = 0, · · · , D − 1, and the indices A, B, · · · are raised and lowered in terms of the Minkowski metric η AB = diag(−1, 1, · · · , 1) 5 . Finally, a priori V is a generic function of H AB . This general action gives us the following equations of motion: We are now interested in obtaining 'vacuum' solution of the form This vacuum solution is not static since one component of φ A , that is, φ 0 is essentially equivalent to time x 0 = t. The requirement of the presence of the vacuum solution leads to a constraint on the potential V where we have defined H AB * = η AB and omitted to write the indices AB on H AB * explicitly for simplicity. In other words, the equation (5) is a constraint imposed on the potential V in order to have a flat Minkowski space-time as the background. Next, we expand the fields around this vacuum (4) as and write out all the terms up to second order. At this stage, we gauge away the scalar fluctuations ϕ A by using diffeomorphisms. Of course, once we gauge away the D scalars, we can no longer gauge away any components of the gravitational fluctuations h µν . After setting ϕ A = 0, the linearized equations of motion for (3) read where we have used (5) and for simplicity the Einstein's tensor G µν ≡ R µν − 1 2 g µν R is not expanded around the Minkowski metric. Then, the strategy for finding appropriate potentials is to require that with a suitable choice of the potential the linearized equations of motion (7) reduce to a set of equations which are the same as those of Fierz-Pauli massive gravity [1]. This requirement is needed for excluding the appearance of the scalar ghost in the spectrum. In order to find general conditions which the potential must satisfy to become the Fierz-Pauli massive gravity, by taking account of the symmetry of indices and the fact that because of the Lorentz symmetry only the flat Minkowski metric is available as second-rank tensor in the linear order of the approximation, we can set where a 1 , a 2 are some constants and we have defined η µ(ρ η σ)ν ≡ 1 2 (η µρ η σν + η µσ η ρν ). Substituting Eq. (9) into Eq. (7), we obtain Then, comparing Eq. (10) with the Fierz-Pauli massive gravity (8), it turns out that the constants a 1 , a 2 must satisfy the relations Consequently, we have a general solution for the potential at H AB * = η AB which is given by To close this section, let us note that the cosmological constant is defined as Λ ≡ V (H AB = 0). Moreover, it is of importance to note that in the model of massive gravity under consideration, although diffeomorphisms are spontaneously broken via condensation of scalar fields, the Poincare symmetry is never broken. 
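For orientation, the two induced metrics referred to above and the Fierz–Pauli structure that the linearized equations are required to reproduce are conventionally written as follows. These are the standard forms used in this class of gravitational Higgs models; signs and overall normalizations are a matter of convention and may differ from those of the displayed equations cited in the text.

```latex
% Induced internal and curved metrics built from the D scalar fields
H^{AB} = g^{\mu\nu}\,\partial_\mu\phi^{A}\,\partial_\nu\phi^{B},
\qquad
Y_{\mu\nu} = \eta_{AB}\,\partial_\mu\phi^{A}\,\partial_\nu\phi^{B},
\qquad A,B = 0,\dots,D-1 .

% Vacuum configuration identifying the scalars with the coordinates
\phi^{A}_{*} = \delta^{A}_{\mu}\,x^{\mu},
\qquad g^{*}_{\mu\nu} = \eta_{\mu\nu}
\;\Longrightarrow\;
H^{AB}_{*} = \eta^{AB}, \quad Y^{*}_{\mu\nu} = \eta_{\mu\nu} .

% Trace identity underlying the equivalence of the two formulations
\eta_{AB}\,H^{AB} = g^{\mu\nu}\,Y_{\mu\nu} .

% Fierz--Pauli mass term (up to overall normalization) for the fluctuation h_{\mu\nu}
\mathcal{L}_{\rm FP} \propto -\frac{m^{2}}{4}\left(h_{\mu\nu}h^{\mu\nu} - h^{2}\right),
\qquad h \equiv \eta^{\mu\nu}h_{\mu\nu} .
```

With these definitions, demanding that the linearized equations of motion take the Fierz–Pauli form is what fixes the combinations of potential derivatives denoted a_1 and a_2 above.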
This is a nontrivial statement since there is in general a possibility such that in the Higgs phase of gravitation condensates of scalar fields could break Lorentz and/or translational symmetries. Alternative model based on g µν and Y µν In this section, we wish to construct an alternative general model of Higgs mechanism for gravitons based on the usual metric tensor g µν and the induced curved metric Y µν in a general D-dimensional space-time. We shall proceed in an almost identical fashion to the model treated in the previous section. Let us start with the following action: Here the induced curved metric Y µν is defined as and V is a generic function of both g µν and Y µν . From the action (13), one can easily derive the equations of motion: This time, the requirement of the presence of the vacuum solution (4) imposes the following constraint on the potential V : where we have defined g * µν = Y * µν = η µν . As before, after expanding the fields around the vacuum (4) up to quadratic order and taking the gauge conditions ϕ A = 0, the linearized equations of motion for (15) take the form where we have used (16). In order that the linearized equations of motion (17) coincide with the Fierz-Pauli massive gravity (8), the potential V must satisfy some contraints at g * µν = Y * µν = η µν . To find them, we assume the following relations: where b 1 , · · · , b 5 are some constants. Then, using Eq's. (8), (17) and (18), one is led to the conditions on the constants b 1 , · · · , b 5 Hence, together with Eq. (16), we have a general solution for the potential V where 1 2 b 1 + b 2 = 0. In other words, the model with any potential satisfying Eq. (20) gives rise to physically plausible massive gravity theories. Let us note again that the cosmological constant is defined as Λ ≡ V (Y µν = 0) as well. In this section, we have presented an alternative model of the Higgs mechanism for gravitons based on metrics g µν and Y µν . At first sight, it might appear that this new massive gravity model is different from the model made in the previous section. However, this is an illusion. In fact, the two kinds of models are essentially equivalent since we have identities such as η AB H AB = g µν Y µν , H AB H AB = Y µν Y µν etc. Although the two models are equal to each other, the model based on H AB is more convenient to handle than that based on g µν and Y µν in the sense that Eq's. (5) and (12) are simpler than Eq. (20). Thus, we shall make use of the model based on the induced internal metric H AB for presenting examples in the next section. Concrete models We are now ready to present some concrete models of the Higgs mechanism for gravitons by fixing the form of the potential V . Before doing so, let us recall relevant works done thus far. In Ref. [10], 't Hooft has advocated an idea of Higgs mechanism for gravitons where the massless gravitons 'eat' four real scalar fields and consequently become massive. One serious problem in the 't Hooft model is that a scalar field appearing after the SSB is a non-unitary propagating field, thereby violating the unitarity. This problem was afterward solved by including higher-derivative terms in scalar fields and tuning appropriately the cosmological constant to be a negative value in Ref. [11]. Recently, Chamseddine and Mukhanov have presented a new Higgs mechanism for gravitons also by adding a specific combination of higher-derivative terms in scalar fields to the Einstein-Hilbert action [20]. 
One advantage of their model is that we do not have to restrict the cosmological constant to be negative, namely zero or positive cosmological constant is also allowed to trigger the gravitational Higgs mechanism. Although the model by Chamseddine and Mukhanov is of interest, the potential which they have found is a bit tricky, higher order in H AB (in fact, sixth order) and has a special property V (H * ) = 0, so it might be more interesting if we could find more natural models with the lower-order terms in H AB which also exhibit the Higgs mechanism for gravitons in any value of the cosmological constant. In this section, we shall look for such a model. As the first model, we would like to deal with a model with the property that the cosmological constant takes any value except in two and four dimensions. The first model has quadratic terms with respect to the metric H AB : where H ≡ η AB H AB , Λ is the cosmological constant, and α 1 , α 2 , α 3 are constants to be determined shortly. The condition (5) gives rise to Moreover, Eq. (12) produces the relations Together with Eq's. (22) and (23), we can express the remaining α 1 by As a result, the potential reads Then, with the help of (25), putting H AB = H AB * = η AB leads to the value of the cosmological constant in this model Depending on the value of V (H * ), the cosmological constant Λ takes an arbitrary value. However, note that in particular, Λ = −3m 2 < 0 for D = 2, 4, so this model of the Higgs mechanism for gravitons holds only in case of the negative cosmological constant in our fourdimensional space-time, which is a unsatisfactory point of this model. Nevertheless, the model at hand gives us two useful informations. The one information is that our model includes the model constructed by Kakushadze in a specific case [11]. Indeed, his model corresponds to the case of α 2 = 0, that is, V (H * ) = m 2 , and then the cosmological constant is given by Λ = − D 2 +4D−8 8 m 2 and is always negative for D > 1 [11]. The other information extracted from our model is that we cannot construct a plausible massive gravity model by starting with the 't Hooft model. The 't Hooft model consists of only linear term of H in addition to the constant cosmological term, so in order to get the 't Hooft model, we have to set both α 2 and α 3 to be zero at the same time. But it is obviously impossible, thus meaning that the 't Hooft model is not free from the ghost mode. A problem associated with the first model (25) is that we can construct a reasonable massive gravity model only in case of the negative cosmological constant in four dimensions. Thus, next let us move on to the second model of the Higgs mechanism for gravitons, which is a slight generalization of the first model in that the potential now involves cubic terms in H AB , but the second model turns out to be free from the issue of the cosmological constant. The potential of the second model is given by where α 1 , · · · , α 6 are constants to be determined later. 
As in the first model, the conditions (5) and (12) lead to the relations among α i (i = 1, · · · , 6), but it turns out that they only determine α 3 , α 5 , α 6 in terms of remaining α 1 , α 2 , α 4 whose result reads Then, it is straightforward to calculate the cosmological constant defined as Λ ≡ V (H AB = 0) as before, which reads In this case, it is remarkable that since Λ = − 4 3 α 1 − m 2 for D = 4 we can construct a massive gravity model at any value of the cosmological constant by selecting out the value of α 1 in an appropriate way. Hence, owing to the existence of cubic terms, this model of the Higgs mechanism for gravitons holds irrespective of the signature of the cosmological constant in four dimensions. As a final model, for completeness, let us comment on a model presented recently by Chamseddine and Mukhanov in Ref. [20] in the framework of this article. The potential of their model is of sixth-order in H AB and is explicitly given by where e 1 , e 2 , e 3 , α, β are constants. Again, the conditions (5) and (12) determine the constants e 1 , e 2 , e 3 as where α − β = 0. Then, the cosmological constant given by Λ = −e 1 β reads In particular, for D = 4, we have which can certainly take any value depending on the values of α and β. Conclusion and discussions In this article, we have constructed two kinds of model exhibiting Higgs mechanism for gravitons where the one class is based on a potential of the induced internal metric H AB , and the other class involves a potential of the usual metric tensor g µν and the induced curved metric Y µν . Even if they appear to be different models at first sight, they are in fact equivalent because of identities among metrics. Furthermore, using the former model, we have explicitly presented a massive gravity model holding at any value of the cosmological constant. Incidentally, our formalism reminds us of bimetric theory of gravity which was made by Rosen [23] long ago. Actually we have the induced metric H AB or Y µν made out of scalar fields as well as the metric tensor g µν , and the induced metric plays a role of the order parameter in spontaneous symmetry breakdown. Namely, the induced metric H AB or Y µν constructed out of scalar fields condenses to produce mass for the metric tensor g µν . Related to this observation, our formalism might be concerned with strong gravity by Salam and Strathdee [24]. Moreover, it is useful to recall that at present we have an alternative mechanism to give mass to massless gravitons only in three dimensions. In three dimensions, the new massive gravity (NMG) provides us non-linear, parity invariant, and generally covariant massive gravitons by adding a conformal combination of curvature squared terms to the Einstein-Hilbert term [25]. This NMG turns out to be unitary at least at the tree level [26] and super-renormalizable [27]. Then, it is of interest to examine a relation between the NMG and the Higgs mechanism for gravitons. Indeed, for instance, we have already shown that the NMG cannot coexist with the Fierz-Pauli term [28], so it is an interesting question to investigate whether the NMG could coexist with the Higgs mechanism for gravitons at hand. What remains to be seen is an explicit calculation to demonstrate how the massless limit is attained in the framework of the Higgs mechanism for gravitons. The other interesting problem is to examine whether the present models are really consistent models or not. 
Recall that the classical treatment by Fierz and Pauli [1], although perfectly satisfactory for free fields, meets with difficulties when interactions are switched on. In the models at hand, the graviton mass is generated by a dynamical mechanism, i.e., the Higgs mechanism, so there may be a possibility of escaping this problem. We would like to report on these problems in the future. Recall that in the case of massive Yang-Mills theory, the Higgs mechanism made it possible not only to achieve a well-defined massless limit but also to obtain a renormalizable theory of massive vector gauge fields that stays in the weak coupling regime. In the case of gravity, on the other hand, the Higgs mechanism would only allow a smooth transition from massless to massive gravitons. Since the Einstein-Hilbert action is known to be unrenormalizable, the corresponding massive gravity theory is also unrenormalizable, and it might behave even worse in the ultraviolet region since we have added higher-derivative coupling terms of the scalar fields in the potential. In any case, quantum corrections to the massive gravity theory would occur at some cut-off scale, so the corrections could be ignored at energies much lower than the cut-off scale. A detailed study of massive gravity was carried out in the paper by Boulware and Deser [29]. While the theory of massive gravity is well defined at the linear level, at the non-linear level in four dimensions it acquires an extra sixth trace mode, which is in essence a ghost, in addition to the five degrees of freedom. We can write down several questions which were already raised about forty years ago [29]. • Does the massless limit exist and agree with general relativity? • Is the energy bounded from below? • Is flat space-time a locally stable equilibrium state? • When interactions are switched on, does the trace mode appear, thereby breaking unitarity? We believe that our models of the Higgs mechanism for gravitons could provide a resolution of the above-mentioned questions. A detailed analysis will be reported in a separate publication. We wish to close this article by confessing the philosophy behind this study: the concept of spontaneous symmetry breaking prevails and has had a considerable influence on the evolution of particle physics and condensed matter physics, so why should the most universal interaction, gravitation, not adopt such a beautiful concept in its theory?!
2010-04-19T02:07:54.000Z
2010-04-19T00:00:00.000
{ "year": 2010, "sha1": "00a15e135dcd40784ea4d52463663f184ce65e36", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1004.3078", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "00a15e135dcd40784ea4d52463663f184ce65e36", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
146054762
pes2o/s2orc
v3-fos-license
Watermelon cultivation in regeneration areas of a sugarcane field under different soil managements The objective of this work was to evaluate watermelon (Citrullus lanatus) cultivation in regeneration areas of a sugarcane field, under different soil management systems and N fertilization regimes. Two experiments were carried out in the 2014/2015 and 2015/2016 harvest seasons, in areas of sugarcane plantation in Andradina, in the state of São Paulo, Brazil. Cultivations were performed in a randomized complete block design, with plots and subplots, and four replicates. The plots represented the tillage systems (conventional, minimum tillage, and no-tillage), and the subplots, the different N fertilization rates (0, 100, 200, and 300 kg ha-1) applied as topdressing. In 2014/2015, the minimum tillage system resulted in the highest commercial yield of 70.2 Mg ha-1. In 2015/2016, there were no differences for yield among tillage systems; however, yield differed among N treatments. The highest commercial yields of 64.1 and 31.1 Mg ha-1 were achieved with the N doses of 253 and 209 kg ha-1 as topdressing, respectively, in 2014/2015 and 2015/2016. Watermelon can be cultivated in regeneration areas of sugarcane field, and the demand of N by the plant does not depend on the soil tillage system. Introduction Brazil is the world's largest producer of sugarcane, with the annual harvest of 684 million tons in the 2016/2017 harvest (Acompanhamento..., 2016).After planting, crop productivity decreases year after year, until replanting is done after an average of 5 years.This agricultural practice is known as crop rotation (Santiago & Rossetto, 2008).This practice is usually performed in periods of higher rainfall, and it involves processes such as grazing and subsoiling, which causes soil losses owing to erosion (Bolonhezi et al., 2007).The adoption of mechanized sugarcane harvesting stimulated the gradual elimination of fires that resulted in the deposition of a large amount of straw and dry matter on the soil surface.Most of the replanting areas remain unoccupied from the end of the harvest to the beginning of replanting, allowing of the implementation of planting practices for the conservation management (Bolonhezi et al., 2007) of sugarcane straw by crop rotation.Previous studies evaluated sugarcane crop rotation with peanuts (Bolonhezi et al., 2007;Barbosa et al., 2014), soybean (Finoto et al., 2015), and green manure (Ambrosano et al. 2011).However, to the best of our knowledge, no studies have evaluated crop rotation with vegetables until present.Therefore, watermelon (Citrullus lanatus) cultivation, which has a high socioeconomic importance in tropical regions, provides an option of encouraging the cultivation of food crops in sugarcane regeneration areas.Watermelon is one of the main cucurbits cultivated in Brazil; it is planted in a 94,375 ha area, and its yield is 2,171,448 Mg per year (FAO, 2014).Watermelon is grown in the state of São Paulo, primarily in sandy soils (Branco et al., 2014), in degraded pasture lands.Watermelon cultivation requires N as one of the most important nutrients whose deficiency limits this crop productivity (Grangeiro & Cecílio Filho, 2004;Leão et al., 2008).The recommended fertilizer application for watermelon growth in São Paulo state is from 80 to 130 kg ha -1 N (Trani et al., 1997) during the crop cycle. Few studies have evaluated the conservation management of watermelon cultivation.Rocha et al. 
(2011) compared the use of conventional tillage, minimum tillage, and no-tillage systems with a scarifier, and observed that the highest yield occurred in conventional tillage (126.5 Mg ha -1 ), whereas that in the no-tillage treatment was lower (74.1 Mg ha -1 ).In no-tillage systems, the presence of straw on the soil surface changes the physical, chemical, and biological soil properties, leading to an higher soil fertility and yield than those in the conventional tillage system (Bernoux et al., 2006;Tivelli et al., 2010).The use of straw in no-tillage systems facilitates the colonization by microorganisms, in contrast to conservation tillage systems (Simpósio..., 2007).Therefore, different soil tillage systems may differently affect the flow of water and nutrients, including N. Predicting N dynamics in soils under no-tillage systems is difficult because this dynamics depends on the rate of straw decomposition and on the amount of available N for the rotation crop.Therefore, the N concentration supply to the crop depends on the N level available in the soil.The amount of N released from the straw during the next cycle of sugarcane cultivation may vary from 3 to 30% (Vitti et al., 2008(Vitti et al., , 2010)).However, studies show that the N incorporation by the mineralization of straw increases significantly with longer cultivation periods, and decreases with shorter cultivation periods (Vitti et al., 2011;Fortes et al., 2013); this may be the case with watermelon cultivation.Because of changes in the physical, chemical, and biological soil properties under different tillage systems, complementary nutritional studies are necessary to ensure adequate rates and periods of N application to avoid excess or lack of nutrients to the crops. The objective of this work was to evaluate the watermelon cultivation in regeneration areas of a sugarcane field under different soil management systems and N fertilization regimes. Materials and Methods Two experiments were carried out during the 2014/2015 and 2015/2016 harvest seasons in sugarcane fields, in the municipality of Andradina, in the state of São Paulo, Brazil.The geographic coordinates of this municipality are 20°45' and 20°46'S and 51°22' and 51°22'W, at 379 m altitude. Both sugarcane crops, each in a 30 ha area, were replanted after the seventh consecutive harvest, when the yield of sugarcane 'RB 83 5486' was less than 60 Mg ha -1 .In the year before the installation of both experiments, weeds in the ratoon cane were managed in September with herbicides, using 1 L ha -1 clomazone, and 1.5 kg ha -1 diuron with hexazinone. After harvest, sugarcane crop regrowth was dried using glyphosate (3 kg ha -1 i.a.), and the soil was tilled.For watermelon planting, 'Olímpia' hybrid was manually sown on October 6, 2014 and October 6, 2015.Cultivation practices were adopted considering the needs of the crop, and irrigation was performed by sprinkling, using the central pivot system. 
Four replicates of each treatment were arranged in a randomized complete block design.The main plots, with 192 m 2 , included three tillage systems: conventional, minimum, and no-tillage.Conventional tillage involved the incorporation of sugarcane straw into the soil, after harvest, using a disk plow, and later, a disk harrow.Minimum tillage was performed by subsoiling at 0.60 m soil depth in the crop rows, and no-tillage, by opening crop rows to 0.03 m soil depth using a seeder for direct seeding of grains.In the minimum tillage and no-tillage systems, most of the area was covered with sugarcane straw.The subplots corresponded to different N concentrations (0, 100, 200, and 300 kg ha -1 ) in the soil cover, in the form of ammonium nitrate (33.5% N), divided into three treatments at 15, 30, and 50 days after plant emergence.Spacing was 3.0 m between rows and 1.0 m between plants within each row.Each subplot was composed of two rows of eight plants each, with 8.0 m length by 6.0 m width, totaling 48.0 m 2 .A border plant was left at each end of the subplots, and 10.0 m space was left between the plots to prevent the main rows from overlapping. One week before watermelon planting, 10 sugarcane straw samples were collected from the experimental area where straw had not been incorporated in the soil.This procedure was repeated biweekly until 75 days after sowing (DAS), next to the experimental area.The straw samples were collected, using a 0.25 m 2 template (0.5x0.5 m), and dried in a forced-circulation air oven at 65°C, for determining the mass of straw dry matter.These measurements were subjected to regression analysis. The following production characteristics were evaluated: total and commercial number and total fruit per plant, diameter and length of commercial fruit (m), total and commercial production of fruit (grams per plant), and total and commercial yield (Mg ha -1 ) of fruit.The data were subjected to the analysis of variance and F-test.The effect of N concentrations was evaluated using the regression analysis, and the criterion for choosing the model was the magnitude of the coefficients of regression.Tukey's test was used for analyzing the data on the soil preparation, at the 5% probability.Data obtained from the two harvests were analyzed in combination, when the relationship between the residual mean squares of the individual analyses of variance of each crop did not exceed the 7:1 ratio (Banzatto & Kronka, 2006).Software Sisvar version 5.3 (Ferreira, 2011) was used in the statistical analyses. Results and Discussion In the 2014/2015 and 2015/2016 harvest seasons, the conditions of the tests (genetic material, cultivation practices, soil and relief classifications) were similar, but the climatic conditions differed.In the 2014/2015 season, rainfall was 327.6 mm, which was similar to the historical average of the region, whereas in the 2015/2016 season, rainfall was 715 mm, which was 78% higher than the historical average of the region in the period (CIIAGRO, 2016).The increased precipitation limited the control of fungal diseases and caused crop yield decrease (Figure 1).The mean maximum and minimum temperatures varied from 20.9 to 33.3°C in the first year, and from 20.8 to 34.5°C in the second year; these values are similar to the historical average of the region in the period (18.7-32.7°C)(CIIAGRO, 2016). 
One week before watermelon sowing, the mean yield of straw dry matter in the experimental areas, in 2014/2015 and 2015/2016 seasons, was 14.5 and 15.8 Mg ha -1 , respectively.These values are close to those (8-20 Mg ha -1 ) found in the literature (Vitti et al., 2008;Trivellin et al., 2013). The dry matter yield of sugarcane straw decreased over time in the biweekly harvests and fitted to the linear functions (Figure 2), indicating a decomposition rate of up to 9.4 and 7.5 Mg ha -1 straw dry mass, at 75 days after sowing, in the 2014/2015 and 2015/2016 harvests, respectively. There was no significant interaction between soil tillage and N concentrations applied to the soil cover (p<0.05) in the two evaluated seasons.Therefore, the demand of N in watermelon production was the same, regardless of the soil tillage system adopted, with or without the incorporation of sugarcane straw.Farinelli et al. (2006) and Farinelli & Lemos (2012) evaluated N fertilization in the soil cover in bean and maize crops, using conventional tillage and no-tillage treatments, respectively, and showed that beans required higher N concentrations in the no-tillage system because of the decreased straw decomposition.In contrast, the agronomic and nutritional characteristics of maize did not differ between tillage systems and N concentrations used, although the conventional tillage had caused higher losses of N, and N application to the soil cover had promoted an increase in the incorporation of this nutrient by maize.There was a significant effect of the treatments when they were analyzed independently.In the 2014/2015 season, tillage strongly affected the production characteristics of watermelon, except for the total and the commercial number of fruit, with 2.3 and 1.5 fruit per plant, respectively.In minimum tillage, the mean fruit diameter and length were 0.82 and 0.48 m, respectively, which differed statistically from the means found for conventional tillage (0.76 and 0.46 m) and no-tillage (0.79 and 0.47 m) (Table 1).Moreover, the total and commercial production and total and commercial yield varied among the treatments, and the highest means were observed in the minimum tillage system, corresponding to 23.0 and 21.0 g per plant, and 76.8 and 70.2 Mg ha -1 , respectively.Commercial productivity in the minimum tillage system was 45 and 36% higher than that in the conventional tillage and no-tillage, respectively (Table 1).In the 2015/2016 season, the tillage system did not affect the evaluated production characteristics.The 2015/2016 season showed the following results: the mean total and commercial number of fruit per plant was 1.6 and 0.76 fruit, respectively; fruit diameter and length were 0.73 and 0.42 m, respectively; total and commercial production were 9.4 and 6.7 kg per plant, respectively; and total and commercial yield were 31.5 and 22.3 Mg ha -1 , respectively.Therefore, in the 2014/2015 season, the minimum tillage system showed the best results, with a mean commercial yield of 70.2 Mg ha -1 .By contrast, in the 2015/2016 season, there were no significant differences between the evaluated tillage systems, and the mean commercial yield was 22.3 Mg ha -1 . Other studies reported different results for watermelon cultivation under different tillage systems.Rocha et al. 
(2011) compared watermelon crops cultivated on oat straw using conventional, minimum, and no-tillage systems, and performed scarification with one to four disks. They concluded that the conventional tillage system was the best treatment, with a mean crop yield of 126.5 Mg ha-1. In this case, the higher soil mobilization increased the root surface area, resulting in a more homogeneous, horizontal, and deeper root system. Silva et al. (2013) evaluated weed management of watermelon under no-tillage and conventional tillage systems. They concluded that the no-tillage system provided higher yield than the conventional tillage. Branco et al. (2014) compared watermelon cultivated on clover and oat straw under minimum tillage and no-tillage conditions, and observed that root development was restricted in the no-tillage system. However, commercial yield was similar between the treatments (27.4 Mg ha-1), except for no-tillage using oat straw (17.9 Mg ha-1). Table 1. Total number of fruit per plant (TNF), number of commercial fruit per plant (NCF), fruit diameter, fruit length, total production (TP), and commercial production (CP), total yield, and commercial yield (CY) of watermelon 'Olímpia' (Citrullus lanatus), according to the tillage system used in the 2014/2015 and 2015/2016 harvest seasons(1). The combined analysis of the crops indicated that the production variables were affected by the planting season, except for the fruit diameter and length, which did not meet the conditions for this type of analysis. The variables evaluated in 2014/2015 were approximately two-fold higher than in 2015/2016 (Table 2). This difference can be attributed to adverse climatic conditions, particularly the rainfall in the 2015/2016 season. In the 2014/2015 season, the highest fruit diameter and length, commercial production, and yield were affected by the different N concentrations, which fitted the quadratic regression model (Figure 3). In the 2015/2016 season, all the production variables were affected by the different N levels, except for the total number of fruit. The mean values of these variables also fitted the quadratic regression model. In the 2014/2015 season, the highest fruit diameter and length, and the estimated commercial production and yield were 0.81 m, 0.48 m, 18.9 kg per plant, and 64.1 Mg ha-1, respectively, and these values were obtained using 195, 178, 241, and 253 kg ha-1 N applications to the soil cover, respectively. In the 2015/2016 season, the highest total production and yield were 11.4 kg per plant and 37.5 Mg ha-1, respectively, using 233 and 185 kg ha-1 N applications to the soil cover, respectively. In the 2015/2016 season, the highest estimated number of commercial fruit, fruit diameter and length, and commercial production and yield were 1.0 fruit per plant, 0.78 m, 0.45 m, 8.9 kg per plant, and 31.1 Mg ha-1, respectively, using 195, 219, 227, 174, and 209 kg ha-1 N applications to the soil cover, respectively. In the state of São Paulo, the official recommendation for watermelon cultivation is a total application of 80 to 130 kg ha-1 N (Trani et al., 1997). Barros et al. (2012) studied the effect of different N concentrations in the planting of watermelon 'Crimson Sweet'. They found that the highest yield (40.4 Mg ha-1) was achieved using 144.4 kg ha-1 N, and this value is close to the official recommendation. In contrast, Soares et al.
(2002) analyzed different doses of N for the growth of watermelon 'Crimson Sweet', and found that the highest yield (64.9 Mg ha-1) was obtained using 229.8 kg ha-1 N, which is close to the value found in the present study. Therefore, in the present work, the mean total N level (fertilization at planting and in the soil cover) that resulted in the highest yield was 93% higher than the highest recommended concentration, and some of the applied N was probably leached, volatilized, or immobilized by microorganisms. The amount of N leached depends particularly on the amount of applied N, the type of soil, and the volume of precipitation (Nielsen et al., 1982). The residual sugarcane straw promotes changes in the production environment and increases the C:N ratio. Fortes et al. (2012) observed that the C:N ratio of sugarcane straw initially varied from 70:1 to 108:1 but decreased over the harvests. The release of N from straw is slow, and its use by the subsequent crop is also low, according to Vitti et al. (2008), primarily because of the high initial C:N ratio of sugarcane straw, which favors the immobilization of N by soil microorganisms (Jingguo & Bakken, 1997). Therefore, as compared with the official recommendations, the high N concentrations required for higher productivity in the present work may be related to the competition for this nutrient between plants and microorganisms. Figure 1. Rainfall, and maximum and minimum air temperatures recorded in two harvest seasons: A, 2014/2015; and B, 2015/2016, in the municipality of Andradina, in the state of São Paulo, Brazil. Figure 2. Effect of the collection time on the dry matter amount of sugarcane straw in the 2014/2015 and 2015/2016 harvest seasons, in the municipality of Andradina, in the state of São Paulo, Brazil. Figure 3. Effect of N concentrations in the soil cover on the total and commercial number of fruit (A, B), fruit length (C), fruit diameter (D), total and commercial production (E, F), and total and commercial yield (G, H) of watermelon 'Olímpia' (Citrullus lanatus), in the 2014/2015 and 2015/2016 harvest seasons (equations 1 and 2, respectively), in the municipality of Andradina, in the state of São Paulo, Brazil. Means followed by equal letters do not differ by Tukey's test, at 5% probability. * and ** Significant at 5 and 1% probability, respectively. ns: nonsignificant.
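The N-rate optima reported above for each response variable (for example, 253 and 209 kg ha-1 N for the highest commercial yields in the two seasons) correspond to the vertex of the fitted quadratic curve. The following is a minimal sketch of that fit-and-vertex calculation; the yield values are hypothetical placeholders, not the study's measurements, and numpy is used only to illustrate the step.

```python
import numpy as np

# Hypothetical mean commercial yields (Mg ha-1) at the four N doses (kg ha-1).
n_doses = np.array([0.0, 100.0, 200.0, 300.0])
yields = np.array([48.0, 59.0, 64.0, 61.0])   # placeholder values

# Fit y = a*x^2 + b*x + c (quadratic response model).
a, b, c = np.polyfit(n_doses, yields, deg=2)

# The optimum dose is the vertex of the parabola, x* = -b / (2a),
# valid when a < 0 (concave response curve).
n_opt = -b / (2.0 * a)
y_opt = a * n_opt**2 + b * n_opt + c
print(f"estimated optimum: {n_opt:.0f} kg ha-1 N, yield {y_opt:.1f} Mg ha-1")
```

In the study itself, the fit and the choice of the quadratic model were made with analysis-of-variance software (Sisvar), but the vertex calculation behind the reported optima is the same.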
2019-05-07T13:40:58.344Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "566ba32db89cbfff5312f768f2ef51ccf4838df9", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/pab/v54/1678-3921-pab-54-e00039.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "566ba32db89cbfff5312f768f2ef51ccf4838df9", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
16717285
pes2o/s2orc
v3-fos-license
An open label, dose response study to determine the effect of a dietary supplement on dihydrotestosterone, testosterone and estradiol levels in healthy males Background Maintaining endogenous testosterone (T) levels as men age may slow the symptoms of sarcopenia, andropause and decline in physical performance. Drugs inhibiting the enzyme 5α-reductase (5AR) produce increased blood levels of T and decreased levels of dihydrotestosterone (DHT). However, symptoms of gynecomastia have been reported due to the aromatase (AER) enzyme converting excess T to estradiol (ES). The carotenoid astaxanthin (AX) from Haematococcus pluvialis, Saw Palmetto berry lipid extract (SPLE) from Serenoa repens and the precise combination of these dietary supplements, Alphastat® (Mytosterone(™)), have been reported to have inhibitory effects on both 5AR and AER in-vitro. Concomitant regulation of both enzymes in-vivo would cause DHT and ES blood levels to decrease and T levels to increase. The purpose of this clinical study was to determine if patented Alphastat® (Mytosterone(™)) could produce these effects in a dose dependent manner. Methods To investigate this clinically, 42 healthy males ages 37 to 70 years were divided into two groups of twenty-one and dosed with either 800 mg/day or 2000 mg/day of Alphastat® (Mytosterone(™)) for fourteen days. Blood samples were collected on days 0, 3, 7 and 14 and assayed for T, DHT and ES. Body weight and blood pressure data were collected prior to blood collection. One-way, repeated measures analysis of variance (ANOVA-RM) was performed at a significance level of alpha = 0.05 to determine differences from baseline within each group. Two-way analysis of variance (ANOVA-2) was performed after baseline subtraction, at a significance level of alpha = 0.05 to determine differences between dose groups. Results are expressed as means ± SEM. Results ANOVA-RM showed significant within group increases in serum total T and significant decreases in serum DHT from baseline in both dose groups at a significance level of alpha = 0.05. Significant decreases in serum ES are reported for the 2000 mg/day dose group and not the 800 mg/day dose group. Significant within group effects were confirmed using ANOVA-2 analyses after baseline subtraction. ANOVA-2 analyses also showed no significant difference between dose groups with regard to the increase of T or the decrease of DHT. It did show a significant dose dependant decrease in serum ES levels. Conclusion Both dose groups showed significant (p = 0.05) increases in T and decreases in DHT within three days of treatment with Alphastat® (Mytosterone(™)). Between group statistical analysis showed no significant (p = 0.05) difference, indicating the effect was not dose dependent and that 800 mg/per day is equally effective as 2000 mg/day for increasing T and lowering DHT. Blood levels of ES however, decreased significantly (p = 0.05) in the 2000 mg/day dose group but not in the 800 mg/day dose group indicating a dose dependant decrease in E levels. Background The process of male aging is associated with a slow progressive decrease in serum testosterone (T) levels as a result of decreased production. It has been designated as T deficiency syndrome (TDS) by the International Society of Andrology (ISA), International Society for the Study of the Aging Male (ISSAM) and European Association of Urology (EAU) [1][2][3]. Its clinical features can be described as a combination of sarcopenia and andropause leading to a decline in physical performance [4]. 
Maintaining endogenous T levels as the male ages may slow the symptoms of sarcopenia, characterized by muscle erosion, loss of muscle strength and bone mineral density. It may also play a key role in eliminating the symptoms of andropause, characterized by sexual dysfunction, lack of energy, increased cognitive impairment and decreased general well being [5][6][7]. Avoiding or slowing these symptoms is important for staying healthy and competitive as life expectancy increases. Many of them can be alleviated by T replacement therapy, administered either transdermally or by injection; however, such therapies are available only by prescription because of the potential risks of this treatment [5]. T is converted to either dihydrotestosterone (DHT) by the enzyme 5α-reductase (5AR) or estradiol (ES) by the aromatase (AER) enzyme. Recent studies have shown that DHT administered to orchidectomized mice resulted in increased high-density lipoprotein cholesterol and triglyceride levels [8]. DHT is also implicated in the etiology of benign prostate hyperplasia (BPH), which is regarded as a global health problem in men over 50 years of age [9]. Elevated levels of DHT, prostate specific antigen (PSA) and symptoms of BPH are commonly treated with a 5AR inhibitor, which reduces the amount of T converted to DHT [10]. However, 5AR inhibitors produce increased blood levels of ES and have been reported to cause symptoms of gynecomastia [11]. Therefore, a dietary supplement that increases endogenous levels of T while decreasing levels of DHT and ES may be very useful for maintaining physical performance and alleviating the conditions of andropause and sarcopenia while decreasing the risk of BPH. Recent in-vitro studies report that the carotenoid Astaxanthin (AX) from Haematococcus pluvialis is a more potent inhibitor of 5AR than the Saw Palmetto berry lipid extract (SPLE) from Serenoa repens. When AX is combined with SPLE in specific amounts as (Mytosterone(™)), it showed a greater inhibition of 5AR than SPLE alone [12]. Further, in-vitro assays show that AX inhibits the AER enzyme, which converts T to ES [13]. Therefore, we hypothesize that a precise combination of AX and SPLE, Alphastat ® (Mytosterone(™)), would regulate the inhibitory activity of both the 5AR and AER enzymes in-vivo. Concomitant regulation of both enzymes would cause DHT and ES blood levels to decrease and T levels to increase in a dose dependent manner. To investigate this, two groups of 21 healthy male participants between the ages of 37 and 70 years were dosed with either 800 or 2000 mg of Alphastat ® (Mytosterone(™)) per day over a two-week period. Serum levels of T, DHT and ES were analyzed to determine changes in hormone levels. Study design This was a single-centre, prospective, open-label, dose-comparison clinical study. Two groups of healthy male subjects were dosed with either 800 mg/day or 2000 mg/day of a precise combination of AX and SPLE (Alphastat ® (Mytosterone(™)), Triarco Industries, Wayne, NJ) for 14 consecutive days. Subjects were assigned to dose groups in an alternating manner based on arrival time. Serum levels of T, DHT and ES were used to investigate differences between the two dose groups. Since the purpose of the study was to compare the differences between a high dose and a low dose from baseline values, a control group was not used. Blood samples were drawn at approximately the same time of day to minimize any diurnal variations in hormone levels. 
Subjects Two groups of twenty-one healthy, males ages 37-70 years volunteered for this study. The ages of the 800 mg/ day dose group ranged from 37 years to 67 years with an average of 55.6 years and body weight ranged from 68.4 kg to 102.5 kg with an average of 80.3 kg. The ages of the 2000 mg/day dose group ranged from 53 years to 70 years with an average of 61 years and body weight ranged from 67.9 kg to 95.3 kg with an average of 84.8 kg. Inclusion criteria required subjects to have a PSA lower than 10.1 ng/ml, and pass a compliance screening test and a health screen for malaria, diabetes and ailments mentioned in the exclusion criteria. Exclusion criteria included any cases of moderately severe co-morbid disease including cardiac, pulmonary, renal, hepatic, or active cancer, BPH with complications, abnormal digital rectal examination (DRE), active prostate cancer or history of acute bacterial prostatitis. Also excluded were individuals taking, or who had taken within the past 30 days, any dietary supplements containing saw palmetto or astaxanthin. There were no food or exercise requirements or restrictions. Participants were advised and encouraged to maintain their current dietary intake and exercise habits throughout the study period to prevent changes in either from interfering with the study results. This study was approved by the University of Yaoundé I Teaching Hospital Review Board and the Cameroon Research Ethics Committee. All subjects enrolled in the study completed a written Institutional Review Board (IRB)-approved informed consent and were examined by an urologist at the hospital. Dose protocol The low dose group (n = 21) ingested two 400 mg capsules per day, one in the morning and one in the evening. The high dose group (n = 21) ingested five 400 mg capsules per day, 2 in the morning, 1 in the afternoon and 2 in the evening. All capsules were taken with 8-12 oz of water. Diet There were no diet protocol or diet log requirements however, participants were advised and encouraged to maintain their current dietary intake habits throughout the study period in order to prevent changes in dietary intake confounding the study results. Sample collection Blood samples were collected between 8 am and 10:30 AM on the sampling days. Sample collection was conducted at the hospital using red capped Vacutainer ® tubes and centrifuged to produce serum. All tubes were coded, refrigerated and sent to the Laboratory of Nutrition and Nutritional Biochemistry for analyses. Serum analyses Serum levels of T, DHT, and ES were determined using separate commercial ELISA test kits (Alpha Diagnostics, San Antonio, USA). All reference standards, controls and serum samples were dispensed at room temperature. All test kits have been designed and tested for human serum samples. DHT determination in serum A 50 μL aliquot of each standard, control and sample were dispensed into separate antibody coated wells of the strip plate. DHT -Horseradish peroxidase (HRP) conjugate solution (100 μL) was then dispensed into each well, the plates covered and incubated at room temperature for 60 minutes with gentle shaking. Following incubation the plate was aspirated and washed three times with approximately 300 μL diluted wash buffer. HRP-substrate solution, Tetramethylbenzidine (TMB) (150 μL) was then added into each well, the plates covered and incubated at room temperature for 10 minutes with gentle shaking (approximately 200 rpm) to develop a blue color. 
Stopping solution (50 μL) was then added into each well and the plates mixed gently as the blue color turned yellow. The concentration of each well was determined by absorbance measured at 450 nm. Within run variations in control values of greater than +/-20% required reanalysis. The specificity of DHT ELISA kit was determined by measuring interference from high concentrations of the following: DHT 100%; T, 8.7%; 5-beta-DHT, 2%; Androstenedione, 0.2%; Dehydroepiandrosterone sulfate, 17-beta-estradiol, Estriol, Estrone, Progesterone, 17-OH-Progesterone, Cortical pregnenolone < 0.01%. ES determination in serum A 50 μL aliquot of each standard, control and sample were dispensed into separate anti-Mouse IgG coated wells of the strip plate. ES-HRP conjugate solution (100 μL) was then added into each well. The plate was covered and incubated at room temperature for 60 minutes with gentle shaking. Following incubation the plate was aspirated and washed three times with approximately 250 μL diluted wash buffer. HRP substrate, TMB solution (150 μL) was added to each well, the plates covered and incubated at room temperature for 10 minutes with gentle shaking to develop a blue color. Stopping solution (50 μL) was then added into each well and the plates mixed gently as the blue color turned yellow. The concentration of each well was determined by absorbance measured at 450 nm. Within run variations in control values of greater than +/-20% required reanalysis. The ES antibody used in this kit is very sensitive and specific. The following compounds were tested for cross reactivity of the assay: ES (100%), Estriol and Estrone (1%), Progesterone, and Cortisol (0.1%). Total T determination in serum A 10 μL aliquot of each standard, control and sample were dispensed into the appropriate wells and rabbit polyclonal antibody solution into all wells of the strip plate. Enzyme conjugate solution (50 μL) was then added to each well. The plate was covered and incubated at room temperature for 60 minutes with gentle shaking. Following incubation the plate was aspirated and washed 3 times with approximately 250 μL diluted wash buffer. HRP substrate Solution A (100 μL) and HRP substrate Solution B (100 μL) was added to each well. The plate covered and incubated at room temperature for 30 minutes with gentle shaking. Stopping solution (50 μL) was then added into each well and the plates mixed gently as the blue color turned yellow. The concentration of each well was determined by absorbance measured at 450 nm. Within run variations in control values of greater than +/-20% required reanalysis. The rabbit polyclonal antibody used in this kit is very sensitive and specific for T. The following compounds were tested for cross-reactivity of the assay: T (100%), 5-a-dihydrostestosterone (9.6%), Androstenedione (1.7%), 11-oxystestosterone (1.5%), Epiandrosterone (0.06%). The following compounds had negligible cross-reactivity: 5-beta-DHT, 5-a-androstan-3-a, 17bestradiol, 17b-Diol, 5-a-androstan-3, 17 dione, Androsterone, Cortisol, Dehydroepiandrosterone, Estriol, Estrone, Progesterone, Corticosterone, Danazol, 11-bhydroxytestosterone. Statistical analysis All significance and power testing on results was done at a level of alpha = 0.05. Within group analyses was performed on T, DHT and ES levels between baseline and each time point, within each dose group using one-way repeated measures analysis of variance (ANOVA-RM). 
Two-way analysis of variance (ANOVA-2) was performed on T, DHT and ES levels between each dose group after baseline subtraction. Results are expressed as means ± SEM. Statistics were performed using a commercially available software program (Origin ® for Windows, version 8.0). Body weight and tolerance The mean baseline body weight was 80.3 kg in the 800 mg/day dose group and 84.8 kg in the 2000 mg/day dose group. There was no significant change in mean body weight over the 14-day treatment period in either dose group. Both doses of Alphastat ® (Mytosterone(™)) were well tolerated and no adverse events were reported. Blood pressure The mean baseline systolic blood pressure (SBP) was 142 mmHg in the 800 mg/day dose group and 137 mmHg in the 2000 mg/day dose group. No significant (p = 0.05) changes in mean SBP from baseline values were reported in the 800 mg/day dose group. In the 2000 mg/day group, SBP was reported to be significantly (p < 0.05) below baseline values on day 3, day 7 and day 14. The mean baseline diastolic blood pressure (DBP) was 71.5 mmHg in the 800 mg/day dose group and 66.9 mmHg in the 2000 mg/day dose group. DBP was reported to be significantly (p < 0.05) below baseline values on day 7 and day 14 in both the 800 mg/day dose group and the 2000 mg/day dose group. Serum T levels The mean baseline level of serum total T in the 800 mg/day dose group of 21.64 nmol/L was significantly (p = 0.05) different than the baseline level of 26.26 nmol/L in the 2000 mg/day dose group. ANOVA-RM of the 800 mg/day dose group showed the mean level of T was significantly (p < 0.05) greater than the mean baseline level at day 7 and day 14. ANOVA-RM of the 2000 mg/day dose group showed the mean level of T was significantly (p < 0.05) greater than the mean baseline level at day 3, day 7 and day 14. ANOVA-2 comparison, after baseline subtraction, between the 800 mg/day dose group and the 2000 mg/day dose group showed the interaction between time points to be significant (p = 0.05), reaching a statistical power of 1.0. Comparison of means between dose groups showed no significant (p = 0.05) interaction, reaching a statistical power of 0.51 (Figure 1). Figure 1 Serum Total Testosterone. Effect of Alphastat ® (Mytosterone(™)) on total testosterone levels (nmol/L). Values are means (+/-SEM), * indicates significant difference from baseline value (p ≤ 0.05). Serum DHT levels The mean baseline level of serum DHT in the 800 mg/day dose group of 2.79 nmol/L was significantly (p = 0.05) different than the baseline level of 2.34 nmol/L in the 2000 mg/day dose group. ANOVA-RM of the 800 mg/day dose group showed the mean level of DHT was significantly (p < 0.05) less than the mean baseline level at day 3, day 7 and day 14. ANOVA-RM of the 2000 mg/day dose group showed the mean level of DHT was significantly less (p < 0.05) than the mean baseline level at day 3, day 7 and day 14. ANOVA-2 comparison, after baseline subtraction, between the 800 mg/day dose group and the 2000 mg/day dose group showed the interaction between time points to be significant (p = 0.05), reaching a statistical power of 1.0. Comparison of means between dose groups showed no significant (p = 0.05) interaction, reaching a statistical power of 0.46 (Figure 2). Serum ES levels The mean baseline level of serum ES in the 800 mg/day dose group of 21.49 pmol/L was not significantly (p = 0.05) different than the baseline level of 23.94 pmol/L in the 2000 mg/day dose group. 
ANOVA-RM of the 800 mg/day dose group showed the mean level of ES was significantly (p < 0.05) less than the mean baseline level at day 7. ANOVA-RM of the 2000 mg/day dose group showed the mean level of ES was significantly less (p < 0.05) than the mean baseline level at day 3, day 7 and day 14. ANOVA-2 comparison, after baseline subtraction, between the 800 mg/day dose group and the 2000 mg/day dose group showed the interaction between time points to be significant (p = 0.05), reaching a statistical power of 0.73. Interaction between dose groups was also significant (p = 0.05), reaching a statistical power of 1.0 (Figure 3). Discussion ANOVA-RM analyses of the results show significant increases in serum total T and significant decreases in serum DHT in both dose groups, as well as significant decreases in serum ES in the 2000 mg/day dose group, at a significance level of alpha = 0.05. These significant differences from baseline were confirmed using ANOVA-2 analyses after baseline subtraction. These results of increased T levels with concurrent decreases in DHT and ES support earlier in-vitro mechanism reports that the 5AR and AER enzymes are inhibited by AX and SPLE [12,13]. They also support the hypothesis that a precise combination of AX and SPLE, Alphastat ® (Mytosterone(™)), may regulate a concomitant inhibitory activity of both enzymes in-vivo. The doses used in this study, however, did not result in a clear dose dependent effect. ANOVA-2 analyses show no significant difference between dose groups with regard to the increase of T or the decrease of DHT. However, they did show a significant dose dependent decrease in serum ES levels. The lack of a dose response with regard to T and DHT indicates the maximum effect of Alphastat ® (Mytosterone(™)) is obtained at the 800 mg/day dose. Further studies using lower doses must be conducted to establish a dose response and maximum effect level regarding T and ES. A dose of 2000 mg/day may be necessary if decreases in ES levels are desired. This effect may have an application in women who are prone to estrogen-dependent breast cancer and requires further exploration. The results also suggest that Alphastat ® (Mytosterone(™)) is equally effective in men between the ages of 37 and 70 for increasing endogenous total serum T levels and decreasing serum DHT levels without increasing ES levels. The 800 mg/day dose group ranged from 37 to 67 years with a mean of 55.5 years and 24% under the age of 50. The 2000 mg/day dose group ranged from 53 to 70 years with a mean of 61 years and no one under the age of 50. Confirmation of this by further studies is important since mean total T levels have been reported to decrease by 30% between the ages of 25 and 75 [14]. The decline in T as a result of male aging is designated as TDS by several international societies and is described as a combination of sarcopenia and andropause [1-3]. Its clinical features are characterized by muscle erosion, loss of muscle strength and bone mineral density, as well as increased sexual dysfunction, lack of energy, increased cognitive impairment and decreased general well being [5][6][7]. Avoiding or slowing these symptoms is important for staying healthy and competitive as life expectancy increases. The results of this study indicate that Alphastat ® (Mytosterone(™)) may help maintain endogenous T levels, or reverse their decline, in the aging male without increasing ES levels. 
It may also be beneficial in males prone to developing symptoms of BPH due to elevated DHT levels. BPH is regarded as a global health problem, with approximately 50% of men over age 50 reporting symptoms [9,10]. Alphastat ® (Mytosterone(™)) may regulate DHT levels without the side effects of increased ES levels reported with the use of prescription drugs that inhibit 5AR. Conclusion The precise combination of AX and SPLE, Alphastat ® (Mytosterone(™)), produced significant changes in serum T, DHT and ES levels. A dose of either 800 mg/day or 2000 mg/day produced significant increases in T and decreases in DHT within three days, with no increases in ES. The effect was not dose dependent, indicating that the 800 mg/day dose is as effective as 2000 mg/day in the age range of subjects studied. Blood levels of ES also decreased significantly and in a dose dependent manner, indicating the 2000 mg/day dose is more effective than the 800 mg/day dose. There were no outward signs of toxicity or adverse reactions. These data provide support for each mechanism of action observed in-vitro and suggest a potential role for its use in aging men experiencing TDS or symptoms of BPH. Competing interests This study was funded by Triarco Industries, Inc. (Wayne, NJ) through the contract research organization (CRO) Gateway Health Alliances, Inc. All research was conducted independently and according to protocol at the Laboratory of Nutrition and Nutritional Biochemistry, University of Yaounde I, Cameroon. None of the researchers has a financial interest in the outcome of this investigation, and the results do not constitute an endorsement by the authors and/or their institutions of the ingredient tested. Figure 3 Serum Estradiol. Effect of Alphastat ® (Mytosterone(™)) on serum estradiol levels (pmol/L). Values are means (+/-SEM), *indicates significant difference from baseline value (p ≤ 0.05).
2018-05-08T18:10:08.354Z
2008-01-01T00:00:00.000
{ "year": 2008, "sha1": "403cd9aa40c97914704168506271b4b2fdf0db11", "oa_license": "CCBY", "oa_url": "https://jissn.biomedcentral.com/track/pdf/10.1186/1550-2783-5-12", "oa_status": "GOLD", "pdf_src": "Grobid", "pdf_hash": "403cd9aa40c97914704168506271b4b2fdf0db11", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
119313319
pes2o/s2orc
v3-fos-license
Newton-like dynamics associated to nonconvex optimization problems We consider the dynamical system \begin{equation*}\left\{ \begin{array}{ll} v(t)\in\partial\phi(x(t))\\ \lambda\dot x(t) + \dot v(t) + v(t) + \nabla \psi(x(t))=0, \end{array}\right.\end{equation*} where $\phi:\R^n\to\R\cup\{+\infty\}$ is a proper, convex and lower semicontinuous function, $\psi:\R^n\to\R$ is a (possibly nonconvex) smooth function and $\lambda>0$ is a parameter which controls the velocity. We show that the set of limit points of the trajectory $x$ is contained in the set of critical points of the objective function $\phi+\psi$, which is here seen as the set of the zeros of its limiting subdifferential. If the objective function satisfies the Kurdyka-\L{}ojasiewicz property, then we can prove convergence of the whole trajectory $x$ to a critical point. Furthermore, convergence rates for the orbits are obtained in terms of the \L{}ojasiewicz exponent of the objective function, provided the latter satisfies the \L{}ojasiewicz property. Introduction and preliminaries The dynamical system (1): v(t) ∈ T(x(t)), λ(t)ẋ(t) + v̇(t) + v(t) = 0, where λ : [0, +∞) → [0, +∞) and T : R n ⇒ R n is a (set-valued) maximally monotone operator, has been introduced and investigated in [10] as a continuous version of Newton and Levenberg-Marquardt-type algorithms. It has been shown that under mild conditions on λ the trajectory x(t) converges weakly to a zero of the operator T, while v(t) converges to zero as t → +∞. These investigations have been continued in [2] in the context of solving optimization problems of the form (2): inf u∈R n [φ(u) + ψ(u)], where φ : R n → R ∪ {+∞} is a proper, convex and lower semicontinuous function and ψ : R n → R is a convex and differentiable function with locally Lipschitz-continuous gradient. More precisely, problem (2) has been approached via the dynamical system (3): v(t) ∈ ∂φ(x(t)), λ(t)ẋ(t) + v̇(t) + v(t) + ∇ψ(x(t)) = 0, where ∂φ is the convex subdifferential of φ. It has been shown in [2] that if the set of minimizers of (2) is nonempty and some mild conditions on the damping function λ are satisfied, then the trajectory x(t) converges to a minimizer of (2) as t → +∞. Further investigations on dynamical systems of similar type have been reported in [1] and [21]. The aim of this paper is to perform an asymptotic analysis of the dynamical system (3) in the absence of the convexity of ψ, for constant damping function λ and by assuming that the objective function of (2) satisfies the Kurdyka-Lojasiewicz property, in other words is a KL function. To the class of KL functions belong semialgebraic, real subanalytic, uniformly convex and convex functions satisfying a growth condition. The convergence analysis relies on methods of real algebraic geometry introduced by Lojasiewicz [30] and Kurdyka [28] and developed recently in the nonsmooth setting by Attouch, Bolte and Svaiter [7] and Bolte, Sabach and Teboulle [16]. In the first part of the paper we show that the set of limit points of the trajectory x generated by (3) is entirely contained in the set of critical points of the objective function φ + ψ, which is seen as the set of zeros of its limiting subdifferential. Under some supplementary conditions, including the Kurdyka-Lojasiewicz property, we prove the convergence of the trajectory x to a critical point of φ + ψ. Furthermore, convergence rates for the orbits are obtained in terms of the Lojasiewicz exponent of the objective function, provided the latter satisfies the Lojasiewicz property. 
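When φ is additionally assumed twice continuously differentiable, the system above can be rewritten with a Hessian-driven damping term, a smooth setting the paper returns to in Remark 11. A brief sketch of that reduction, valid only under this extra smoothness assumption, is recorded below in LaTeX.

% Sketch of the smooth case: if \phi is twice continuously differentiable, the inclusion
% v(t) \in \partial\phi(x(t)) becomes v(t) = \nabla\phi(x(t)), and differentiating along the
% trajectory gives \dot v(t) = \nabla^2\phi(x(t))\,\dot x(t). Substituting into the second
% relation of the system yields
\begin{equation*}
  \lambda\,\dot x(t) + \nabla^2\phi(x(t))\,\dot x(t) + \nabla\phi(x(t)) + \nabla\psi(x(t)) = 0 ,
\end{equation*}
% a dynamical system whose damping term \nabla^2\phi(x(t))\,\dot x(t) is driven by the Hessian of \phi.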
In the following we recall some notions and results which are needed throughout the paper. We consider on R n the Euclidean scalar product and the corresponding norm denoted by ·, · and · , respectively. The domain of the function f : R n → R ∪ {+∞} is defined by dom f = {x ∈ R n : f (x) < +∞} and we say that f is proper, if it has a nonempty domain. For the following generalized subdifferential notions and their basic properties we refer to [17,31,33]. Let f : R n → R ∪ {+∞} be a proper and lower semicontinuous function. The Fréchet (viscosity) subdifferential of f at x ∈ dom f is the set∂ for each x ∈ R n . When f is convex, these subdifferential notions coincide with the convex subdifferential, thuŝ The following closedness criterion of the graph of the limiting subdifferential will be used in the convergence analysis: The Fermat rule reads in this nonsmooth setting as follows: if x ∈ R n is a local minimizer of f , then 0 ∈ ∂ L f (x). We denote by crit(f ) = {x ∈ R n : 0 ∈ ∂ L f (x)} the set of (limiting)-critical points of f . When f is continuously differentiable around x ∈ R n we have ∂ L f (x) = {∇f (x)}. We will also make use of the following subdifferential sum rule: if f : R n → R ∪ {+∞} is proper and lower semicontinuous and h : R n → R is a continuously differentiable function, then Further, we recall the notion of a locally absolutely continuous function and state two of its basic properties. Remark 1 (a) An absolutely continuous function is differentiable almost everywhere, its derivative coincides with its distributional derivative almost everywhere and one can recover the function from its derivativeẋ = y by integration. The following two results, which can be interpreted as continuous versions of the quasi-Fejér monotonicity for sequences, will play an important role in the asymptotic analysis of the trajectories of the dynamical system (3). For their proofs we refer the reader to [2, Lemma 5.1] and [2, Lemma 5.2], respectively. Lemma 2 Suppose that F : [0, +∞) → R is locally absolutely continuous and bounded from below and that there exists G ∈ L 1 ([0, +∞)) such that for almost every t ∈ [0, +∞) Then there exists lim t→∞ F (t) ∈ R. Lemma 4 Let f : R n → R ∪ {+∞} be a proper, convex and lower semicontinuous function. Let x ∈ L 2 ([0, T ], R n ) be absolutely continuous such thatẋ ∈ L 2 ([0, T ], R n ) and x(t) ∈ dom f for almost every t ∈ [0, T ]. Assume that there exists ξ ∈ L 2 ([0, T ], R n ) such that ξ(t) ∈ ∂f (x(t)) for almost every t ∈ [0, T ]. Then the function t → f (x(t)) is absolutely continuous and for almost every t such that x(t) ∈ dom ∂f we have f (x(t)) = ẋ(t), h ∀h ∈ ∂f (x(t)). Asymptotic analysis In this paper we investigate the dynamical system where x 0 , v 0 ∈ R n and λ > 0. We assume that φ : R n → R ∪ {+∞} is proper, convex and lower semicontinuous and ψ : R n → R is possibly nonconvex and Fréchet differentiable with L-Lipschitz continuous gradient, for L > 0; in other words, ∇ψ(x) − ∇ψ(y) ≤ L x − y for all x, y ∈ R n . In the following we specify what we understand under a solution of the dynamical system (4). is a strong global solution of (4) if the following properties are satisfied: The existence and uniqueness of the trajectories generated by (4) has been investigated in [2]. A careful look at the proofs in [2] reveals the fact that the convexity of ψ is not used in the mentioned results on the existence, but the Lipschitz-continuity of its gradient. We start our convergence analysis with the following technical result. 
be the unique strong global solution of the dynamical system (4). Then the following statements are true: Proof. (i) See [10, Proposition 3.1]. The proof relies on the first relation in (4) and the monotonicity of the convex subdifferential. (ii) The proof makes use of Lemma 4. This relation has been already stated in [2, relation (51)] without making use in its proof of the convexity of ψ. be the unique strong global solution of the dynamical system (4). Suppose that φ + ψ is bounded from below. Then the following statements are true: Proof. (i) The statement follows by inner multiplying the both sides of the second relation in (4) byẋ(t) and by taking afterwards into consideration Lemma 5(ii). (iii) From (i) and Lemma 5(i) it follows that for almost every t ≥ 0. The conclusion follows by applying Lemma 2. Proof. From the first relation in (4) and the subdifferential sum rule of the limiting subdifferential we derive for any Further, we have and (see Lemma According to the closedness property of the limiting subdifferential, the proof is complete as soon as we show that From (9), (10) and the continuity of ∇ψ we get Further, since v(t k ) ∈ ∂φ(x(t k )), we have Combining this with (9) and (12) we derive A direct consequence of the lower semicontinuity of φ is the relation which combined with (9) and the continuity of ψ yields (11). We define the limit set of x as ω(x) := {x ∈ R n : ∃t k → +∞ such that x(t k ) → x as k → +∞}. We use also the distance function to a set, defined for A ⊆ R n as dist(x, A) = inf y∈A x − y for all x ∈ R n . be the unique strong global solution of the dynamical system (4). Suppose that φ + ψ is bounded from below and x is bounded. Then the following statements are true: (ii) ω(x) is nonempty, compact and connected; (iii) lim t→+∞ dist x(t), ω(x) = 0; (iv) φ + ψ is finite and constant on ω(x). Proof. Statement (i) is a direct consequence of Lemma 7. Statement (ii) is a classical result from [25]. We also refer the reader to the proof of Theorem 4.1 in [3], where it is shown that the properties of ω(x) of being nonempty, compact and connected are generic for bounded trajectories fulfilling lim t→+∞ẋ (t) = 0. Statement (iii) follows immediately since ω(x) is nonempty. Remark 9 Suppose that φ + ψ is coercive, in other words, Let x 0 , v 0 ∈ R n and λ > 0 be such that v 0 ∈ ∂φ(x 0 ). Let (x, v) : [0, +∞) → R n × R n be the unique strong global solution of the dynamical system (4). Then φ + ψ is bounded from below and x is bounded. Indeed, since φ + ψ is a proper, lower semicontinuous and coercive function, it follows that inf u∈R n [φ(u) + ψ(u)] is finite and the infimum is attained. Hence φ + ψ is bounded from below. On the other hand, from (7) it follows Since φ+ ψ is coercive, the lower level sets of φ+ ψ are bounded, hence the above inequality yields that x is bounded. Notice that in this case v is bounded too, due to the relation lim t→+∞ v(t) + ∇ψ(x(t)) = 0 (Lemma 6(ii)) and the Lipschitz continuity of ∇ψ. Convergence of the trajectory when the objective function satisfies the Kurdyka-Lojasiewicz property In order to enforce the convergence of the whole trajectory x(t) to a critical point of the objective function as t → +∞ more involved analytic features of the functions have to be considered. A crucial role in the asymptotic analysis of the dynamical system (4) is played by the class of functions satisfying the Kurdyka-Lojasiewicz property. 
For η ∈ (0, +∞], we denote by Θ η the class of concave and continuous functions ϕ : [0, η) → [0, +∞) such that ϕ(0) = 0, ϕ is continuously differentiable on (0, η), continuous at 0 and ϕ ′ (s) > 0 for all s ∈ (0, η). Definition 3 (Kurdyka-Lojasiewicz property) Let f : R n → R ∪ {+∞} be a proper and lower semicontinuous function. We say that f satisfies the Kurdyka-Lojasiewicz (KL) property at x ∈ dom ∂ L f = {x ∈ R n : ∂ L f (x) = ∅}, if there exist η ∈ (0, +∞], a neighborhood U of x and a function ϕ ∈ Θ η such that for all x in the intersection If f satisfies the KL property at each point in dom ∂ L f , then f is called KL function. The origins of this notion go back to the pioneering work of Lojasiewicz [30], where it is proved that for a real-analytic function f : R n → R and a critical point x ∈ R n (that is ∇f (x) = 0), there exists θ ∈ [1/2, 1) such that the function |f − f (x)| θ ∇f −1 is bounded around x. This corresponds to the situation when ϕ(s) = Cs 1−θ for C > 0. The result of Lojasiewicz allows the interpretation of the KL property as a re-parametrization of the function values in order to avoid flatness around the critical points. Kurdyka [28] extended this property to differentiable functions definable in o-minimal structures. Further extensions to the nonsmooth setting can be found in [6,[12][13][14]. One of the remarkable properties of the KL functions is their ubiquity in applications (see [16]). We refer the reader to [5-7, 12-14, 16] and the references therein for more properties of the KL functions and illustrating examples. In the analysis below the following uniform KL property given in [16,Lemma 6] will be used. Lemma 10 Let Ω ⊆ R n be a compact set and let f : R n → R ∪ {+∞} be a proper and lower semicontinuous function. Assume that f is constant on Ω and that it satisfies the KL property at each point of Ω. Then there exist ε, η > 0 and ϕ ∈ Θ η such that for all x ∈ Ω and all x in the intersection holds. Remark 11 We notice that we do no require second order assumptions for φ. However, we want to notice that if φ is a twice continuously differentiable function, then the dynamical system (15) can be equivalently written as where x 0 , v 0 ∈ R n and λ > 0. This is a differential equation with a Hessian-driven damping term. We refer the reader to [3] and [9] for more insights into dynamical systems with Hessiandriven damping terms and for motivations for considering them. Moreover, as in [9], the driving forces have been split as ∇φ + ∇ψ, where ∇ψ stands for classical smooth driving forces and ∇φ incorporates the contact forces. In this context, an improved version of Lemma 5(i) can be stated. Lemma 12 Let x 0 , v 0 ∈ R n and λ > 0 be such that v 0 = ∇φ(x 0 ). Let (x, v) : [0, +∞) → R n × R n be the unique strong global solution of the dynamical system (15). Then: Proof. Take an arbitrary δ > 0. For t ≥ 0 we have where the inequality follows from the Baillon-Haddad Theorem [11,Corollary 18.16]. The conclusion follows by dividing (18) by δ 2 and by taking the limit as δ converges to zero from above. We are now in the position to prove the convergence of the trajectories generated by (15). Remark 14 Taking a closer look at the above proof, one can notice that the inequality (23) can be obtained also when φ : R n → R ∪ {+∞} is a (possibly nonsmooth) proper, convex and lower semicontinuous function. Though, in order to conclude thatẋ ∈ L 1 ([0, +∞); R n ) the inequality obtained in Lemma 5(i) is not enough. 
The improved version stated in Lemma 12 is crucial in the convergence analysis. If one attempts to obtain in the nonsmooth setting the inequality stated in Lemma 12, from the proof of Lemma 12 it becomes clear that one would need the inequality for all (x 1 , x 2 ) ∈ R n × R n and all (ξ * 1 , ξ * 2 ) ∈ R n × R n such that ξ * 1 ∈ ∂φ(x 1 ) and ξ * 2 ∈ ∂φ(x 2 ). This is nothing else than (see for example [11]) for all (x 1 , x 2 ) ∈ R n × R n and all (ξ * 1 , ξ * 2 ) ∈ R n × R n such that x 1 ∈ ∂φ * (ξ * 1 ) and x 2 ∈ ∂φ * (ξ * 2 ). Here φ * : R n → R denotes the Fenchel conjugate of φ, defined for all x * ∈ R n by φ * (x * ) = sup x∈R n { x * , x − φ(x)}. The latter inequality is equivalent to ∂φ * is ρ-strongly monotone, which is further equivalent (see [35,Theorem 3.5.10] or [11]) to φ * is is strongly convex. This is the same with asking that φ is differentiable on the whole R n with Lipschitz-continuous gradient (see [11,Theorem 18.15]). In conclusion, the smooth setting provides the necessary prerequisites for obtaining the result in Lemma 12 and, finally, Theorem 13. Convergence rates In this subsection we investigate the convergence rates of the trajectories (x(t), v(t)) generated by the dynamical system (15) as t → +∞. When solving optimization problems involving KL functions, convergence rates have been proved to depend on the so-called Lojasiewicz exponent (see [5,12,24,30]). The main result of this subsection refers to the KL functions which satisfy Definition 3 for ϕ(s) = Cs 1−θ , where C > 0 and θ ∈ (0, 1). We recall the following definition considered in [5]. Definition 4 Let f : R n → R ∪ {+∞} be a proper and lower semicontinuous function. The function f is said to have the Lojasiewicz property, if for every x ∈ crit f there exist C, ε > 0 and θ ∈ (0, 1) such that According to [6, Lemma 2.1 and Remark 3.2(b)], the KL property is automatically satisfied at any noncritical point, fact which motivates the restriction to critical points in the above definition. The real number θ in the above definition is called Lojasiewicz exponent of the function f at the critical point x. The convergence rates obtained in the following theorem are in the spirit of [12] and [5].
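For orientation, the classical inequality behind Definition 4, together with a standard one-dimensional example, can be sketched as follows; this is the usual textbook formulation of the Lojasiewicz inequality, not a verbatim reproduction of the paper's display.

% Classical (smooth) form of the Lojasiewicz inequality at a critical point \bar x:
% there exist C, \varepsilon > 0 and \theta \in (0,1) such that
\begin{equation*}
  |f(x) - f(\bar x)|^{\theta} \le C\,\|\nabla f(x)\|
  \qquad \text{whenever } \|x - \bar x\| \le \varepsilon ,
\end{equation*}
% with \|\nabla f(x)\| replaced by \inf\{\|x^*\| : x^* \in \partial_L f(x)\} in the nonsmooth case.
% Example: f(x) = |x|^p on \R with p \ge 2 and \bar x = 0. Then |f(x)|^{\theta} = |x|^{p\theta}
% and |f'(x)| = p\,|x|^{p-1}, so the inequality holds near 0 exactly when p\theta \ge p - 1,
% i.e. the Lojasiewicz exponent may be taken as \theta = 1 - 1/p \in [1/2, 1).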
2017-03-03T21:03:09.000Z
2017-03-03T00:00:00.000
{ "year": 2019, "sha1": "9e71dc1ee896b385c44f07f1d39c5e2d2caa7a68", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1703.01339", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9e71dc1ee896b385c44f07f1d39c5e2d2caa7a68", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
207257068
pes2o/s2orc
v3-fos-license
Reducing redundant data transmissions in wireless ad hoc networks: comparing aggregation and filtering Efficient bandwidth usage is vital for real-time ad hoc networking applications like vehicular safety. Yet, such applications can produce large amounts of identical data. Pruning redundant data transmissions can enable delivering richer data to more users at shorter intervals. Reducing redundancy has been studied extensively for stable network topologies but the solutions are not directly extensible to dynamic topologies where information about network state obsolesces quickly. We compare two novel combinations of the adaptive controlled flooding routing protocol, SBSD, with implementations of response aggregation and query filtering for mobile environments. We test these combinations in simulated vehicular networks. We show that, even in cases where response aggregation only slightly improves network performance, query filtering can improve delivery by up to 30 % and response time by 75 %. Introduction Ad hoc networks have long been envisioned for areas lacking communication infrastructure.However, recent growth in mobile computing has driven wider infrastructure availability.In areas with 3G or 4G service, many proposed peer-to-peer applications for video streaming and travel information have been rendered obsolete.Yet, ad hoc networking can still play the primary role in certain applications, such as real-time video streaming for vehicular safety [34] and, more generally, situational awareness (SA) applications for mobile environments. In essence, SA maintains a complete view of a set of environmental variables, including expected future changes.Accurate SA enables preventing problems rather than addressing them after they occur but can be demanding in terms of network transmission capacity.The faster conditions change on the ground, the faster information must be provided to network users.Likewise, more complex situations require monitoring larger sets of variables. Many SA domains are potentially both complex and highly dynamic.An early, important paper on vehicular ad hoc networks (VANETs) considered sending queries about local travel conditions to roadside infrastructure [11].Another example is assisting military operations; at the tactical level, information about friendly and hostile forces must be updated continuously.Other SA domains include: disaster recovery [19], where resources must be allocated sequentially to save lives and minimize property damage; dynamic pricing of toll roads and parking spaces; and even advertising strategies for yield management [20]; and vehicular safety [7]. This paper considers the unique problems of vehicular safety applications, which must continuously gather and disseminate detailed information about all vehicles in an area.Consider collision warnings for vehicle streams approaching a curve from opposite directions.Naı ¨ve, straight-line extrapolation of vehicle trajectories will generate many spurious collision alerts.Instead, safety alerts might watch for vehicles drifting out of their lanes, indicating driver inattention, intoxication, or incapacity.To reduce the impact of observation errors, alerts would include perspectives of multiple observers.We contend that, in dense environments, vehicular safety updates will require shortened transmission ranges and multi-hop communication and also generate data that is highly prone to redundancy. 
Short transmission range Shorter range lets nodes transmit more often and, given sufficient node density, improves network throughput.The two basic strategies for dealing with high node density are shortening transmission range [36] or slowing the transmission rate [12], [32].Vehicular safety requires a minimum acceptable notification rate; the standard transmission ranges provided by 802.11p-which can at times encompass a thousand or more vehicles [1]may be too long.Indeed, a majority of beacons will be lost at a fraction of that density [4].Broadcast storm can also arise from transmissions by roadside infrastructure, as considered in [35]. Multi-hop communication With short transmission range, it follows that SA updates must be forwarded over multiple hops to inform distant network users.For example, suppose an erratic driver is detected.If warnings are ultimately forwarded to vehicles a few kilometers away, they will be better prepared to avoid collisions. Redundancy Many readings from different observers may be identical.Such readings can be combined without loss and pruning their redundancies mitigates broadcast storm.As a network policy, reduced redundancy could allow more frequent updates, a larger set of observed variables, or even reduced power consumption. Many methods have been proposed for reducing redundant data transmission in low-mobility networks.In various ways, they avoid sending the same data over the same route more than once.High-mobility networks-such as vehicular ad hoc networks-often have ephemeral routing paths, making it impractical to obtain the knowledge required by such methods.Further, extant methods for high-mobility networks minimize bandwidth consumption to fulfill a given set of queries or notifications.While such methods indeed free up bandwidth for additional network traffic, they lack mechanisms to adapt the SA updates rate to network activity. We propose a different approach: increasing data throughput to allow SA updates over longer distances and at shorter intervals.In [25], the Self-balancing supply/ demand (SBSD) controlled flooding protocol showed effectiveness in dynamic topologies.Here, we extend SBSD with response aggregation and query filtering (defined in the next section), both separately and together.We test these extensions in a simulated vehicular network. The remainder of this paper is ordered as follows.In Sect.2, we define four basic methods for reducing redundancy.In Sect.3, we cover relevant research.In Sect.4, we briefly describe SBSD and detail our two redundancy reduction models.Finally, we present our simulation results in Sect. 5. Methods for reducing redundant transmissions We illustrate four basic methods for reducing redundant transmissions of data using a scenario of four cars (A, B, C, and D) driving in series (Fig. 1).To better illustrate these methods, we assume a sparse vehicle distribution such that each car is only in communication range of the ones immediately in front of or behind it; e.g., C can communicate with B and D but not A. We assume other vehicles exist in both directions; D can communicate with them through a car E and A can through a car Z.A query for mobility data from a set of cars X is expressed as q(X) and the response containing mobility data from a set of cars X is expressed as r(X).For example, in regard to a set X = {A, B, C}, the query would be q(A, B, C) and the response would be r(A, B, C). Data aggregation constructs aggregated packets from two or more packets with duplicate data. 
Definition Given a node n holding two responses r1 and r2, with (r1 ∩ r2) ≠ ∅, data aggregation will have n construct an aggregated packet containing (r1 ∪ r2). Example Car A receives two queries q(B, C) and q(C, D) via Z. After receiving r(B, C) and r(C, D), instead of forwarding them separately to Z, A sends r(B, C, D). Demand aggregation delivers packets to multiple recipients along a common routing path, rather than constructing each path separately. Definition Given a response r to be routed to two different destinations over two sets of nodes P1 and P2. If (P1 ∩ P2) ≠ ∅, any nodes in (P1 ∩ P2) need only transmit r once for it to be delivered to both destinations. Example Cars A and C have both sent q(D) to D. If D knew A was reachable via C and B, it could send r(D) to A via C and B, knowing that both would receive it. Query filtering reduces the scope of queries to exclude information that is already held or for which a query has already been forwarded. This can occur at the query source or at nodes between the source and destination, as in [17]. Definition Given a node n that forwarded a query q and later receives a query q′ with (q ∩ q′) ≠ ∅, query filtering will rewrite q′ to request only (q′ − (q ∩ q′)). Example Car C holds r(C) but not r(D) and receives q(C, D) from A. C only forwards q(D) to D, as it can already fulfill r(C) itself. Subsumption is a query (or response) being wholly contained within a set of queries (or responses). Exact identification of subsumption relationships can be impractical since solutions are NP-complete [29]. An alternative is probabilistic subsumption-checking, such as [24], which samples points from the query space (and which we describe further in Sect. 4.3). Definition Given a set of n queries Q = {q1, q2, …, qn}, n ≥ 1, a query q is said to be subsumed by Q if q ⊆ (q1 ∪ q2 ∪ … ∪ qn). Example Car A has just sent its own query q(C, D) to B. A then receives q(C) via Z. A does not forward q(C), as A will receive r(C) subsumed in r(C, D). A compact code sketch illustrating these set operations appears below, following the related-research overview. Related research Stable network topology and accurate knowledge of query and data distributions reduce the need for redundant routing and caching. Stable topology allows reliable single-path routing. Known query distributions let data be stored near likely requesters. Known data locations eliminate the need for search within the network. The topological stability of wired networks enables efficient delivery methods and architectures. Google, for example, tracks query distributions to cache locally popular Web pages at its servers. This speeds response times and reduces forwarding queries to other clusters. Google also continually refreshes caches, as query popularity can decline quickly. Developing similar predictive and adaptive methods for wireless ad hoc networks must consider node mobility. Wireless ad hoc networks can be categorized by absolute mobility, node speeds, and relative mobility, the speeds at which nodes move relative to each other. Networks with low absolute and relative mobility have time to gather network knowledge; those with high absolute and relative mobility do not. We classify typical examples of mobility combinations in Table 1, below, before elaborating on their redundancy reduction methods in the next two sections. 
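The following minimal sketch is the promised illustration of the Sect. 2 operations, applied to the car scenario of Fig. 1 with plain set operations; demand aggregation concerns shared routing paths and is omitted. The types and identifiers are hypothetical stand-ins, not part of any implementation evaluated later.

import java.util.*;

// Minimal illustration of three of the Sect. 2 operations (data aggregation, query filtering,
// subsumption), with queries and responses modeled as sets of car identifiers (hypothetical types).
public class RedundancyOps {
    public static void main(String[] args) {
        // Data aggregation: r(B, C) and r(C, D) overlap (share C), so one packet can carry r(B, C, D).
        Set<String> rBC = new HashSet<>(Arrays.asList("B", "C"));
        Set<String> rCD = new HashSet<>(Arrays.asList("C", "D"));
        Set<String> shared = new HashSet<>(rBC);
        shared.retainAll(rCD);                       // r1 ∩ r2 = {C}
        Set<String> aggregated = new HashSet<>(rBC);
        aggregated.addAll(rCD);                      // r1 ∪ r2 = {B, C, D}
        System.out.println("aggregate? " + !shared.isEmpty() + " -> " + aggregated);

        // Query filtering: q(B, C) was already forwarded, so an arriving q(C, D)
        // is rewritten to request only q' - (q ∩ q') = {D}.
        Set<String> qForwarded = new HashSet<>(Arrays.asList("B", "C"));
        Set<String> qArriving = new HashSet<>(Arrays.asList("C", "D"));
        Set<String> rewritten = new HashSet<>(qArriving);
        rewritten.removeAll(qForwarded);
        System.out.println("rewritten query: " + rewritten);

        // Subsumption: q(C) is subsumed by Q = {q(B, C), q(C, D)} because {C} ⊆ {B, C, D}.
        Set<String> unionOfQ = new HashSet<>(qForwarded);
        unionOfQ.addAll(qArriving);
        boolean subsumed = unionOfQ.containsAll(Collections.singleton("C"));
        System.out.println("q(C) subsumed: " + subsumed);
    }
}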
Redundancy reduction in low relative mobility networks Many readers may be familiar with data aggregation in low-mobility sensor networks, such as [27].We discuss some representative methods to illustrate which can (and cannot) be applied to constrained mobility VANET scenarios.Typically, low-mobility sensor networks have stable routing paths and nodes have fixed roles as either data sources or sinks.Methods from wired computing [6] can substantially reduce bandwidth and power usage.The recent PhotoNet [30] removes redundant frames from streams of visual data to deliver diverse content.The model includes additional features to reduce redundant transmissions-for example, upon encountering each other, nodes exchange lists of the photos they have recently seen.However, the paper only gives results for low-density, intermittently connected networks.It is unclear if the model as presented would perform well in dense, dynamic networks using broadcasting. For large-scale, interconnected sensor networks, query filtering and subsumption can reduce network traffic by an order of magnitude [15,16,18].However, filtering policies must be carefully chosen for specific networks and applications.Filtering at query sources reduces network traffic the most but sends more disjoint queries, with fewer opportunities to aggregate data or demand.Filtering at intermediate nodes can help cache popular information near future queries and lessens the risk that any recent updates will be missed. Redundancy reduction in high relative mobility networks In networks exhibiting high relative mobility, the topology, query distribution, and locations of data may be largely unknowable.It is difficult to optimize what data to cache, where to cache it, and when to discard it.This increases the costs of search and dissemination within the network.For routing, networks must also rely more on flooding instead of learning and re-using routing paths.Researchers have nevertheless devised predictive caching models for such dynamic environments.The seminal paper [14] proposed three methods (based on access frequency and network topology) for predictive data caching against anticipated demand.A general-purpose method for proactively pushing information to meet its anticipated demand is presented in [13].Anticipated demand is based on historical trends, so their approach depends on stable query distributions and, to some extent, on a number of queries being very popular.The SBSD routing protocol caches data inherently by responding to queries, as matching responses propagate within each query's flooding area. Demand aggregation is also useful in VANET applications; an early but well-known example using tree-based routing given in [5].Similarly, if relative mobility is low, vehicle streams can comprise a virtual backbone [3] allowing data aggregation methods used in static sensor networks.But while sensor networks typically deliver information from many sensors to few sinks, VANETs often deliver identical information to many vehicles (e.g., upcoming travel conditions to a set of approaching vehicles).Further, in sensor networks, the content, timing, and volume of messages can be regulated by network policy.Demand aggregation in VANETs is complicated by privacy concerns and intermittent partitioning, particularly in applications designed for use by the general public. 
Some researchers have addressed data redundancy in VANETs by combining similar information more compactly, e.g., by averaging or sampling.A framework for sharing aggregate historical information among vehicles is given in [9] but does not include performance results.Probabilistic aggregation of binary state information for sets of identical items was examined in [21] but is not extensible to non-identical items.Even non-redundant data may be transmitted more efficiently by grouping related items, such as all vehicular safety updates generated during a short interval; this approach was observed in [33] to beneficially impact network throughput. Discarding highly similar data has also been applied to VANETs.In [10], a fuzzy logic model evaluates the similarity of data and avoids transmitting similar content.However, it is intended for applications not requiring exactness.Another approach, from [23] proposes a multilevel model that separates received data into more fundamental units to facilitate eliminating redundant transmissions.While their model enables more efficient bandwidth usage, it also incurs substantial overhead in processing and transmissions; its suitability for dense networks with high relative mobility is unclear. On highways, vehicles' relative positions tend to be stable in regard to same-direction travel, letting VANETs apply data aggregation and query filtering models from sensor networks.For example, in [37], nodes delay forwarding packets in order to limit redundant transmissions.The expectation is that waiting will allow nodes to receive identical data that can be combined.Of course, vehicular safety information is generally much more time-sensitive than broader traffic conditions.A two-tiered information delivery platform (for short-and long-range SA) is proposed in [15].This model has each node broadcast its information about nearby vehicles using a combination of data compression and aggregation. The response aggregation and query filtering models we combine with SBSD differ from the preceding models as follows (and which will be detailed in Sect.4).First, they require no coordination among nodes; this makes them suitable for dynamic environments where coordination is problematic.Second, they do not inherently require any delays to accumulate redundant data; this makes them suitable in time-sensitive applications like vehicular safety.Finally, in the response aggregation model, each node's decisions are determined according to its local collision rate, letting it adapt to changing network conditions. Model implementation In this section, we present our two redundancy reduction models.In Sect.4.1, we describe SBSD, which is necessary to our presentations, in Sects.4.2 and 4.3, of our response aggregation model and our probabilistic subsumption-based query filtering model.Finally, Sect.4.4 gives an estimation model for their consequent performance gains, in terms of flooding depth and query fulfillment.The basic life cycle of a query in SBSD, from posting to expiration, is as follows.For a node n posting a query q, q is flooded via broadcast around n (we describe SBSD's adaptive flooding depth in the following paragraphs).When a replica of q is received by a node n 0 that holds a response r matching q, n 0 creates a matched query q 0 by appending r to q.This matched query is then flooded back to n. 
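As a rough illustration of this life cycle, the sketch below shows the handling step at a node that receives a query replica: if the node holds a matching response, it appends the response to create the matched query and floods the result; otherwise the replica continues to flood unchanged. The class and method names are hypothetical placeholders, not the SBSD implementation.

import java.util.*;

// Illustrative sketch of the SBSD query life cycle (Sect. 4.1): a received query replica
// is matched against locally held responses; matched queries are flooded back toward the poster.
class QueryReplica {
    String queryId;             // identity of the original query
    String requestedArea;       // what the query asks for (simplified to a single key)
    String attachedResponse;    // response data appended when the query is matched (null if none)

    QueryReplica(String id, String area) { queryId = id; requestedArea = area; }
}

public class QueryLifecycle {
    // Locally held responses, keyed by the area they cover (hypothetical cache structure).
    static Map<String, String> responseCache = new HashMap<>();

    // Handling a received replica: append a matching response if one is held, then re-broadcast.
    static QueryReplica onReceive(QueryReplica q) {
        String r = responseCache.get(q.requestedArea);
        if (r != null && q.attachedResponse == null) {
            q.attachedResponse = r;          // create the matched query by appending the response
        }
        broadcast(q);                        // matched or not, the replica continues to flood
        return q;
    }

    static void broadcast(QueryReplica q) {
        System.out.println("broadcast " + q.queryId
                + (q.attachedResponse == null ? " (unmatched)" : " (matched: " + q.attachedResponse + ")"));
    }

    public static void main(String[] args) {
        responseCache.put("areaC", "mobility data for area C");
        onReceive(new QueryReplica("q42", "areaC"));   // matched and flooded back
        onReceive(new QueryReplica("q43", "areaD"));   // no match held; forwarded as-is
    }
}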
Flooding depth in SBSD is variable and adapts to the volume of network traffic. SBSD uses the congestion metric utility to rank packets and thereby regulate flooding depth. It is applied in the same fashion to queries and responses (which are stored as data fields appended to query packets). A query replica's utility u at a node n is given by Eq. 1 below, where a is its age (the elapsed time since the original query was posted), h is the number of hops it replicated to reach n, and f is its frequency (the number of nodes that have posted the same query, as known by n). Utility represents the negative ratio of the congestion a query has inflicted to the congestion it is allowed to inflict. Inflicted congestion is estimated from the time and distance a query has flooded from its source, increasing with both. Allowed congestion is not calculated directly but, as a policy, scales linearly with frequency; more popular queries flood farther and receive more broadcasts at each node. This scaling is a consequence of the above utility function. As more queries flood an area, each one's allowed congestion is smaller. This system behavior arises from nodes only broadcasting their high-utility replicas, as follows. Each node independently determines u_min, the minimum utility at which a replica may be forwarded. This u_min is calculated after each broadcast by forming the set of highest-utility packets a node expects to be able to transmit at a binary exponential rate for their remaining times-to-live. As long as a replica's utility remains above u_min, it may be repeatedly broadcast, giving robustness against collisions and temporary partitions. Since SBSD uses flooding, nodes near each other tend to experience similar network traffic and thus have similar u_min values. This policy induces a predictable flooding depth that adapts to changing network traffic levels. The standard model of SBSD uses a context-aware cross-layer (CACL) medium access control, compared against the 802.11 MAC in [22]. From CACL, SBSD receives information about node density to determine u_min and schedule repeated broadcasts of packets, which in turn drives the packets CACL receives from SBSD. CACL remedies limitations of the 802.11 MAC for broadcasting in dense environments, like hidden terminal collisions and nodes claiming the channel for excessively long periods. The basic features of SBSD using CACL are given below. Density estimation Each node learns its one-hop neighborhood N1 from received packets. All sources of received transmissions in the previous 500 ms are included in N1. For vehicular applications, the two-hop neighborhood node population N2 is estimated as 2N1. Density estimates are used to schedule broadcasts by imposing post-broadcast delays and delays between repeat broadcasts of a packet. Channel sensing To avoid single-hop collisions, nodes do not broadcast while the channel is in use. Instead, a node scheduled to broadcast but finding the channel busy checks at random intervals (averaging 0.1 ms) for the channel to be free. Post-broadcast delay After a node n broadcasts, it incurs a variable delay d before it can broadcast again. Given N2 and packet transmission time t, d is randomly taken from a uniform distribution whose width is set by N2 and t. Collision rate The above post-broadcast delay gives an expected collision rate of 1/e ≈ 0.367. This is a well-known result for a group of k nodes each broadcasting once during a set of k frames, and its proof is therefore omitted. 
Repeat broadcasts Repeat broadcasts occur at a binary exponential rate with base 2N2t. To correct for changing density, a packet's base is updated after it is broadcast. Packet selection For its next scheduled broadcast, a node n selects a packet from the set having utility > u_min. The packet with the fewest prior broadcasts by n is chosen. If multiple packets thus qualify, the highest-utility one is chosen. Response aggregation model Our aggregation model permits pairwise aggregation of packets containing common data without loss. For any two sets of response data A and B, we define overlap as occurring when (A ∩ B) ≠ ∅. Measuring A, B, and (A ∪ B) in terms of data size, we define the magnitude x of any overlap as x = (A + B − (A ∪ B))/(A ∪ B), i.e., the size of the shared data relative to the size of the combined packet. If x = 0, A and B have no data in common. If x = 1, A and B contain identical response data; their intersection equals their union. When a node's upcoming broadcast contains response data, it may combine the primary packet (already selected for broadcast) with a secondary packet (which contains some of the same data as the primary packet). Ensuring aggregation is beneficial requires estimating the net benefit of aggregating any two responses. We use a benefit ratio b to compare the additional response data transmitted per unit time against any expected increase in the collision rate. When a node determines its best benefit ratio b_max > 1, aggregation is expected to be beneficial. Otherwise, the primary packet is broadcast without aggregation. From its packets having utility > u_min, n selects the secondary packet yielding the highest b as follows. Δc (the change in collisions) We model the cost of collisions as a linear function of the size of aggregated packets. This captures the effect of data loss being proportional to packet size. As CACL is designed to incur a 1/e collision rate, we assume future collision rates will be at least 1/e. For a primary packet A and secondary packet B, we define Δc = (1/e) · (A ∪ B)/A. Note that Δc increases when the overlap x increases and when A is smaller than B (even if A is subsumed within B). The latter avoids transmitting large aggregated packets at the short intervals at which CACL will transmit small primary packets. Our aggregation model anticipates that nodes may experience collision rates that differ from the norm or change over time. Variable transmission ranges (used by this paper's simulations for improved realism) reduce CACL's expected 1/e collision rate. Since N2 as estimated by a node n includes nodes beyond the average transmission range, on average it overestimates the number of nodes expected to cause collisions with a given transmission by n. To address uncertainty in collision rates and their measurement, each node tracks its local collision rate using an exponential smoothing model. Our goal is simply to estimate the future collision rate based on recent evidence, in order to adapt to changes in nearby transmission activity. After every collision or successful receipt, each node updates its estimated receipt rate s from its previous estimate s0 using the formula s = (1 − a)·s0 + a·x, where x is 1 for a successful receipt and 0 for a lost packet. The estimated collision rate c is then (1 − s). We set a = 0.02, which means that half of c's value is determined by the node's previous 34 packets (i.e., (1 − a)^34 ≈ 0.5). In practice, nodes would adapt to a changed collision rate within a few seconds. Subsumption-checking model We now present our subsumption-checking model, using the scenario in Fig. 2, below. Scenario An SA application operates in area S. 
Subsumption-checking model

We now present our subsumption-checking model, using the scenario in Fig. 2, below.

Scenario: An SA application operates in area S. Four network users (Albert, Bob, Chuck, and Don) respectively post queries for areas A, B, C, and D within S. These queries are mapped to S as shown in Fig. 2.

Query example: Queries q(A) and q(B) seek current vehicle positions and speeds in rectangular areas A and B, respectively.

Response example: Responses r(A) and r(B) are the matching mobility data from the two areas.

Overlap example: The overlap of r(A) and r(B) is any data in the intersection A ∩ B.

Consider Don's query q(D), which is subsumed by q(A) ∪ q(B) but by neither q(A) nor q(B) alone. This gives four possibilities. First, if Don already has r(A) and r(B), q(D) can be fulfilled locally without sending it to other nodes. Second, if Don receives r(A) and r(B) while waiting for r(D), he can construct r(D) from them. Third, if a node E near Don has received q(D), r(A), and r(B), E can construct r(D) and forward it to Don. Finally, if E has already forwarded q(A) and q(B) before receiving q(D), E might not forward q(D) if it expects to receive both r(A) and r(B).

These possibilities can be generalized as follows. Assume every node n has a cache of queries Q_n and responses R_n, which we term n's queue. Consider a query q with a hypothetical matching response r. Node n could check for subsumption when:

• Posting q: Here, n checks whether r is subsumed by R_n. If so, q would not be transmitted to other nodes but immediately fulfilled locally.
• Receiving q: Again, n checks whether r is subsumed by R_n. If so, it could return r to the node that posted q. As well, n would check whether q is subsumed by Q_n. If so, n might elect not to forward q (as r will be contained in the set of responses to Q_n).

In dynamic topologies, factors like temporary partitions and packet drops and expirations make it difficult to know whether it is better to wait for the responses to Q_n or to forward q immediately even though it is subsumed by Q_n. Accordingly, in our model query filtering occurs only at the query's source and via subsumption. That is, either the node n posting query q has r subsumed by R_n (so q can be locally fulfilled) or it does not (so n transmits q in whole). This approach also assists predictive caching, as the replication of responses continuously refreshes the caches of all nodes.

Checking for subsumption only at query sources also reduces the computational burden. Checking only when n posts a new query q is much less onerous than checks after every receipt of a query or response. We further reduce processing time by adopting the probabilistic method from [24] given in Algorithm 1 (Fig. 3). Although fast, this method can falsely indicate subsumption relationships; therefore, it is best adopted when queries seek a representative sample of data in an area rather than require complete, exact matches.

In this paper, we do not explicitly consider categorical variables. Although the basic approach of checking a node's response array would still apply, the random point selection would likely not apply. For example, suppose a query q(X) seeks all vehicles of some type X; then, the query space would effectively have only one point. The question, then, is whether the set of matching data in the response array comprises a complete response to q(X). Although not insuperable, this is a different problem from the range-based queries we consider here and one of our future research directions.
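A short sketch of the source-side filtering policy described above: a node posting a query first checks its own response cache and transmits the query, in whole, only if the cache cannot subsume it. The subsumption test itself is delegated to a stand-in for Algorithm 1 (Fig. 3, below); the class and method names here are illustrative and not taken from the paper's implementation.

import java.util.ArrayList;
import java.util.List;

// Sketch of query filtering at the source only: check the local response cache (R_n)
// before transmitting a new query; received queries are always forwarded whole.
public class QuerySourceFilterSketch {

    interface Query { }
    interface Response { }

    final List<Query> postedQueries = new ArrayList<>();      // Q_n
    final List<Response> cachedResponses = new ArrayList<>(); // R_n

    // Stand-in for Algorithm 1: does the cached response data subsume query q?
    boolean subsumedByCache(Query q) {
        return ProbabilisticSubsumption.check(q, cachedResponses); // hypothetical helper
    }

    // Returns true only if the query actually had to be sent into the network.
    boolean post(Query q) {
        postedQueries.add(q);
        if (subsumedByCache(q)) {
            return false;  // fulfilled locally from previously replicated responses
        }
        transmit(q);       // otherwise flood the query in whole
        return true;
    }

    void transmit(Query q) {
        // hand the packet to SBSD/CACL for broadcast (omitted in this sketch)
    }

    // Placeholder so the sketch compiles; a real node would run Algorithm 1 here.
    static class ProbabilisticSubsumption {
        static boolean check(Query q, List<Response> cache) {
            return false;
        }
    }
}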
Theoretical performance gains

We now mathematically analyze how our two models change the depth of information search and dissemination for SBSD and CACL. The basic SBSD model allows flooding to proceed equally in all directions and, as shown in [25], causes flooding areas to be inversely related to the volume of same-size responses being forwarded. This fact allows estimating the performance gains under idealized conditions with the following simplifying assumptions.

Given a query q posted by a node n and seeking a matching response r:
• Exactly one copy of r exists in the network within a constant distance d of n.
• The response r will be delivered to n iff it exists within a constant distance D of n, where D < d.
• Within the circle of radius d centered on n, the same response r is equally likely to be at any location.

For aggregation, we further assume a constant overlap x (as defined in Sect. 4.2). Then, under these assumptions, D can be estimated from the network-wide probability of response delivery p, where p = D²/d² and thus D = d√p, for 0 ≤ D ≤ d. Higher overlap x allows responses to be obtained from more distant nodes. Disregarding query packets, pairwise aggregation allows the flooding area A for each response to increase by a factor of (1 + x) and D by √(1 + x). If query packets are assumed to compose a constant fraction k (0 < k < 1) of network traffic, then pairwise aggregation increases A by a factor of (1 + x)(1 - k).

Algorithm 1. Checks if the response to a query q is subsumed in an array r[] of n responses
Inputs: SampleSize s (the number of points to be checked), Query q, Response array r[]
Output: Boolean variable subsumed (true if subsumption is present, false otherwise)

Point[] p = new Point[s];
boolean subsumed = true;
for (i = 0; i < s; i++) {
    do {
        p[i] = getRandomPoint(q);      // select random point in query space
    } while (duplicate(p, p[i]));      // repeat if p[i] already in Point[] array
}
for (i = 0; i < s; i++) {
    boolean pointcheck = false;        // tracks each point-checking iteration
    for (j = 0; j < n; j++) {
        if (inResponse(r[j], p[i])) {  // is point p[i] in response r[j]?
            pointcheck = true;
            break;                     // if r[j] contains p[i], no need to check other responses
        }
    }
    if (!pointcheck) {                 // if point p[i] is not in r[], the subsumption check has failed
        subsumed = false;
        break;
    }
}
return subsumed;

Fig. 3 Pseudo-java code for Algorithm 1

The adoption of different wireless standards will, of course, impact the parameters d, D, k, and x. For example, using a higher data rate will directly affect d and D. Further, all else equal, having responses replicate farther will cache more varied response data at each node and increase x. Network topology and density also impact those parameters; e.g., k increases as it takes queries longer to find matching responses. Still, the relationships p = D²/d² and A(x) = A(1 + x)(1 - k) are independent of such factors.

In contrast, subsumption-checking prevents queries from entering the network altogether. In effect, the network must fulfill fewer queries, in turn permitting each one a larger flooding area. This will be reflected in higher delivery (from increased search depth for queries) or faster response time (due to reduced competing network traffic). As with aggregation, the changes in flooding area and depth can be estimated from the improvement in delivery.
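The estimate above reduces to a few lines of arithmetic. The sketch below applies p = D²/d² and the aggregation scaling A → A(1 + x)(1 - k); the numeric inputs (d = 500 m, p = 0.86, x = 0.3, k = 0 or 0.2) are illustrative and chosen only to mirror the worked example in Sect. 5.1.

// Back-of-the-envelope delivery gain from pairwise aggregation under the idealized
// assumptions above; all numeric inputs are illustrative.
public class DeliveryGainSketch {

    // Baseline: p = D^2 / d^2, so D = d * sqrt(p).
    static double deliveryRadius(double d, double p) {
        return d * Math.sqrt(p);
    }

    // Aggregation scales the flooding area, and hence p, by (1 + x)(1 - k), capped at 1.
    static double deliveryWithAggregation(double p, double overlap, double queryFraction) {
        return Math.min(1.0, p * (1.0 + overlap) * (1.0 - queryFraction));
    }

    public static void main(String[] args) {
        double d = 500.0;  // metres within which the single matching response exists
        double p = 0.86;   // observed baseline delivery
        System.out.printf("baseline D          = %.1f m%n", deliveryRadius(d, p));               // about 463.7 m
        System.out.printf("p with x=0.3, k=0   = %.3f%n", deliveryWithAggregation(p, 0.3, 0.0)); // 1.118 capped to 1.000
        System.out.printf("p with x=0.3, k=0.2 = %.3f%n", deliveryWithAggregation(p, 0.3, 0.2)); // about 0.894
    }
}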
Network topology may prevent achieving the full theoretical gains.Traffic signals, for example, can inhibit routing by creating large gaps in vehicle flows.On the hand, if the location of a query's destination is known, high node densities can be exploited to establish more direct routes.Limiting flooding areas for improving SBSD's throughput was explored in [26].That approach would, however, tend to reduce the gains from aggregation and filtering, as nodes would process less varied response data. Computation time is another concern.Ideally, any aggregation or filtering could be performed between broadcasts.Larger response packets and higher node density increase the allowable time, while queue length increases the required computation time.If such computations are too onerous, an alternative would be to assess overlap for a random sample of the queue.We note the average times available for such computations in regard to our two simulation scenarios in Sect. 5. Simulations and performance analysis Our simulations are conducted using the JiST/SWANS platform [2], which is a widely-used Java-based discrete event network simulator.Each of our simulation scenarios considers a set of 600 vehicular nodes moving along roads contained within a 1,000 m square over 40 s of simulated time.Vehicles have a maximum speed of 13.4 m/s (corresponding to 30 mph, a common urban speed limit in the United States).We used the STRAW mobility module [8], a micro-simulator that considers real-world variables such as acceleration and stopping distances. The goal of our simulations is to observe how the performances of redundancy reduction methods are affected by the prevalence of redundant data and volume of network traffic.In order to minimize the effect of random factors (such as node density variation) across simulation runs, we adopted a uniform density model with the following custom road layout, as shown in Fig. 4. The layout is twenty 900 m road segments, spaced every 100 m, in a rectangular grid.Vehicles are initially distributed uniformly and are randomly assigned to travel clockwise or counterclockwise in a 500 m square.For example, a node starting at point A (Fig. 4) would follow either the circuit (indicated in yellow) A-B-C-D-A or A-D-C-B-A.In effect, every intersection will be the start and end of one route in each direction, clockwise and counterclockwise.Further, each such pair of routes is assigned 1 % of the total vehicular traffic, half in each direction. In order to simulate realistic travel behaviors of vehicles moving toward specific destinations, each simulation run models 40 s of real time.This shortness ensures that vehicles will complete less than half a circuit during a simulation run.Although our mobility model is not wholly realistic, all vehicles are effectively proceeding directly to a destination.Moreover, the uniform traffic distribution reaches equilibrium quickly; longer simulation runs do not materially affect the network performance results.In any event, real data for large groups of vehicles at this level of detail is not readily available to researchers [31]. 
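To make the custom layout concrete, the sketch below enumerates the grid intersections and builds, for each, a 500 m square circuit travelled in one of the two directions, roughly as in Fig. 4. The coordinates, the direction labels, and the way a circuit is folded back inside the 900 m extent are illustrative; the actual simulations use the STRAW mobility module rather than this code.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the uniform-density road layout: intersections every 100 m on a 900 m grid,
// each anchoring a 500 m square circuit travelled clockwise or counterclockwise.
public class RoadLayoutSketch {

    static final int SPACING = 100;  // metres between parallel road segments
    static final int CIRCUIT = 500;  // side length of each square circuit
    static final int EXTENT = 900;   // length of each road segment

    // Ordered corner points (x, y) of the circuit starting at (x0, y0).
    static List<int[]> circuit(int x0, int y0, boolean clockwise) {
        // Fold the square toward whichever side keeps it on the 0..900 m grid.
        int dx = (x0 + CIRCUIT <= EXTENT) ? CIRCUIT : -CIRCUIT;
        int dy = (y0 + CIRCUIT <= EXTENT) ? CIRCUIT : -CIRCUIT;
        int[][] corners = {{x0, y0}, {x0, y0 + dy}, {x0 + dx, y0 + dy}, {x0 + dx, y0}};
        List<int[]> route = new ArrayList<>();
        if (clockwise) {
            for (int[] c : corners) route.add(c);
        } else {
            for (int i = corners.length - 1; i >= 0; i--) route.add(corners[i]);
        }
        return route;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        List<List<int[]>> routes = new ArrayList<>();
        for (int x = 0; x <= EXTENT; x += SPACING) {
            for (int y = 0; y <= EXTENT; y += SPACING) {
                routes.add(circuit(x, y, rng.nextBoolean())); // one vehicle start per intersection
            }
        }
        System.out.println("circuits anchored at grid intersections: " + routes.size()); // 100
    }
}

Assigning both a clockwise and a counterclockwise route to every intersection, as described in the text, then yields the 1 % of total traffic carried by each route pair.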
Because of the relatively high node density in our simulations, we use a shorter transmission range than the 300-1,000 m typical of VANET research.For dense urban environments (as opposed to highways), shorter transmission ranges simply enable greater network throughput without great risk of connectivity gaps.The 802.11p (5.9 GHz) wireless standard from [28] gives a data rate of 12 Mps and mean transmission range of 50 m with Fig. 4 Road layout standard deviation 4 m.Although JiST/SWANS does not yet simulate 802.11p, its 802.11b similar 11 Mps data rate and we set the mean transmission range to 45 m to approximate the same network throughput, the shorter range of 802.11b here compensating for its lower data rate. We include two scenarios defined by the network-wide query posting rate: a low network traffic one with 15 queries posted per second (in Sect.5.1) and a high network traffic one with 30 per second (in Sect.5.2).To scale network load linearly, the query variety is proportional to the posting rate-400 different query specifications are possible in Sect.5.1 and 800 in Sect.5.2.For each query, only one node has the matching response; however, we do not assume this is known, so even after being matched, queries (and any appended responses) continue to replicate throughout the network. Mapping of queries and responses is abstracted; our concern is the prevalence of data redundancy rather than its causes.Queries and their matching responses are mapped to a square search space with length L; this corresponds to the prevalence of redundant response data rather than any particular distance within our simulations.Within each scenario, L may vary from 50 to 150 units.Each query q is a rectangle with length x(q) and width y(q).The dimensions x and y vary from 5 to 15 units and are taken separately from a uniform distribution.Each unit represents 10 bytes of data; a query's range may contain from 25 to 225 units and the corresponding response from 250 to 2,250 bytes. Each data point represents the average results from ten simulation runs of 40 s each, with crossbars indicating one standard deviation above and below the average.We present results for delivery (the percentage of queries that receive a matching response before expiry), response time (the average time between when a query is posted and the posting node receives the matching response), response packets (total broadcasts of response packets, with aggregated responses counting as one broadcast), and query packets (total broadcasts of unmatched queries).Results for packet delivery and response time are taken only from queries posted during the interval (10 s, 30 s).This gives 10 s for the network to reach equilibrium traffic and provides a complete lifetime to all tracked queries.The four models considered are: • Baseline (B) No aggregation or subsumption-based filtering.Since this model is invariant with respect to L, only one batch of simulations was run for each scenario and results appear as straight lines.• Aggregation (A) Aggregation is used but not subsumption-based filtering.• Subumption (S) Subsumption-based filtering is used but not aggregation. • Aggregation and subsumption (AS) Both aggregation and subsumption-based filtering are used. Summary of results • The aggregation model provides some improvement in delivery at low values of L (i.e., when redundancy is most common).• The subsumption model provides substantial improvements in both delivery and response time over the entire tested range of L. 
• For the aggregation and subsumption model, no statistically significant difference in delivery is observed from subsumption applied alone. Low network traffic In this scenario, the network query posting rate was 15 per second, corresponding to one query per vehicle every 40 s. For each posted query, one node within 500 m of its source holds the matching response.Results for delivery are given in Fig. 5, below.The B model provides delivery of about 86 %.At the low end of query range L, the A model improves delivery by about 5 %.However, this improvement rapidly dissipates as L increases.For testing the null hypothesis that the true mean values of the baseline and the other models are the same, p values are given in Table 2. The S and AS models both show a large and statistically significant improvement over the baseline throughout the entire range of tested L values.They further are very similar, even at low L values, suggesting the S model alone encapsulates the potential performance gains of aggregation. Response times are given in Fig. 6.The A model is very similar to the baseline, although slightly slower.This arises from the cumulative effect of small delays in forwarding response packets; with the larger aggregated packets, the first transmission of any response packet by a given node tends to occur later.The AS models, on the other hand, greatly reduce response time because many queries can be locally fulfilled using locally available data. Packet counts are given for queries and responses in Figs.7 and 8, respectively.These counts only consider transmissions during the middle 20 s of each simulation run.Some counter-intuitive effects are observed.For the A model, fewer response packets are broadcast than in the baseline but more query packets are.This occurs because aggregation allows all response packets to be transmitted sooner overall.Because SBSD with CACL selects packets according to their prior broadcasts, this causes queries to be broadcast sooner as well.Also, note that for the AS model, both response and query packet counts are lower than in the baseline.The cause is less favorable aggregation options.Overlap tends to be lower, so response packets tend to be larger than in the A model; this crowds out transmissions of query packets. Observed aggregation rates (the fraction of response packet transmissions that are in fact aggregated) and overlaps (the magnitude x) are given in Fig. 9 for models A and AS.Models B and S do not use aggregation and are omitted.Both the rates and overlaps decrease as the query range L increases (and the rate and overlap curves converge).Likewise, the rate is lower for model AS at low values of L because subsumption relationships are fairly common.Note that the product of rate and overlap indicates the expected throughput increase; even allowing for more collisions, the delivery increases due to aggregation fall far short. 
For example, at L = 50, the product is about 0.3. Using our performance gains model from Sect. 4.4, for the baseline, responses should be obtainable from about 86 % of the area around a given query source (i.e., the circle with radius 500 m). For the A model, then, responses should be obtainable from a fraction (1.3)(0.86) = 1.118 of that area, i.e., effectively all of it. Thus, delivery should be virtually 100 %. This is not simply due to repeat selection of the same aggregation candidates, as evidenced by the unimpressive gains both at high and low rates. Instead, the main cause is interruptions in vehicle flows from traffic signals, which prevent response forwarding to new nodes and so induce repeated broadcasts of the same aggregates to nodes already having them.

High network traffic

In this section, we observe how the four models perform under a heavier network load. Queries are posted at a network rate of 30 per second, corresponding to one per vehicle every 20 s. As in Sect. 5.1, again one node within 500 m of each query source has the matching response. The higher posting rate reduces the flooding area for each query, which we expect to reduce delivery for the baseline model. The higher posting rate will also make nodes process a greater variety of responses, potentially allowing more opportunities for aggregation and subsumption.

Results for delivery are given in Fig. 10. Although the baseline model achieves only about 65 % delivery, the pattern resembles Fig. 5. Again, the A model increases delivery by about 5 % at L = 50 and rapidly converges to the baseline as L increases. This is because the set of secondary packet candidates at each node is essentially the same as in Sect. 5.1. Since node density and response size distribution are the same, each node's minimum forwarding utility u_min would be the same.

For testing the null hypothesis that the true mean values of the baseline and the other models are the same, p values are given in Table 3. As we observed in our low network traffic scenario, the S and AS models both show a statistically significant improvement over the baseline throughout the entire range of tested L values. The similarity of the S and AS model results again shows that the S model encapsulates the potential performance gains of aggregation.

The S and AS models again provide almost 100 % delivery at L = 50 and almost a 10 % gain over the baseline at the highest tested values of L. This is partly due to the ability to reuse local data. That is, while pairwise aggregation at best can increase throughput by a factor of two, subsumption can locally answer any number of queries seeking similar data. Still, as the query range increases, S and AS converge toward the baseline's performance. No material differences were observed between the S and AS models.

Response time (Fig. 11) also resembles the results from Sect. 5.1, although somewhat slower for all four models. The curve for the baseline is almost perfectly aligned with that of the A model. Because of the higher query posting rate, a greater proportion of network traffic is devoted to forwarding queries. Note that all four models show higher query packet counts (Fig. 12) than they did in Sect. 5.1. The response packet counts (Fig. 13) are much nearer to their Sect. 5.1 values because response packets are much larger than the unmatched queries.

However, we do observe slightly higher aggregation rates and overlaps in both the S and AS models (Fig. 14)
because CACL considers the effect of aggregation when estimating a node's future transmission capacity. That is, when aggregation is more extensive, nodes can transmit more responses per unit time, decreasing their u_min and letting them select secondary packets from a larger set of candidates. This effect is not, however, sufficient here to achieve any noticeable gains in delivery or response time (comparing A to B or AS to S).

Conclusion

Within current technological capabilities and wireless standards, long transmission distances are poorly suited for the dense vehicle populations commonly observed in cities. This is especially true when vehicles seek information regarding vehicles and traffic flows in their own immediate vicinities. Short transmission distances avoid congesting the network with information beyond the area in which it would be of wide interest. Yet, the data processing capabilities of mobile devices continue to grow exponentially. Accordingly, it is worthwhile to consider not only the potential of shorter transmission distances but also how data might be managed in order to reduce the network's bandwidth requirements. We have compared two localized models for reducing redundant data transmissions in such dense vehicular environments; to the best of our knowledge, this comparison has not been previously performed for VANET scenarios.

We observed that pairwise response aggregation can provide material gains in response delivery when the query range is very limited. However, we also observed that vehicular mobility limited the gains to a fraction of their theoretical potential. In contrast, query filtering by subsumption gave much greater performance improvements, both in delivery and response time. The potential cost is that locally obtained responses may not be current and some data points may be missed entirely. While our research should not be taken as a blanket condemnation of aggregation, it strongly suggests that filtering is much the better option for flooding-based applications.

Our future work will develop this query filtering model to provide more current response data. This will entail a probabilistic approach to refreshing data caches. Individual data elements will be transmitted according to how rapidly they change and the intensity of the demand for them. By applying a ranking function to such data and synthesizing response packets that may fulfill the missing parts of many queries, we expect to further improve the performance of our subsumption-based query filtering model.

Fig. 1 Vehicle configuration
Fig. 2 Mapping queries and responses
Table 2 Two-tailed p values for delivery against baseline, low traffic
Table 3 Two-tailed p values for delivery against baseline, high traffic
2018-01-23T22:40:25.532Z
2015-02-08T00:00:00.000
{ "year": 2015, "sha1": "42157a1a214f18eec6e0868ee37a89d9b5acbfee", "oa_license": "CCBY", "oa_url": "https://figshare.com/articles/journal_contribution/Reducing_Redundant_Data_Transmissions_in_Wireless_Ad_Hoc_Networks_Comparing_Aggregation_and_Filtering_1/10753316/1/files/19264439.pdf", "oa_status": "GREEN", "pdf_src": "SpringerNature", "pdf_hash": "2cf0bbed0106cf02c131661aaee91a36ad566898", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
11758003
pes2o/s2orc
v3-fos-license
FoxO3 is a negative regulator of primary CD8+ T cell expansion but not of memory formation The generation of CD8+ T cells by vaccination represents an important goal for protective immunity to infectious pathogens. It is thus of utmost importance to understand the mechanisms involved in the generation of optimal CD8+ T cell responses. The forkhead box O (FoxO) family of transcription factors plays a crucial role in cellular responses to environmental change. Among them, FoxO3 is critically involved in the regulation of cellular proliferation, apoptosis, metabolism, and stress resistance to withdrawal of nutrients or cytokine growth factors. Since the role of FoxO3 has been poorly studied in the immune system, here we have evaluated its involvement in the CD8+ T cell response. We observe that CD8+ T cells deficient for FoxO3 undergo a significantly greater primary expansion than their wild-type counterparts in response to both infectious (vaccinia virus) or non-infectious (non replicating cellular vaccine) immunogens, resulting in a larger cohort of cells following contraction. These survivors, however, do not undergo a greater secondary response than wild type. Taken together, our data show that FoxO3 is a negative regulator of the CD8+ T cells response, specifically during the primary expansion. Introduction Understanding the mechanism(s), which promote effective CD8 + T cell responses, is essential to the design of new vaccines against infectious diseases and cancer. CD8 + T cells play an essential role in the clearance of either infected or abnormal cells through a variety of effector mechanisms 1,2,3 . This is preceded by a robust primary expansion in which rare precursors expand up to 10,000 fold 4 . After infection is brought under control, the majority of the cells will die 5 (90-95%), with the remaining cells forming a long-lived memory pool, which can self-renew and rapidly produce new effector cells upon antigen re-encounter. In the recent years the role of cellular metabolism in regulating CD8 + T cell function and memory has come to the forefront. Recent studies have shown that metabolism is important to regulate CD8 + T cell fate, survival and death 6,7,8,9 . Several molecules have been implicated in T cell metabolism. The phosphatidyl-inositol-3-OH kinase (PI(3)K) pathway and subsequently Akt are activated after TCR triggering or cytokine stimuli such as IL-2 or IL-15. Akt activation is due to its phosphorylation status, and mTORC2 is involved in the phosphorylation of one of the Akt serine, whereas Akt activates mTORC1. Akt has been shown to negatively regulate FoxO molecules 6,10,11 , preventing their entry into the nucleus and their function as transcription factors. The FoxO transcription factors are mammalian orthologs of the Caenorhabditis elegans longevity protein Daf-16 that are widely conserved through evolution and have been shown to play critical roles in cellular responses to environmental changes 12,13 . Three of the four known FoxO orthologs (FoxO1, 3, and 4) have overlapping targets of transcriptional regulation and appear to be widely expressed and similarly regulated 14 . FoxO1 and FoxO3 are the main isoforms expressed in immune cells, but their expression levels differ between organs of the immune system and between lymphoid and myeloid cell types: FoxO1 expression is higher in spleen and lymph nodes as compared to FoxO3, which is the main transcript detected in the thymus and bone marrow 15 . 
FoxO3 plays a crucial role in regulating cellular proliferation, apoptosis, metabolism, and stress resistance to withdrawal of nutrients or cytokine growth factors (reviewed in 10 ). Like FoxO1 and −4, the functions of FoxO3 are regulated post-transcriptionally, largely through phosphorylation 16 . Although a role for FoxO1 in the CD8 + T cell memory formation has been established, 17,18,19,20 little is known about the function of FoxO3 in the CD8 + T cell response. Information on the role of FoxO3 in immune functions has emerged from the study of genetically deficient (knockout) mice 21 . This study did not find evidence of immunological abnormalities in unmanipulated FoxO-deficient mice, either by histology or enumeration of T and B cells 21 . However, acute infection of FoxO3 −/− mice with lymphocytic choriomeningitis virus (LCMV) or Vesicular Stomatitis Virus (VSV) revealed a more than 3 fold increase in the number of antigen-specific CD4 + and CD8 + T cells. The increased expansion of the primary responder lymphocytes coincided with dysregulated cytokine production by dendritic cells (DC) 21 , and highlights a key role for FoxO3 in the regulation of antigen presenting cell (APC) function, which was confirmed by subsequent studies 22,23,24 . More recently, a cell-intrinsic role for FoxO3 in regulating the CD8 + T cell response to infectious pathogens such as LCMV 25,26 or listeria 27 was identified. This was based on the observation that CD8 + T cells lacking FoxO3, mounted proportionally larger responses, which was attributed to decreased apoptosis either during the primary expansion phase 25,26 or during the contraction phase 27 . Of critical relevance to memory function, however, none of these studies assessed whether secondary responses were influenced by a cell-intrinsic role of FoxO3. We have now examined whether FoxO3 regulates both primary and memory (recall) responses. We find that, in response to both inflammatory and noninflammatory immunogens, FoxO3-deficient CD8 + T cells undergo a greater primary but comparable secondary response to wild-type (WT) controls. Therefore FoxO3 regulates primary but not memory CD8 + T cell responses. FoxO3 regulates the expansion of primary CD8 + effector T cells To evaluate the intrinsic role of FoxO3 in the CD8 + T cell response, we co-transferred a small number of WT and FoXO3 −/− CD8 + T cells, both expressing a transgenic TCR (OT-I) specific for the Chicken Ovalbumin (OVA), into WT recipient mice. Hosts were then immunized with either a non-replicating cellular vaccine (Actm-OVA K b−/− splenocytes) 28 or replicating recombinant vaccinia virus containing OVA (vaccinia-OVA). Seven days after the immunization, at the peak of the primary response, we evaluated the expansion of both populations in lymphoid and non-lymphoid organs and observed that the CD8 + T cells lacking FoxO3 expanded significantly more than their WT counterparts ( Figures 1A, B and C). However, the percentages of so called memory precursor effector cells defined by the expression of CD127 and lack of KLRG1 expression and short-lived effector cells (SLEC, CD127 − KLRG1 + ) 29 were comparable between the WT and FoxO3 −/− CD8 + OT-I primary responder cells 7 days post-immunization ( Figure 2A). This was confirmed by the fact that we did not see, any difference in the expression of T-bet (expressed by effector cells) or Eomes (expressed by memory cells) ( Figure 2B). These data demonstrate that FoxO3 negatively regulates the overall expansion of primary CD8 + effector T cells. 
FoxO3 does not influence the functional differentiation of primary CD8 + effector T cells To investigate if in addition to the quantity, FoxO3 also negatively regulates the quality of the primary response, cytokine production was measured 7days post immunization. There was no difference noted however, in the proportion of cells able to produce IFNγ or IFNγ and TNF-α, between the WT and the FoxO3 −/− CD8 OT-I T cells ( Figure 3A). Moreover there was no difference in the amount of IL-2 production ( Figure 3B), indicating that FoxO3 does not control the cytokine production of primary CD8 + effector T cells. FoxO3 does not control the initial contraction or expansion phase of secondary responder CD8 + T cells To investigate if FoxO3 controls the magnitude of the memory response through enhanced cell death, we analyzed the contraction phase that follows the primary expansion. In contrast to the enhanced expansion of the FoxO3 −/− primary responder OT-I CD8 + T cells, we did not observe any significant change in the proportion between the WT and FoxO3 responder cells during the contraction phase, indicating that FoxO3 does not promote cell death or counteract survival of the primary responder cells ( Figure 4A). To investigate if, similar to the primary expansion, FoxO3 also negatively regulates the magnitude of the secondary response; recipient mice were rechallenged with Listeria-OVA 43 days after the initial priming and analyzed 5 days later ( Figures 4A and B). WT and FoxO3 −/− OT-I responder cells expanded at an equal rate during the secondary response, indicating that in contrast to the primary response, FoxO3 does not control the magnitude of the secondary expansion. Discussion In this study we have investigated the impact of FoxO3 during the CD8 + T cell response. Using co-transfer of antigen-specific WT and FoxO3 −/− CD8 + T cells we were able to show that FoxO3 plays a cell intrinsic role in the primary expansion of the effector cells that follows the first encounter with their specific antigen. On the contrary, we did not find any involvement of FoxO3 in the functional differentiation of the primary effector cells nor did we find an effect of FoxO3 during the contraction phase or memory formation. Firstly we found that the FoxO3 deficient CD8 primary responder cells expanded significantly more than the cells expressing FoxO3, indicating a negative regulatory function of FoxO3 in the primary expansion of the CD8 + T cell. These results together with previous findings showing that FoxO3 induces the expression of the pro-apoptotic molecules Bim and Puma in CD8 + T cells (27), suggest that FoxO3 promotes cell death during the initial primary response. We did however, not observe any difference in the quality of the effector response and both WT and FoxO3 −/− CD8 + T cells were able to produce IL-2 and IFNγ and TNF-α to the same extent, implying that FoxO3 does not affect the quality of the effector CD8 + T cells generated following infectious or non-infectious immunization. Surprisingly and in contradiction to two previously published studies 25, 26, 27 , we did not find any effect of FoxO3 during the contraction phase or the generation of the secondary response. The differences could be due first to the fact that we are using a FoxO3 deficient strain of different origin compared to the other studies, where in the Sullivan et al. 
and the Tzelepis et al., studies a FoxO3a-trap was used whereas we used the FoxO3 Kca 21 (FoxO3 −/− ), but since in both cases the FoxO3 protein is absent, it should not explain the differences in our results. In addition, different immunization strategies were used in the published studies compared to our approach here, which might contribute to the discrepancy. To control for this, however, we included 2 types of immunization strategies, a cellular vaccine and an infectious pathogen. Since both approaches rendered the same results, we concluded that the different immunization strategies are likely not the cause of the different outcome of the studies. Another possibility is the difference in the number of cells that was transferred which was much larger in the published studies as compared to our study here. It is well established that the precursor frequency has an impact on the efficiency and the nature of the memory generation 28,30 . In fact most facets of the CD8 + T cell response, including kinetics, proliferation, surface molecule expression, effector function and the efficiency of memory generation are substantially altered when the initial number of TCR transgenic T cells is sufficiently high to inhibit the endogenous CD8 + T cell response to the same Ag. Those data suggest that the use of TCR transgenic T cells to model the endogenous CD8 + T cell response may only be reliable under conditions where these cells represent only a fraction of the endogenous repertoire. In our case using a low precursor frequency, we noted no difference in the ratio of WT compared to FoxO3 deficient cells during the contraction or upon a secondary challenge, implying that FoxO3 did not affect those phases. Altogether, our results indicate that FoxO3 is not essential for the generation of CD8 + memory T cells. This is in contrast to FoxO1, which was shown to promote CD8 + central memory formation 17,18,19,20 by repressing T-bet and the effector differentiation. Our results are also in line with the notion that there is no compensation by FoxO3 when FoxO1 is absent or conversely. Thus it seems that FoxO transcription factors have differential roles in the CD8 + T cell response where FoxO3 regulates the expansion during priming whereas FoxO1 repress the effector function and participates in the central memory formation. Author Manuscript Author Manuscript Author Manuscript Author Manuscript Material and methods Mice C57BL/6, were purchased from The Jackson Laboratory (Bar Harbor, Maine). OT-I CD45.1 + and Act-mOVA/K b−/− mice on a C57BL/6J background have been previously described 31 . The OT-I FoxO3 −/− CD45.1 + strain was generated by intercross between FoxO3 Kca and OT-I CD45.1 + mice. Mice were maintained by in-house breeding at the La Jolla Institute for Allergy and Immunology under specific pathogen-free conditions in accordance with guidelines by the Association for Assessment and Accreditation of Laboratory Animal Care International. Statistical analysis Data were analyzed using PRISM software (GraphPad, San Diego, CA). Differences between groups were examined for statistical significance using an unpaired two-tailed Student's t test. Unless otherwise indicated, data represent the mean ± SEM, with * = p<0.05 considered statistically significant. FoxO3 does not regulate the contraction and secondary expansion. A-B 500 OT-I CD45.1 FoxO3 −/− and 500 OT-I CD45.2 were co-injected into WT (CD45.1/2) mice one day before immunization. 
The mice were infected with 1×10^6 vaccinia-OVA or immunized with 5×10^6 Actm-OVA Kb−/− splenocytes. In A, the response was measured in the blood at different time points during priming, contraction, and memory. The mice were then rechallenged at day 43 with 5000 Lm-OVA, and the response was measured in the blood 5 days later. B shows the FACS plots during the memory and secondary response. Data are from groups of 4 to 5 mice and show the most representative result of 2 to 3 independent experiments.
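The group comparisons described under Statistical analysis were performed in Prism; purely for illustration, the sketch below computes the mean ± SEM for two groups and the unpaired Student's t statistic with pooled variance. The numerical values are invented and are not experimental data, and the two-tailed p value would still have to be read from a t distribution with n1 + n2 − 2 degrees of freedom (for example, via a statistics library).

// Illustrative sketch of the unpaired two-tailed Student's t test (pooled variance)
// and mean +/- SEM reporting; the input values below are made up, not experimental data.
public class UnpairedTTestSketch {

    static double mean(double[] v) {
        double s = 0;
        for (double x : v) s += x;
        return s / v.length;
    }

    static double variance(double[] v) {   // unbiased sample variance
        double m = mean(v), s = 0;
        for (double x : v) s += (x - m) * (x - m);
        return s / (v.length - 1);
    }

    static double sem(double[] v) {        // standard error of the mean
        return Math.sqrt(variance(v) / v.length);
    }

    static double tStatistic(double[] a, double[] b) {
        double pooled = ((a.length - 1) * variance(a) + (b.length - 1) * variance(b))
                / (a.length + b.length - 2);
        return (mean(a) - mean(b)) / Math.sqrt(pooled * (1.0 / a.length + 1.0 / b.length));
    }

    public static void main(String[] args) {
        double[] groupWt = {1.0, 1.2, 0.9, 1.1};  // hypothetical WT values (n = 4)
        double[] groupKo = {2.1, 1.8, 2.4, 2.0};  // hypothetical FoxO3-deficient values (n = 4)
        System.out.printf("WT: %.2f +/- %.2f%n", mean(groupWt), sem(groupWt));
        System.out.printf("KO: %.2f +/- %.2f%n", mean(groupKo), sem(groupKo));
        System.out.printf("t = %.2f with %d degrees of freedom%n",
                tStatistic(groupWt, groupKo), groupWt.length + groupKo.length - 2);
        // Two-tailed p is then looked up from the t distribution; * denotes p < 0.05.
    }
}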
2016-05-12T22:15:10.714Z
2014-09-07T00:00:00.000
{ "year": 2015, "sha1": "76bf5407893c45028f652aa429380f942281a421", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc4324096?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "76bf5407893c45028f652aa429380f942281a421", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
256036103
pes2o/s2orc
v3-fos-license
Refined geometric transition and qq-characters We show the refinement of the prescription for the geometric transition in the refined topological string theory and, as its application, discuss a possibility to describe qq-characters from the string theory point of view. Though the suggested way to operate the refined geometric transition has passed through several checks, it is additionally found in this paper that the presence of the preferred direction brings a nontrivial effect. We provide the modified formula involving this point. We then apply our prescription of the refined geometric transition to proposing the stringy description of doubly quantized Seiberg-Witten curves called qq-characters in certain cases. JHEP01(2018)025 1 Introduction We have encountered the great developments of exact methods and a variety of their applications in quantum field theory, for instance, the Seiberg-Witten theory [1,2] and the Nekrasov partition function for instanton counting problem [3,4] as prominent landmarks, which are part of subjects in this paper. Correspondingly, the string theory and M-theory realization of these ingredients have been established and passed through a pile of checks in literatures. Specifically, (the 5d uplift of) the Nekrasov partition function can be systematically obtained by using the topological vertex [5] that is a powerful ingredient to calculate the amplitude in the topological string theory [6][7][8][9] for a given Calabi-Yau threefold as the target space. The free energy of the topological string amplitude is expanded standardly with respect to the genus and the string coupling constant. The latter is translated into the Ω-backgrounds ( 1 , 2 ) in a special limit 1 + 2 = 0 which is called the unrefined (self-dual) limit. Since the Nekrasov partition function could be actually formulated for a general value of ( 1 , 2 ), the refined version of the topological vertex to include two parameters given by has been suggested by [10,11], which was named the refined topological vertex. 1 Their definition could successfully reproduce the Nekrasov partition function with general Ωbackground in many circumstances and bring us to meaningful outcomes from string theory to supersymmetric gauge theories (basically with eight supercharges). It has been shown in [12][13][14] that the open string and closed string sector in the usual (i.e. unrefined) topological sting theory is just linked by the geometric transition (open/closed duality). However, underlying physics for the geometric transition in the refined topological string theory that we would refer to as the refined geometric transition is not yet well understood mainly because there is no known world-sheet interpretation of it. Recently, great quantitative support for the refined geometric transition was reported by [15]. The prescription for geometric transition in terms of the refined topological vertex has been proposed [16] and basically checked in the context of the AGT correspondence [17,18], but it is not complete due to the possible choice of the so-called the preferred direction on the refined topological vertex. The topological vertex is graphically a trivalent vertex, and the proper point of the refined vertex different from the unrefined one is the existence of the preferred direction that is a special direction out of three edges of the vertex. This does result from the inclusion of (q 1 , q 2 ) into the topological vertex. 
It is labeled by a Young diagram assigned on each edge, and as well, we pick up two of three edges to put (q 1 , q 2 ) on. This means that the the preferred direction as the last edge has a special role on the computation of the refined topological string amplitude. In the first half of the paper, it will be argued that the refined geometric transition has to be sensitive to the choice of the preferred direction, and we will provide another prescription to implement the JHEP01(2018)025 refined geometric transition on the web diagram constructed by vertices with the preferred direction that differs from the conventional one mentioned above. In order to check the consistency of our prescription, we explore double quantization of the Seiberg-Witten geometry, which is called the qq-character, by utilizing the refined geometric transition. The qq-character has been recently introduced by Nekrasov in the context of the BPS/CFT correspondence [19][20][21]. It is a natural gauge theoretical generalization of the q-character of quantum affine algebra [22], corresponding to the Nekrasov-Shatashvili limit (q 1 , q 2 ) → (e , 0) [23,24], because the qq-character is obtained with generic Ω-background parameter (q 1 , q 2 ). There are a lot of interesting connections with, for example, quiver gauge theory construction of W-algebra (quiver W-algebra) [25,26], 2 double affine Hecke algebra (DAHA) and Ding-Iohara-Miki (DIM) algebra [28][29][30][31][32], and so on. The qq-character plays a role as a generating function of the chiral ring operator, and is realized as a defect operator. For example, it becomes a line operator in 5d gauge theory, which is a codimension-4 defect [33]. In this paper we propose how to realize the qq-character in refined topological string by the brane insertion, analyzed using the refined geometric transition. In particular, the codimension-2 defect operator, corresponding to the surface operator in gauge theory, is obtained by inserting a defect brane to the Lagrangian submanifold of the Calabi-Yau threefold [16,[34][35][36][37][38]. We show that the Y-operator, which is a codimension-4 building block of the qq-character, can be constructed by inserting two codimension-2 defect operators. Although the Y-operator itself has a pole singularity, we obtain the qq-character, having no singularity, as a proper combination of Y-operators. 3 The regularity of the qq-character is a nontrivial check of our prescription for refined geometric transition. The remaining part of this paper is organized as follows: in section 2 we propose a new prescription for geometric transition in refined topological string. In order to obtain a proper contribution of the Lagrange submanifold, we have to consider the shift of parameters, which is not realized as a shift of the Kähler parameter, when the defect brane is inserted to the inner brane. In section 3 we apply the prescription of the refined geometric transition to the qq-character, which is a generating function of the chiral ring operator. We examine several examples, especially A 1 and A 2 quivers, and obtain a consistent result with quiver gauge theory. This shows a nontrivial check of our prescription of refined transition. We conclude with summary and discussions in section 4. 
Geometric transition in the refined topological string We would upgrade the operation of the geometric transition in the refined topological string theory where the partition function can be in principle evaluated by the refined topological vertex [10,11] for a given Calabi-Yau geometry (see appendix A.2 for our convention). As there is a much wide variety of Calabi-Yau geometries, for simplicity and a purpose of the application to qq-characters, we restrict our argument to a simple class of the geometries JHEP01(2018)025 visualized by a web diagram in figure 1. The thin dotted line connecting the upper and lower end of the diagram represents a compactified direction in the geometry. Note that this type is essentially equipped with the structure of the resolved conifold. It is known that this geometrical data can be dualized to type IIB string theory with D5-branes, NS5branes, and (1, 1)-fivebranes. One of crucial ingredients in calculating the refined topological string amplitude is the preferred direction on the refined topological vertex. It is an artificial technique for formalism, and final results with different choices of the preferred direction have to coincide (at least without any normalization). However, we claim in this paper that the refined geometric transition should be sensitive to where the preferred direction is set. To explain this point, at first we give a brief review of the prescription for the refined geometric transition that has been used in the literatures [16,[34][35][36][37][38] in section 2.1, and then section 2.2 contains our proposal that actually clarifies the effect of the different selection of the preferred direction. The quantitative argument which we rely on is shown in section 2.3. Conventional prescription Since there is no established world-sheet description of the refined topological string theory so far, one need to fix a guiding principle for the refined geometric transition from another context. One of frameworks to provide such a principle is the AGT correspondence [17] and its 5d uplift [39,40]. This duality can be encoded into type IIB string theory presented by the (p, q)-fivebrane web diagram like figure 1. The dictionary between the (p, q)-web and the geometry allows us to compute the partition function of the corresponding gauge theory by utilizing the refined topological vertex [10,11], which in the 4d limit turns out to be consistent with the correlation function on the 2d conformal field theory (CFT) side in some cases. Soon after finding the AGT relation, its statement has been extended to include the correspondence between a surface operator in the 4d N = 2 SU(2) gauge theory and a degenerate field in the Liouville CFT [18]. This circumstance can also be realized in the framework of the (p, q)-web. The surface operator is engineered by inserting a D3-brane into the (p, q)-web, which is further mapped to a Lagrangian brane 4 representing a Lagrangian submanifold in the corresponding Calabi-Yau. The computation of the topological string partition function must be incorporated with contributions from open strings when the JHEP01(2018)025 Figure 2. The geometric transition operated on the unpreferred direction. The horizontal axis is compactified and the dots indicate the preferred direction. target space is a Calabi-Yau with specified Lagrangian submanifolds. Although there is no established formula of the refined version of the open topological vertex, this can be evaluated by implementing the geometric transition. 
In the 4d limit, the result obtained in this way is actually compatible with the correlation function in the presence of a degenerate field in the Liouville CFT. We would sketch concretely the rule of the refined geometric transition that has been lead from the AGT story. On the web diagram as shown in figure 1, each internal line implies the topology of CP 1 and is equipped with a Kähler modulus. Let Q (s) a be a Kähler modulus for the s-th diagonal internal segment from the left in the a-th horizontal (uncompactified) line from the top (see figure 4 for our convention). The point of calculations along the AGT story with this web diagram is that the preferred direction is chosen on the vertical (compactified) direction, which is depicted as black dots in figure 2 (throughout the paper, the vertical axis is always the compactified direction and the horizontal one is uncompactified). The geometric transition can be implemented with the horizontal (uncompactified) line: with appropriately tuning Kähler moduli for diagonal lines attached to the b-th horizontal line, this line is detached from the vertical lines and moved away. The geometric transition for the web diagram of our interest is essentially the same as that of the conifold, passing through from the resolved conifold to the deformed conifold and vice versa. If one would like to suspend a Lagrangian brane on the r-th vertical line in the process shown in figure 2, the Kähler moduli are specialized as 5 with m, n ∈ Z. This prescription can nicely produce the AGT relation with the surface operator. Consequently, the refined geometric transition associated with the unpreferred direction is operated by (2.1). We are closing the review with commenting on the integers m, n in (2.1). It has been argued in [16] that, in the 4d limit, the adjustment such that m, n > 0 might produce Figure 3. The geometric transition operated on the preferred direction. The horizontal axis is compactified and the dots indicate the preferred direction. general (non-elementary) surface operators supported on the surface described by where (z 1 , z 2 ) are complex coordinates on two-dimensional planes respecting the rotation by the Ω-background parameters ( 1 , 2 ), respectively. This discussion seems to work for such physical surface operators, however, for the present we do not have a requirement to restrict the range of m, n to be non-negative from the refined topological string point of view. This is why we take m, n to run for all integers. Although the refined geometric transition with m, n < 0 would engineer the unphysical surface operators in the sense that these do not follow the standard discussion (2.2), such branes at least in the unrefined (q 1 q 2 = 1) context are referred to as anti-branes [41]. We would return to this point in section 4. New prescription We turn to giving our new prescription for the refined geometric transition that takes the issues of the preferred direction into account. On computing the refined topological string amplitude for the web diagram of our main interest, the preferred direction is chosen along the horizontal, i.e., uncompactified direction, marked by dots in figure 3. The difference of the preferred direction from the previous situation requires us to introduce small modification for the refined geometric transition. In this subsection, we write down the process to implement the refined geometric transition for the current choice of the preferred direction. 
A point which we should stress is to put the preferred direction on the uncompactified (horizontal) direction where the geometric transition can be carried out. In addition, for consistency, it is required that the contributions from the Lagrangian brane is not produced if the web with (M, N ) lines simply reduces to the one with (M, N − 1) lines without the Lagrangian brane after the geometric transition, where M and N stand for the number of compactified (vertical) and uncompactified (horizontal) lines, respectively (see figure 4(a)). Let us consider the geometric transition that is executed on the b-th horizontal line with the Lagrangian brane emerging on the r-th vertical line (figure 3). Our proposal for JHEP01(2018)025 the refined geometric transition under the above requirement is comprised of the following three steps: 0. As a supposition the preferred direction is taken to be the uncompactified axis (horizontal here), and then one computes the refined closed topological string amplitude as done in [42,43]. 1. For s ≥ r, variables w 2. Then, tuning the Kähler moduli as with m, n ∈ Z. 3. Finally, shifting variables w (r) ab (i, j) for all a by hand, while others in (2.7) are kept unchanged. We should make a comment on the shift of step 3 in our prescription. The shift (2.5) has nothing to do with the Kähler parameters: any Kähler parameter is not shifted together with this operation, but rather, with viewing w (r) ab (i, j) as a single variable, it is just to add − 1 − 2 to it. This is purely a technical thing which is originated from the difference of the specialization (2.4) of the Kähler moduli Q (s) b for s < r and s > r. The reason why we need this shift is to satisfy the requirement for consistency that the refined geometric transition without generating a Lagrangian brane reproduces the closed topological string amplitude (see below for numerical details). The step 2 and 3 reflect the dependence of the refined geometric transition on the preferred direction. Indeed, it is expected that, even though the closed topological string amplitude should be independent of the preferred direction, the open one really depends on whether or not the Lagrangian brane is attached to the preferred direction (see, e.g., [44,45]). This is basically because the Lagrangian brane can end on the (p, q)-fivebrane with general (p, q), therefore the geometric transition should be characterized by (p, q) in addition to (m, n). This implies that the position of the preferred direction put on the (p, q)-fivebrane leads to the inequivalent result of the open topological string amplitude. Both procedures of the refined geometric transition can reproduce correctly the identical result in the unrefined limit q 1 q 2 = 1 as expected. Our prescription seems compatible with this suggestion. A Lagrangian brane appears on only one vertical line upon a single sequence of the above geometric transition. If one desires to generate several Lagrangian branes on different vertical lines for a web diagram, it is necessary to consider a bigger web and repeat the procedure (2.3)-(2.5) many times (as demonstrated in section 3). JHEP01(2018)025 We will devote the next subsection to showing quantitative clarification how this process works and produces the refined topological string amplitude incorporating the contribution of the Lagrangian brane. In section 3, it will be discussed that the refined geometric transition initiated by our prescription gives possibly how to realize the qq-character from string theory. 
Derivation Our prescription given above seems a bit intricate rather than (2.1), and we would explain here why this works when the uncompactified line assigned with the preferred direction is removed upon the geometric transition. General formula for the partition function We are now concentrating on the compactified web shown as figure 4 with general (M, N ) lines. On the technique of the refined topological vertex (A.2), the partition function Z M,N for this web diagram has been derived as [43] with t representing the transpose of the Young diagram (figure 11(c)). We collect the definitions and notations in appendix A. Note that, for simplicity, we omit a complex parameter Q τ := e 2πiτ in the theta function as θ 1 (x) unless otherwise stated. We denote the Kähler moduli for diagonal, vertical, and horizontal internal segments by Q Also, we use the simplified symbols for the products of the Kähler moduli in the variables (2.7), defined as for the numerator, andQ for the denominator, where The second equality follows from the consistency condition to form internal hexagons (figure 4(b)). It has been revealed that this geometry actually realizes an elliptically fibered Calabi-Yau with the complex modulus Q τ identified as JHEP01(2018)025 We comment on the M-theory uplift of this picture. It is well known that type IIB string theory compactified on S 1 is dual to M-theory on the torus T 2 . The web as in figure 4 is rendered to the M-theory brane configuration where the stacks of M2-branes are suspended between separated M M5-branes on an asymptotically locally Euclidean (ALE) space equipped with Z N orbifolding. This duality supports the fact that the low energy theory on the present (p, q)-web are described by the tensor branch of the corresponding 6d N = (1, 0) theory. It has been argued in [42,43] that the partition function (2.6) captures the spectra of self-dual strings, called M-strings, wrapping T 2 in the 6d theory, and the Young diagrams µ (s) a label the ground states of M-strings. Actual process of the geometric transition When we perform the geometric transition on the b-th horizontal line such that the Lagrangian brane appears on the r-th vertical line, the main contribution that should be carefully treated is for all a. We would divide the calculation process for this into two parts with s < r and s ≥ r. Remark that we sometimes implicitly use the relation (2.12) to change the variables. For s < r. We firstly focus on the sector for s < r where the geometric transition (2.4) can straightforwardly work. One can easily see that (2.13) does not produce the nontrivial contribution unless (2.14) Accordingly, this condition is necessary in order to obtain the appropriate result for the partition function obtained via the geometric transition. Then, variables z (s) ab (i, j) and w (s) ab (i, j) can be rewritten as where we used the relation (2.11) for z (s) ab (i, j). With these expressions, the specialization (2.4) of the Kähler moduli results in JHEP01(2018)025 Indeed, (2.18) is the half of the contributions of the Lagrangian brane. This is just what we want because this reduces to 1 when m = n = 0, namely, no Lagrangian brane appear, as required. Actually, this expression matches with the result of [46]. Moreover, the weights in the sum of the Young diagrams change as 19) and this is nothing but the ones in the partition function for the web diagram with (M, N−1) lines. Our prescription for the refined geometric transition appropriately works for s < r. For s ≥ r. 
Let us turn to the sector for s ≥ r. In addition to the first step (2.3), by using (2.11), the relation holds under the restriction (2.14). As a result, we have Similarly for (2.19), the overall factor can be absorbed into the associated weight so that where we implemented the shift (2.5) as the third step of our prescription. Note that a due to (2.12). Namely, the shift (2.5) allows the remaining contribution (2.24) to satisfy the requirement that this becomes trivial when m = n = 0. As the conclusion of this subsection, the refined geometric transition in our scheme correctly produces open string contributions associated to the Lagrangian brane given by (2.18) and (2.24). qq-characters from refined geometric transition In this section, we apply our prescription for the geometric transition to the qq-character, which has been recently proposed in the context of the BPS/CFT correspondence [19][20][21]. We propose that when we consider the geometric transition so that two Lagrange submanifolds emerge, the contributions of two Lagrange submanifolds becomes Y-operator, depending on the position of the brane insertion. Let us examine our prescription with some examples. Seiberg-Witten geometry and qq-character Let us briefly remark some background of the qq-character in gauge theory. Nekrasov-Pestun [47] pointed out an interesting connection between the quiver gauge theory and the representation theory of the corresponding quiver. Their statement is that the Seiberg-Witten geometry of the Γ-quiver gauge theory in 4d is described by the characters of the fundamental representations of G Γ -group, where G Γ is the finite Lie group associated with the (ADE) quiver Γ under the identification of the quiver with the Dynkin diagram. In fact, the prescription of Nekrasov-Pestun uses the Weyl reflection to generate the Seiberg-Witten curve, which is generic and applicable to any quiver, even if there is no finite group G Γ corresponding to the quiver Γ. Let us check this process with A 1 quiver, which is the simplest example. Since G Γ = SU(2) for Γ = A 1 , the fundamental character is given by where the first term corresponds to the fundamental weight y, and the second term is generated by the Weyl reflection y → y −1 . On the other hand, the Seiberg-Witten curve is an algebraic curve given as a zero locus of the algebraic function where (x, y) ∈ C × C * for 4d and (x, y) ∈ C * × C * for 5d gauge theory. For A 1 quiver gauge theory with SU(N ) vector multiplet without any matter fields, the function H(x, y) turns out to be In other words, the curve is characterized by the polynomial relation 6 Now it is obvious that the l.h.s. agrees with the SU(2) character (3.1). It is possible to derive this polynomial relation from the Γ-quiver gauge theory partition function with the Ω-deformation [3], and taking the Seiberg-Witten limit ( 1 , 2 ) → (0, 0), which is essentially the same approach as Nekrasov-Okounkov [4]. In particular, the y-variable appearing in the algebraic relation is realized as an expectation value of the Y-operator, which we focus on in this paper, The Y-operator is a generating function of the chiral ring operators, so that it is realized as a codimension-4 defect operator. See [33] for its realization as the line operator in 5d gauge theory. Furthermore the Y-operator itself has a cut singularity in the complex plane x ∈ C in the Seiberg-Witten limit, and its crossing-cut behavior is indeed described by the Weyl reflection. 
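For orientation, the objects described in words in this subsection take the following standard form (the displayed equations (3.1)-(3.3) are not shown here, so the normalization is an assumption). The fundamental SU(2) character is χ_2(y) = y + y^{-1}, the second term being generated from the fundamental weight y by the Weyl reflection y → y^{-1}. For the A_1 theory with SU(N) vector multiplet and no matter, the Seiberg-Witten curve is the zero locus of H(x, y) = y + 𝔮 y^{-1} − T_N(x), i.e. the polynomial relation y + 𝔮 y^{-1} = T_N(x), where T_N(x) is a degree-N polynomial encoding the Coulomb moduli and 𝔮 the coupling; absorbing 𝔮 into a redefinition of y (as noted in footnote 6) turns the left-hand side into y + y^{-1}, which is the SU(2) character above. With y(x) = ⟨Y(x)⟩, crossing the cut of y(x) exchanges the two roots of this quadratic relation, y ↦ 𝔮/y, which is precisely the Weyl reflection.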
This is the reason why the Weyl reflection generates the Seiberg-Witten curve. It was then shown by Nekrasov-Pestun-Shatashvili [24] that this representation theoretic structure in gauge theory has a natural q-deformation: the Seiberg-Witten curve in the Nekrasov-Shatashvili (NS) limit ( 1 , 2 ) → ( , 0) [23] is promoted to the q-character, which was originally introduced in the context of the quantum affine algebra [22] with emphasis on its connection with the quantum integrable system. See also [48][49][50] for further developments. This means that the polynomial relation holds in the NS limit just by replacing the character with q-character. In this case, the Seiberg-Witten curve is not an algebraic curve any longer, but lifted to a quantum curve, which is a difference equation. For example, for A 1 quiver theory, it is given by 7 The r.h.s. is again a degree N monic polynomial in x, but the coefficients may depend on the equivariant parameter 1 . In particular, this difference equation, also called the quantum (deformed) Seiberg-Witten curve [51][52][53], is equivalent to (precisely speaking, the degenerate version of) the TQ-relation of the G Γ XXX/XXZ/XYZ spin chain for 4d/5d/6d 6 There should be the coupling constant dependence on the l.h.s. , but it is be now absorbed by redefinition of the y-variable. 7 We use the 5d notation (q1, q2) = (e 1 , e 2 ) and define q = q1q2 = e 1 + 2 . The unrefined limit is given by q1 = q −1 2 , namely q = 1. JHEP01(2018)025 gauge theory. Then, as a corollary, the SUSY vacuum (twisted F-term) condition of the 4d gauge theory in the NS limit is equivalent to the Bethe ansatz equation of the G Γ XXX spin chain. Recently it has been shown in the context of BPS/CFT correspondence [19][20][21] that a similar polynomial relation actually holds even with generic Ω-background parameters ( 1 , 2 ) by replacing the q-character for the NS limit with a further generalized character, called the qq-character. For A 1 quiver, in the 5d notation, it is given by 8 The qq-character has a gauge theoretical definition due to the invariance under the deformed Weyl reflection, which is called the iWeyl reflection, reflecting the non-perturbative aspects of the instanton moduli space. This qq-character relation is interpreted as a (nonperturbative version of) Ward identity or Schwinger-Dyson equation since it gives a relation between correlation functions in quiver gauge theory. The y-function, which is the gauge theory average of the Y-operator, has pole singularities. But such singularities are canceled in the combination of y(x) and y(q −1 x) −1 . In general, the iWeyl reflection shows how to cancel the pole singularity of the y-function. Y-operator Before discussing the topological string setup, let us mention more about the Y-operator to fix our notation. For generic quiver theory, we define Y-operator associated with each gauge node, Y i for i ∈ Γ 0 where Γ 0 is a set of nodes in the quiver Γ. Then, in the 5d (K-theoretic) notation, the contribution of the Y-operator for the configuration µ is [24,47] where we put SU(N i ) gauge group for the i-th node, and define The parameter Q i,α is the multiplicative (K-theoretic) Coulomb moduli. The Y-operator has several expressions where ∂ ± µ is the outer/inner boundary of the partition µ, where we can add/remove a box, and q j−1 1 q k−1 2 Q i,α is the q-content of the box (j, k) ∈ µ i,α . From this expression it 8 Precisely speaking, y(q −1 x) −1 means Y(q −1 x) −1 here. 
JHEP01(2018)025 is easy to see the asymptotic behavior of the Y-operator, which does not depend on the configuration µ, where we define the Coulomb moduli product We remark that Q i = 1 for SU(N i ) theory, but keep it for latter convenience. In addition, from the expression (3.8) we obtain Here O i,n is the contribution of the chiral ring operator for the configuration µ, which is given by the single trace operator with respect to the complex adjoint scalar field O i,n = Tr Φ n i in 4d, and the loop/surface operator wrapping the compactified S 1 /T 2 in 5d/6d. Actually, for the gauge theory on R 4 × T 2 , the variable x takes a value in x ∈Ť 2 wherě T 2 is a dual torus of T 2 [47]. Thus the Y-operator is interpreted as a codimension-4 defect operator, which plays a role as the generating function of the chiral ring operator. Let us introduce the elliptic Y-operator corresponding to 6d gauge theory, which is obtained by replacing the factors in (3.8) with the elliptic functions, 9 . (3.14) This is reduced to the operator in 5d gauge theory (3.8) in the limit Im τ → ∞. We also have a similar combinatorial expression to (3.10) in the elliptic theory, We will use this expression in the following sections. A 1 quiver Let us consider the Y-operator in A 1 quiver gauge theory. The Y-operator is a codimension-4 defect operator, and we here try to find its realization using the lower codimension surface defects. Here we give the prescription: JHEP01(2018)025 1. Consider the geometric transition so that the brane and the anti-brane emerge, and tune the distance between these branes. U(1) theory For simplicity let us first consider the Abelian gauge theory. Comparing the Yoperator (3.15) with the contribution of the defect insertion shown in (2.24), it turns out to be a half of the Y-operator. Thus we can construct the Y-operator by merging two surface operators with respect to the q-brane and anti-q-brane, corresponding to the geometric transition shown in figure 5. Now the dashed lines on the right and on the left denote the q-brane and anti-q-brane, respectively. We remark that the coupling constant is given by q −1 for the anti-q-brane instead of q, since the sign of the string coupling is opposite to the ordinary one [41], which also corresponds to applying the negative integer to (2.1). In addition, the most right panel of figure 5 shows that two D3-branes are extended to the opposite directions from the centered NS5-brane, and this is consistent with the brane configuration of the supergroup Chern-Simons theory [54], which is also similar to the ABJ(M) model [55,56]. The partition function corresponding to figure 5 is Z M =2,N =3 defined in (2.6). For this partition function, by setting where we shift Q where Q 1 is the multiplicative Coulomb moduli of U(1) theory. Thus the partition function Z 2,3 gives rise to the average of the Y-operator This average is defined with respect to the partition function Z 2,1 , which is the 6d U(1) where the parametersQ f,1 , and Q (s) 1 correspond to the gauge coupling and the (anti)fundamental mass, respectively. We remark that we have to multiply the factor θ 1 (Q x ) to obtain a precise agreement with the definition of Y-operator [26] because the µ-independent factor cannot be fixed in the current formalism. We can also consider the following geometric transition, corresponding to the partition function Z 2,3 as well. 
This configuration corresponds to the parametrization given by and define In this case the contribution of the Lagrange submanifolds reads . (3.23) JHEP01(2018)025 However this naive expression does not work. We have to shift the argument in the numerator as discussed in section 2.3, to obtain a consistent result, Under the identification Q x = Q 1 /x, this configuration gives rise to the Y-operator inverse by multiplying a factor θ 1 (qQ x ) −1 , . (3.25) Thus the partition function Z 2,3 under the parametrization (3.21) leads to the average of the Y-operator inverse Although the Y-operator and its inverse themselves have pole singularities, we can construct a regular function using these two operators, as discussed in section 3.1. In this case, the fundamental qq-character of A 1 quiver, which has no singularity, is given by the average of the T-operator defined with the gauge coupling q =Q f,1 and the matter factor The average is taken with respect to the 6d U(1) Nekrasov function (3.20) as before. This shows that the T-operator average is given by the qq-character discussed in section 3.1, and its regularity is proven using the iWeyl reflection We provide a proof of the regularity of this qq-character in appendix B. We remark that, comparing with (3.7), we have additional factors q and P(x) in this case. The former one can be absorbed by redefinition of the Y-operator Y → q 1 2 Y, and the latter is due to the (anti)fundamental matters, which is necessary for gauge/modular anomaly cancellation in 6d gauge theory. The Y-operator and its inverse Y −1 correspond to the brane insertion to the right and left NS5-branes, respectively, as shown in figures 5 and 6. These are all the possibilities for the brane insertion because there are only two NS5-branes for A 1 quiver theory where the right and left branes are connected by a suspended D5-brane. On the other hand, as mentioned in section 3.1, the qq-character is generated by the iWeyl reflection (3.29) converting the Y-operator to its inverse, Y(x) → Y(q −1 x) −1 . The iWeyl reflection is a consequence of creation/annihilation of instantons [19], which is a fluctuation on the suspended brane. Since the fluctuation affects the branes on the both sides, the brane insertion on the right is transferred to the left through the iWeyl reflection. SU(N ) theory One can easily generalize this result to the non-Abelian case. Let us consider the following geometric transition corresponding to SU(N ) theory with the insertion (figure 7). In this case we have two possible brane insertion to the right and left NS5-branes, which is actually the same as U(1) theory discussed in section 3.2.1. For the case 1, where the defect brane is inserted to the right NS5-brane, we obtain the Y-operator under the parametrization where we define N -tuple partition µ = (µ 1 , µ 2 , . . . , µ N ), and the µ-independent factor The operator average is now taken with respect to 6d SU(N ) N f = 2N Nekrasov function where we define the total instanton number | µ| = N a=1 |µ a |. Imposing the condition Q (1) i+1 , the Coulomb moduli parameter in this SU(N ) Nekrasov function is related to that defined in (3.31b) asQ Similarly we obtain the Y-operator inverse Y −1 from the case 2 with the defect brane inserted to the left. 
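For reference, the regular combination invoked in the next paragraph, namely the fundamental A_1 qq-character of (3.27)/(3.35), is commonly written as

T(x) = Y(x) + 𝔮 P(x) Y(q^{-1} x)^{-1} ,   q = q_1 q_2 ,

where 𝔮 is the gauge coupling (identified with Q_f below) and P(x) collects the (anti)fundamental matter contributions. This is a standard form quoted under the stated conventions rather than a verbatim reproduction of the original equations; its pole cancellation between the two terms is the content of the iWeyl reflection Y(x) → Y(q^{-1}x)^{-1} used below.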
The Y-operator and its inverse have pole singularities as before, but we can use essentially the same combination as (3.7) to obtain a regular function, which is the qq-character where the coupling constant and the (anti)fundamental contribution are now given by q = Q f , and One can show the regularity of the qq-character (the T-operator average) in a similar way to U(1) theory, using the iWeyl reflection (3.29). We remark that the expression of the qq-character for SU(N ) theory (3.35) coincides with that for U(1) theory (3.7) apart from the matter factor P(x). The qq-character provides a universal relation, which does not depend on the gauge group rank, but does only on the quiver structure. Higher qq-character The Seiberg-Witten curve and its quantizations for Γ-quiver theory are described using the fundamental (q-and qq-)characters of G Γ -group. In addition, we can consider the higherrepresentation qq-character, which plays a role to determine the OPE of the generating currents of quiver W-algebras [25]. In this case, we have to consider several Y-operators at JHEP01(2018)025 case 1 case 2 case 3 case 4 Figure 8. In this geometric transition we obtain the T-operator which consists of two Y-operators for A 1 quiver. We set the Kähler parameters in the blue and red parts. the same time, and construct a regular function which is invariant under the iWeyl reflection. Let us demonstrate how to treat multiple Y-operators in U(1) theory for simplicity. We start with the web diagram shown in figure 8. In this case we tune the following parameters to obtain two Y-operators, The parameters (3.37a) and (3.37b) correspond to the blue brane and the red brane in figure 8, respectively. We show how to set the parameter in order to realize the brane configuration in each case: JHEP01(2018)025 In the cases 2, 3, 4, we have to perform the q 1 q 2 -shift as before, where we define Then the partition function Z 2,5 gives rise to the two-point function of the Y-operator, by multiplying the µ-independent factor, where the average is taken with respect to the U(1) Nekrasov function (3.20). Then the average of the T-operator defined yields the qq-character of the degree-2 symmetric representation for A 1 quiver, and its regularity is again shown using the iWeyl reflection (3.29). Now the S-factor is defined [26] S(x) = θ 1 (q 1 x)θ 1 (q 2 x) θ 1 (qx)θ 1 (x) (3.42) and the matter factor P(x) is the same as (3.28). This qq-character is regular even in the collision limit x 2 → x 1 , involving a derivative term, which is a specific feature to the qqcharacter [19]. In this limit, the cycle between the blue and red ones shrinks in figure 8. We show the proof of the regularity in appendix B. We remark that we put the µ-independent factors S(x) and P(x) to define the T-operator because it's a matter of the normalization of the partition function. In general, the n-point function of the Y-operator for SU(N ) theory is obtained from the partition function Z 2,N +2n with 2 n possible brane insertions, , . . . (3.43) We can construct the qq-character of the degree-n representation R n = · · · n for A 1 quiver by summing up all the possible n-point functions of the Y-operator [19,25,26], with a suitable S-factor inserted, (3.44) A 2 quiver Next we consider the A 2 quiver gauge theory to examine the qq-character using the refined geometric transition. 
As mentioned in section 3.1, the Seiberg-Witten curve and its quantization are associated with the fundamental representation character of G Γ -group for Γ-quiver gauge theory. Thus in this case it is deeply related to the representation theory of SU(3) group. Since the qq-character generated by the iWeyl reflection does not depend on the gauge group rank, let us focus on the Abelian A 2 quiver theory, U(1) × U(1), for simplicity. We have three possible ways to insert the defect brane as shown in figure 9. Case 1. We consider the defect brane inserted to the right-most NS5-brane. In this case, the calculation is essentially the same as that for A 1 quiver shown in figure 5. We apply the following configuration JHEP01(2018)025 with the Coulomb moduli parameter Comparing with the Y-operator definition (3.15), the contribution of the defect brane leads to Y 1,µ 2 (x) by multiplying the factor θ 1 (Q 1,1 /x). Thus the partition function Z 3,3 gives rise to the average of Y 1 (x) under the parametrization (3.45): where the operator average is taken with respect to 6d U(1) × U(1) Nekrasov function where we define the gauge couplingsQ f,1,2 and the Young diagrams µ 1,2 as follows, Case 2. In this case, the defect brane is inserted to the middle brane. This configuration corresponds to the following parametrization and two Coulomb moduli parameters defined We remark that the difference between Q 1,1 and Q 1,2 is given by the factor Q 1 =: Q m , which is interpreted as the bifundamental mass parameter, because such a bifundamental mass can be absorbed by the shift of U(1) Coulomb moduli [24]. In this paper we do not explicitly write the bifundamental mass parameter. In this case, the contribution of the Lagrange submanifolds reads (3.53) JHEP01(2018)025 In order to obtain a consistent result, we have to shift the parameters of the numerator in the second factor, as discussed in section 2.3, Multiplying the µ-independent factors, θ 1 (Q 2,1 /x) and θ 1 (qQ 1,1 /x) −1 , the µ 1 -and µ 2contributions are written as Y 2 (x) and Y −1 1 (q −1 x), respectively. Thus the partition function Z 3,3 becomes the average of the Y-operator ratio, by tuning the parameters as (3.51), The average is again taken with respect to the U(1) × U(1) Nekrasov function (3.48). Case 3. The remaining situation is that the defect brane is inserted to the left-most brane. In this case, the calculation is essentially the same as figure 6 for A 1 quiver theory. Applying the parametrization with a suitable q 1 q 2 -shift of the arguments to be consistent with the geometric transition, the partition function Z 3,3 yields qq-characters. Now we can construct the qq-character using all the possible brane insertions. The qq-character of the fundamental representation for A 2 quiver theory, denoted by 3, is given by the T-operator average, where the coupling constants are given by q 1 =Q f,1 and q 2 =Q f,2 , and the matter factors are defined Although each factor in (3.58) has pole singularities as before, the qq-character itself is a regular entire function in x, as shown in appendix B. The local pole cancellation is performed by the iWeyl reflection For A 2 quiver, we have another representation, which is the anti-fundamental representation denoted by3. 
The corresponding qq-character is generated by applying the iWeyl reflection (3.60) to the highest weight Y 2 (x), We remark that the operator Y 2 (x) itself cannot be constructed by a single insertion of the defect brane, but is realized as a composite operator: In other words, the operator Y 2 (x) is obtained by two insertions of the defect branes to the right-most and the middle branes (see the case 1 in figure 10). Similarly the remaining terms in (3.61) are obtained as Thus the qq-character of3 for A 2 quiver is given by summing all the possible configurations with two defect branes shown in figure 10. Generic quiver The argument discussed above is extended to generic (simply-laced) quiver gauge theory. A r quiver For A r quiver, there exist r weights, associated with the gauge nodes, and the fundamental representation is obtained from each (highest) weight, which is the antisymmetric representation of SU(r + 1). The qq-character of the degree n antisymmetric representation R n (n = 1, . . . , r) is given by [19] χ R n (A r ; q 1 , q 2 ) = T n (x) where q n is the gauge coupling of the n-th gauge node, and we define We can see that the qq-character is generated by the iWeyl reflection In this case there are r + 1 NS5-branes, so that r + 1 possibilities for the brane insertion. Indeed the factor Λ i (x) defined as (3.66) corresponds to the insertion of single defect brane. Thus the qq-character of R n is realized as the summation of all the possible configurations with n brane insertions, since it involves a product of n Λ-factors as shown in (3.65). DE quiver Let us then discuss DE quiver theory. In this case, it is not straightforwardly possible to obtain the toric Calabi-Yau threefold reproducing DE quiver gauge theory, due to the trivalent node in the quiver. Recently it has been proposed that DE-type gauge theory can be constructed from the (non-toric) Calabi-Yau geometry [57], and thus it is expected that we can discuss the qq-character by inserting the defect brane to such a DE-type configuration. The simplest non-trivial DE-type theory is D 4 quiver. In this case there are four fundamental representations corresponding to the nodes in D 4 quiver, three 8-dimensional and one 28-dimensional representations. The three 8-representations are essentially equivalent JHEP01(2018)025 to each other, which is so-called the SO(8) triality. In particular, for the 28-representation, the corresponding qq-character involves a derivative term, due to the collision limit of the Y-operators [19,27], corresponding to the vanishing cycle as discussed in section 3.2.3, and it would be interesting to study the geometric meaning of the collision limit. Beyond ADE quiver For ADE quiver, all the fundamental representations are finite dimensional, and thus the (qq-)character is given by a finite (elementary symmetric) polynomial of {Λ i }, which is a ratio of the Y-operator (3.66). In general, we can consider the quiver, which does not correspond to the finite ADE-type Dynkin diagram, namely affine and hyperbolic quivers. Although, in such a case, the fundamental representations become infinite dimensional, we can discuss the qq-character generated by the iWeyl reflection. For example, the affine quiver r is realized using the infinitely-long linear quiver A ∞ by imposing periodicity. Thus there are infinitely many possibilities for the brane insertion. This is a geometric interpretation of the infinite sum in the affine qq-character. 
For the simplest case 0 corresponding to 4d N = 2 * (5d N = 1 * ) theory, the qq-character is described as a summation over the partition [19,25]. Summary and discussion In this paper, we have proposed the prescription of the geometric transition in the refined topological string enforced along the preferred direction. In order to obtain a proper contribution of the brane insertion, in addition to the specialization of the Kähler moduli, we have to shift the variable by hand to satisfy consistency, which becomes trivial in the unrefined limit. We then have applied this prescription to the codimension-4 defect operator, called the Y-operator as its stringy realization. The pole singularity of the Yoperator is cancelled out in a proper combination of the Y-operators, which is given by the qq-character. We have examined the pole cancellation in the qq-character as a nontrivial check of our prescription of the refined geometric transition. Let us finally provide several open questions which we would like to resolve. As commented, the refined large N duality between the resolved and deformed conifold has been clarified in terms of the refined Chern-Simons theory [15]. Nevertheless, the corresponding brane configuration is not clear from their argument, and as the first issue, we would pursue that our geometric transition may give a actual brane picture compatible with their result. Second, it may be possible that our prescription in section 2.2 is generalized so as to incorporate the labels (p, q) of the fivebrane charges, as mentioned there. The third thing is concerned with the exact definition of the refined version of the open topological vertex formalism. As far as we know, it is not yet established, and thus, the direct computation of the open string amplitude respecting the Lagrangian brane on the inner brane is still a nontrivial problem. In the unrefined case, the Schur function is suitable to capture the holonomy of D-branes corresponding to the insertion of the Lagrangian brane. It is expected from the results of [15] that the Schur function would be replaced with the Macdonald function in the refined case as done for the refined topological vertex in [10]. JHEP01(2018)025 Combining the expression obtained via the refined geometric transition, we hope that the successful direct approach would be reported in the near future. We also hold some technical and qualitative issues on the Y-operator. In the topological string approach, there is an ambiguity of the normalization. Actually the Y-operator and the qq-character have factors independent of the partition µ, and we need to add such a factor by hand to obtain a proper result. It would be interesting to clarify a systematic way to discuss the µ-independent factor in the framework of refined topological string. The brane configuration of the Y-operator proposed in this paper is due to the comparison with the gauge theory definition. The current construction of the codimension-4 Y-operator uses the codimension-2 surface defects with the q-brane and anti-q-brane. Such a relation between defect operators with different codimensions is not yet obvious. One possible interpretation is the tachyon condensation, which could be related to the (refined) supergroup Chern-Simons theory [41]. For example, it is interesting to compare the Y-operator contribution with the partition function of the refined U(1|1) Chern-Simons theory [58]. More detailed analysis is necessary for understanding its geometric meaning in refined theory. Figure 11. 
The Young diagram and its parameters. JHEP01(2018)025 The first one in (A.2) is the total number of boxes of µ. The partitions {µ i } and {µ t j } concretely characterize the instanton partition function, which can be removed by using In the paper, these are implicitly applied as expressing the Y-operator in a convenient fashion from the general form obtained via the refined geometric transition in section 2.2. Theta function. The topological string amplitude for the compactified web diagram of our interest is nicely expressed in terms of the theta function, where a variable is z ∈ C, and τ ∈ C is a constant with Im(τ ) > 0. Equivalently, the theta function is frequently used in the multiplicative form, where x := e 2πiz , q := e 2πiτ , and the q-Pochhammer symbol (q-shifted factorial) is defined by JHEP01(2018)025 In addition, (x; q) ∞ := lim n→∞ (x; q) n with |q| < 1 and we use the shorthand notation (x 1 , x 2 , · · · , x r ; q) n := (x 1 ; q) n (x 2 ; q) n · · · (x r ; q) n . (A.7) Note that (A.4) and (A.5) are nothing but the Jacobi's triple product identity. This theta function actually has the simple inversion property and satisfies the q-difference equation, We further give another type of the theta function defined by This theta function is simply translated into θ 1 (x; q) via the Jacobi's triple product identity, We can immediately verify that this theta function actually satisfies the q-difference equations, where we define It will be turned out that this limiting formula is actually the operation of the dimensional reduction from 6d to 5d at the level of the partition function. Elliptic gamma function. The elliptic gamma function is defined by with |p|, |q| < 1, and x ∈ C * . For specific values of x, the elliptic gamma function get simplified as JHEP01(2018)025 The certain combinations of elliptic gamma function are related to the theta function defined above as follows: 19) because p and q are encoded symmetrically into the elliptic gamma function, in addition, we find the difference equations involving the theta function, for n, m ∈ Z. Note that the first line represents the finite difference equations of the first order [59] that can lead to the second line, in other words, the last relation can be derived in the recursive manner from the first one. Furthermore, there are the limiting relations [59], Moreover, we have the reflection identity, The usage of the elliptic gamma function is underlying a nontrivial property linking its specific ratio to the theta function involving Young diagrams [60] (see also [61]), Note that it has been reported in [4] that there exists a similar formula involving the gamma function for the Nekrasov function for the 4d theory. Further, the 5d Nekrasov function is similarly written in terms of the q-gamma function. A.2 Refined topological vertex In this paper, we rely on the Iqbal-Kozçaz-Vafa formalism [11] for the refined topological vertex C λµν (t, q) given by JHEP01(2018)025 where s λ/µ (x) is the skew Schur function and The functionZ ν (t, q) is essentially the Macdonald function P ν (x; q, t) [62] We do not go further details of the refined topological vertex and trace back the calculation of the partition function (2.6) that has been accomplished in [43]. Note that the parameters (q, t −1 ) are replaced in the main context of the paper with (q 1 , q 2 ), respectively. We would like to comment on the fact that this partition function is absolutely reproduced by using the Awata-Kanno formalism for C λµν (t, q) [10,63]. 
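The multiplicative theta function and q-Pochhammer symbol used throughout this appendix lend themselves to a quick numerical check. The sketch below (Python, truncated products) uses the convention θ_p(x) = (x; p)_∞ (p/x; p)_∞, which is one common choice and differs from θ_1 above by an x-independent prefactor, so only ratios and the functional identities are meaningful; it verifies the inversion property and the q-difference equation mentioned above.

```python
# Numerical check of a multiplicative theta function built from q-Pochhammer symbols.
# Convention here: theta_p(x) = (x; p)_inf * (p/x; p)_inf, valid for |p| < 1.
# (The paper's theta_1 differs from this by an x-independent prefactor such as (p; p)_inf.)

def qpochhammer(x, q, nterms=200):
    """Truncated q-Pochhammer symbol (x; q)_inf for |q| < 1."""
    prod = 1.0 + 0.0j
    for n in range(nterms):
        prod *= 1.0 - x * q**n
    return prod

def theta(x, p, nterms=200):
    """theta_p(x) = (x; p)_inf (p/x; p)_inf."""
    return qpochhammer(x, p, nterms) * qpochhammer(p / x, p, nterms)

p = 0.3 + 0.1j          # plays the role of Q_tau = e^{2 pi i tau}, |p| < 1
x = 1.7 - 0.4j

# Inversion property: theta_p(1/x) = -x^{-1} theta_p(x)
print(abs(theta(1.0 / x, p) - (-1.0 / x) * theta(x, p)))   # ~ machine precision

# q-difference equation: theta_p(p x) = -x^{-1} theta_p(x)
print(abs(theta(p * x, p) - (-1.0 / x) * theta(x, p)))     # ~ machine precision
```

In the limit p → 0 (Im τ → ∞) this θ_p(x) reduces to 1 − x, the multiplicative counterpart of the dimensional-reduction limit from 6d to 5d mentioned above.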
B Regularity In this appendix we show the regularity of the qq-character in the case of A 1 , A 2 quiver with the single Y-operator, and A 1 quiver with two Y-operators. The strategy is as follows: 1. We write the partition function and the Y-operator to the infinite product form. 2. We calculate the ratio Z µ+1 and the product Y µ Y µ+1 , where µ + 1 denotes the Young diagram that we add the one box to some row µ I , namely µ I → µ I + 1. 3. Then, we find that the ratio of the partition functions relates to the product of the Y-operators. We will demonstrate these steps. Note that we consider the regularity for the variable Q x instead of the x-variable while we focus on U(1) theory. B.1 A 1 quiver B.1.1 U(1) gauge theory with single Y-operator To begin with, let us consider the simplest case. By using the formula in appendix A, we write the partition function and the Y-operator to the infinite product form as follows, JHEP01(2018)025 where we denote the elliptic gamma function Γ e (x; q −1 2 , Q τ ) =: Γ e (x) for simplicity, and Q x = Q 1 /x. Note that the µ-independent factors are interpreted as the one-loop contribution, and the remaining ones are the full partition function. By using the reflection of the theta function θ 1 (x) = −θ 1 (x −1 ), the Y-operator can also be written as . (B.3) This coincides with the definition in [26], up to a trivial factor. Let us consider the ratio Z µ+1 and the product Y µ (q −1 x)Y µ+1 (x). After some calculations, we have where µ + 1 =: µ denotes the Young diagram that we add the one box to some row µ I , namely µ I → µ I + 1, as we defined in the beginning of this section. Then by using the relation This means that the Y-operators Y µ (x) and Y µ (q −1 x) −1 have the poles, but the summation is regular since these poles cancelled with each other. Therefore we obtain the T-operator average for U(1) theory (3.27), which is regular for arbitrary Q x , by the summation over the partition µ. B.1.2 U(1) gauge theory with two Y-operators In this subsection we show the regularity for the U(1) theory with the two Y-operators. The calculation is almost done in the previous subsection. In this case we have to rewrite the factor S(x) in terms of the Y-operator. This factor can be written as . (B.9) JHEP01(2018)025 We remark Q x 1 = Q 1 /x 1 and Q x 2 = Q 1 /x 2 . Also we show the ration of the Y-operator, Y µ+1 (x) Y µ (x) = θ(Q x q µ I +1 2 q I−1 1 )θ(Q x q µ I 2 q I 1 ) θ(Q x q µ I +1 2 q I 1 )θ(Q x q µ I 2 q I−1 1 ) , (B.10) . (B.11) These two expressions are related each other, (B.12) One can obtain the similar equations for Q x 2 . Then, according to the discussion in the appendix B.1.1, we can show the regularity for the arbitrary Q x 1 and Q x 2 . However, when we take the collision limit Q x 1 = Q x 2 , the S-factor might have the pole. In order to consider this matter, let us consider the following case, and take the limit w → 1. Then, by using the following formula we have One can show that this coefficient c(q 1 , q 2 ) is regular. Therefore, even if Q x 1 = Q x 2 , the expectation value of the T-operator is regular. B.2 A 2 quiver Let us consider the regularity for the T-operator average in A 2 quiver theory. Again by using some formulas in appendix A, we obtain 1 )Γ e (Q The product of Y-operators is given by (B.5). Then, we find that Note that the variable x is given by (3.52). Therefore the average T 1 (x) is regular for the arbitrary x.
2023-01-21T14:26:28.252Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "e5aad43adc94faa6a8fa6c835b2e3813811b96f2", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP01(2018)025.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "e5aad43adc94faa6a8fa6c835b2e3813811b96f2", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [] }
2039873
pes2o/s2orc
v3-fos-license
Molecular analysis of bovine viral diarrhoea virus isolates from South Africa KABONGO, N., BAULE, C. & VAN VUUREN, M. 2003. Molecular analysis of bovine viral diarrhoea virus isolates from South Africa. Onderstepoort Journal of Veterinary Research, 70:273–279 The presence of bovine viral diarrhoea virus in South Africa has been confirmed by several serological surveys. However, little is known about its biological properties. Twenty five isolates obtained by isolation in tissue culture and detected by means of the antigen capture ELISA from clinically sick cattle and from foetal calf serum in South Africa were characterized on the basis of analysis of the 5’ non-translated (NTR) region of the genome. A reverse-transcription polymerase chain reaction (RT-PCR) was used to amplify specific sequences from the 5’NTR of the genome. The oligonucleotide primers corresponding to positions 105–125 and 399–378, respectively, in the sequence of BVDV strain NADL were used to generate the PCR products. Both strands were sequenced directly with these primers and fluorescence-labelled dideoxynucleotides in an automated nucleic acid sequencer. Reference strains of pestiviruses [(BVDV type I, BVDV type II, border disease virus (BDV) and hog cholera virus (HCV)] and isolates from a previous investigation on BVDV in southern Africa were included for comparative purposes. All the BVDV strains obtained during this study belong to subgroups of BVDV genotype I. No association could be demonstrated between the geographic origin of the isolates. A number of isolates formed another branch separate from the existing branches Ia, Ib and Ic. These findings suggest that extensive genetic diversity can be found within BVDV type I isolates from southern Africa. Isolates that group with the classical BVDV type I strains, particularly of American origin, coexist with variants that appear to represent a local genetic pool and or variants evolving from the classical strains. INTRODUCTION Bovine viral diarrhoea virus (BVDV) is a pathogen of cattle with a worldwide distribution.It causes a variety of prenatal and postnatal clinical syndromes.Together with the viruses of border disease (BD) and hog cholera (HC) it forms the genus Pestivirus within the family Flaviviridae (Horzinek 1991;Collett 1992). Two genotypes of BVDV have been described on the basis of the 5'NTR analysis, namely BVDV type I, which is subdivided into subgroup Ia and represented by the reference strain NADL, and subgroup Ib, represented by the reference strain Osloss.BVDV type II has strain 890 as reference and comprises especially isolates associated with a new form of acute infection in cattle, the haemorrhagic syndrome, originally described in North America (Pellerin et al. 1994).BVDV type II has also been shown to include ovine isolates (Vilcek, Nettleton, Paton & Belák 1997;Ridpath, Neil, Frey, Landgraft & Thiel 2000); and, recently, to be present outside North America (Canal, Strasser, Hertig, Masuda & Peterhans 1998;Dean & Leyh 1999;Flores, Weiblen, Gil, Tobias, Lima, Garcez & Botton 2000). The natural transmission of pestiviruses which are not highly host specific between hostspecies has prompted a new classification of the members of the Pestivirus genus.The suggested classification takes into account the antigenic and genomic relationship rather than the species of origin (Becher et al. 1997;Sullivan, Chang & Akkina 1997;Vilcek et al. 
1997).Accordingly, the genus Pestivirus is divided into four genotypes: genotype 1 (Pestivirus type 1) that includes the present BVDV type I strains; genotype 2 (Pestivirus type 2) represents isolates of HCV; genotype 3 (Pestivirus type 3) includes sheep and pig isolates defined as "true BD" viruses and genotype 4 (Pestivirus 4) includes isolates of cattle and sheep currently defined as BVDV type II. The aim of this paper was to expand knowledge on the genetic characteristics of local isolates and those from the previous study by Baule et al. (1997). Specimens Specimens of blood, organs and lymphoid tissues from sick and dead cattle were obtained from feedlots, commercial beef farms and dairy farms or were submitted by private practitioners and feedlot consultants.Other specimens comprised cell lines (n = 3) submitted for testing for the presence of adventitious viruses, and pooled serum obtained from foetuses (n = 7) at an abattoir.Some specimens were tested in duplicate, which accounts for the total number of 117 (Table 1).Tissue filtrates of the original specimens or sera were inoculated either on Madin Darby Bovine Kidney (MDBK) line cells or primary and secondary cells calf foetal kidney cells (CFK).The viruses that were isolated, were identified by means of specific fluorescein-conjugated antisera and antigen capture ELISA tests. RT-P C R of the 5'NTR of the BVDV genome Total RNA was extracted from supernatants of infected cells, tissue homogenates and serum specimens, using TRIzol (Gibco, Life Technologies), according to the manufacturer's instructions.cDNA was synthesized by random priming with pdN6 (Amersham-Pharmacia, Uppsala, Sweden) using Moloney murine leukaemia virus reverse transcriptase (M-MLV RT) (Gibco, Life Technologies), as follows: 5 µl of total RNA were mixed with 0.02 U of pdN6 and 3 µl of ddH 2 O and denatured at 65°C for 10 min, then quickly chilled on ice.A reaction mix containing 4 µl of 5x 1 st strand buffer, 2 µl of 0.1 M DTT, 0.5 µl of each dNTP (10 mM each), 24 U of RNAse inhibitor (RNA guard, Amersham-Pharmacia) and 200 U of M-MLV RT was added.Synthesis was carried out at 37 °C for 90 min, followed by the inactivation of the enzyme at 95 °C for 5 min. A polymerase chain reaction (PCR) was used to amplify specific sequences from the 5'NTR of the genome.The oligonucleotide primers used were as follows (corresponding to positions 105-125 and 399-378, respectively, in the sequence of BVDV strain NADL): Forward -5'-AGCCATGCCCTTAGTAGGACT-3' Reverse -5'-ACTCCATGTGCCATGTACA-3' Amplification was carried out in a total volume of 50 µl containing 10 mM Tris-HCl, pH 9.0, 50 mM KCl, 1 µg/µl of BSA, 0.2 mM of each deoxynucleotide, 15 pmol of each primer, 2.5 mM MgCl 2 , 2.5% Formamide, 1 U of Taq DNA polymerase (Perkin Elmer-Cetus, Norwalk, CA, USA), and 5 µl of cDNA.The reaction mixes were overlaid with two drops of mineral oil.PCR cycles were as follows: 5 cycles of denaturation at 94°C for 45 s, annealing at 55°C for 45 s and extension at 72 °C for 1 min, followed by 30 cycles of denaturation at 94 °C for 45 s, annealing at 50°C for 45 s and extension at 72°C for 1 min.A final extension step at 72 °C for 7 min was included.Precautions to avoid contamination as described by Belák & Ballagi-Pordány (1993) were followed throughout the RT-PCR.The PCR products were visualized by ethidium bromide staining, after electrophoresis on 2 % agarose gels. 
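The primer coordinates quoted above determine the expected amplicon size directly, and simple sequence arithmetic provides a quick consistency check on the oligonucleotides. The following is a minimal illustration (Python, standard library only); the Wallace-rule melting temperatures are rough textbook estimates, not values reported in this study.

```python
# Quick checks on the 5'NTR primers (coordinates refer to BVDV strain NADL).
fwd = "AGCCATGCCCTTAGTAGGACT"   # forward primer, positions 105-125 (sense)
rev = "ACTCCATGTGCCATGTACA"     # reverse primer, anneals within positions 399-378 (antisense), per the text

def gc_fraction(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    # Wallace rule: Tm ~ 2*(A+T) + 4*(G+C), a rough estimate for short oligos.
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

amplicon_len = 399 - 105 + 1    # expected product size spanned by the primer coordinates
print("expected amplicon: %d bp" % amplicon_len)           # 295 bp
for name, seq in [("forward", fwd), ("reverse", rev)]:
    print(name, len(seq), "nt, GC %.0f%%, Tm ~%d C"
          % (100 * gc_fraction(seq), wallace_tm(seq)))
```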
Sequencing and sequence analysis The amplicons were purified using the QIAquick DNA purification kit (Qiagen), according to the manufacturer's instructions and spectrophotometrically quantified.Both strands were sequenced directly with the same primers used to generate the PCR products and fluorescence-labelled dideoxynucleotides in an automated nucleic acid sequencer (ABI PRISM 377).The primers were selected based on alignments of sequences of various pestiviruses (BVDV type I, CSFV and BDV).Highly conserved parts of the 5'NTR were used for the selection of primers.These primers have also been evaluated for the amplification of BVDV type II as well and were therefore suitable for the detection of all known pestiviruses. Nucleotide sequence editing, analysis and alignments were done using multiple programmes from the DNAstar package (DNASTAR Inc., Madison, Wi.).The phylogenetic analysis presented was completed following alignment of nucleotide sequences using the Megalign.Reference strains of pestiviruses, NADL-BVDV type I, subgroup Ia (American type), Osloss-BVDV type I, subgroup Ib (European type), 890-BVDV type II, BDV and HCV and isolates Ic from a previous investigation on BVDV in southern Africa (Baule et al. 1997) were included for comparative purposes.The criteria for assign-ment of genotype were based on sequence similarity as shown in the phylogenetic tree.Strains branching with or similarly to NADL are considered subgroup Ia, with Osloss subgroup Ib and so forth.The EMBL/Genbank/DDBJ for the nucleotide sequences corresponding accession numbers are: AF041040, M31182, M96751, M96687, L32885, L32888 and sequences selected from U97409-U97481.The phylogenetic tree was edited with the Deneba Canvas (5.0) graphic programme. RESULTS A total of 117 specimens were subjected to molecular characterization of which 25 were confirmed positive with PCR (Table 1).Eighteen isolates obtained by isolation in tissue culture and seven isolates detected in foetal calf sera by means of the antigen capture ELISA were confirmed as BVDV with PCR.Eight other specimens that included two sera, three buffy coats, two spleens and one lung gave inconclusive readings with the FA test in cell cultures.Two were confirmed negative and six yielded a weak band with PCR.They were not molecularly analyzed and were not included in the phylogenetic tree (Fig. 1). All of the strains were identified as BVDV I, either subgroups BVDV Ia (NADL-like) or BVDV Ib (Osloss-like) or BVDV I*. Table 2 shows isolates that were confirmed by PCR and the predominant clinical signs associated with them.Seven isolates obtained from 156 pooled serum specimens and three cells lines of unknown history were included under the heading "others", since no clinical syndrome could be ascribed to them. The phylogenetic assignment of these isolates, compared to reference strains of pestiviruses and to sequences from a previous investigation with BVDV isolates from southern Africa is shown in Fig. 1.The phylogenetic tree was generated based on a comparison of 245 nucleotide long sequences in the 5'NTR.The distances were calculated using the neighbor-joining method.The BVDV isolates listed in Fig. 1 were determined to be BVDV type I.The 25 isolates analyzed were phylogenetically discriminated as follows: two (ST22F/99, ST2G/99), segregated clearly as subgroup Ia; none was found under subgroup Ib; three (ST25G/99, ST23F/99, ST24G/99) were included in a cluster provisionally termed Ic (Baule et al. 
1997), whilst the remaining isolates formed a separate cluster named I*.
DISCUSSION
There was no relationship between the geographic origin, the nature of the clinical signs and the typing of the BVDV isolates. Animals from the North-West (NW), Free State (F), Gauteng (G) and Eastern Cape (EC) Provinces were infected with the same strain. This may inter alia be the result of the free movement of animals, the absence of closed herds or vaccination. Throughout South Africa, there is a diversity of farming systems from extensive to intensive, including closed herds where artificial insemination (AI) is used. Isolates were obtained from samples collected in feedlots, dairy herds and commercial beef farms in all provinces, indicating the ubiquitousness of BVDV in South Africa. The reverse-transcription PCR based on the 5'NTR of the virus genome and further sequencing enabled differentiation of BVDV genotypes and subgroups; this is of epidemiological importance and might be of value in control programmes. It has been reported that direct detection of the virus in clinical samples (serum or homogenized tissue specimens) by RT-PCR is often unsuccessful (El-Kholy, Bolin, Ridpath, Arab, Abou-Zeid, Hamman & Platt 1998). This might be due either to the presence of certain elements in the clinical specimens that are inhibitory to the reverse transcriptase or Taq polymerase enzymes, or to masking of the target template by proteins coagulated during extraction of nucleic acids from the clinical specimens.
(Table footnotes: * The enteric syndrome manifests as acute or chronic diarrhoea, and the respiratory syndrome as nasal discharge, respiratory distress, sneezing and coughing, while "others" includes cases of unknown history and those in which no clinical syndrome could be ascribed to the case. ** Identification of isolates: ncp: noncytopathic biotype, followed by S for South Africa, T for tropical diseases, the isolate ID and the area where it came from; the number after the province of origin, where applicable, represents the number of samples from the same sender in order of submission, which is followed by the year of isolation. The letters represent the province of origin: NW: North-West; F: Free State; EC: Eastern Cape; G: Gauteng.)
Six clinical samples from which virus had not been isolated showed a weak band with RT-PCR, although it was situated at the correct molecular weight position. These six specimens were not molecularly analyzed nor were they included in the phylogenetic tree. The results obtained with PCR were in agreement with those obtained by virus isolation in all the negative cases, except in seven out of 156 pooled sera that were negative for virus isolation after one passage but tested positive on antigen capture ELISA. This confirms the need for more than one passage before virus becomes detectable with the FA test. All the BVDV strains obtained during this study were ncp BVDV type I (subgroup Ia (NADL-like), cluster Ic or cluster I*), although Theodoris isolated cp BVDV in 1974. No association could be demonstrated between the geographic origin of the isolates and branch discrimination. The three groupings formed by the South African isolates (subgroup Ia, cluster Ic and the cluster provisionally called I*) included BVD viruses from different regions: F, G, NW and EC. It is worth noting their similarity to isolates of the BVDV cluster provisionally termed Ic in a previous investigation (Baule et al.
1997), which did not segregate with either the Ia or the Ib subgroups. The presence of isolates of this cluster in South Africa may reflect a local genetic subgroup that is spreading in the region, since genotype I shows intragenotypic diversity. This might have occurred because of cattle movement or the use of biologicals such as cell culture-derived vaccines. No BVDV type II isolates were found; however, the vaccine appears to be protective against both types I and II. A number of isolates, I* (n = 20), formed another branch separate from Ia, Ib or Ic. This branch was, however, distinct from the one defining a cluster preliminarily termed Id by Baule et al. (1997), and was found to comprise isolates particularly distinct from the Ia and Ib subgroups. These findings suggest that extensive genetic diversity can be found within BVDV type I isolates from southern Africa. Isolates that group with the classical BVDV type I strains, particularly of American origin, coexist with variants that appear to represent a local genetic pool and/or variants evolving from the classical strains. A clustering of isolates with regard to farms of origin was not observed with the isolates investigated, as has been reported by others (Paton 1995; Vilcek et al. 1999). Differences in farming practices, i.e. extensive farming versus intensive farming, may contribute to this difference in virus ecology. Closed herds and restricted contact among cattle may be a determinant factor in establishing BVDV in a herd-specific manner. Most herds from which the samples originated were managed extensively.
TABLE 1 BVDV isolates obtained by virus isolation, ELISA and confirmed with PCR
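To illustrate the kind of 5'NTR comparison that underlies the grouping described in the sequence-analysis and discussion sections above, the sketch below computes uncorrected pairwise distances over an aligned block and assigns each isolate to its nearest reference strain. It is a standard-library illustration only: the sequence strings are hypothetical placeholders, not data from this study, and the actual analysis was performed with the DNASTAR package using neighbour-joining.

```python
# Nearest-reference assignment from an aligned 5'NTR block (placeholder data).
# p-distance = proportion of differing sites, ignoring gaps and ambiguous bases.

def p_distance(a, b):
    pairs = [(x, y) for x, y in zip(a, b) if x in "ACGT" and y in "ACGT"]
    if not pairs:
        return float("nan")
    return sum(x != y for x, y in pairs) / len(pairs)

# Aligned reference fragments (hypothetical strings standing in for 245-nt alignments).
references = {
    "NADL (BVDV Ia)":   "ATGCCCTTAGTAGGACTAGCA",
    "Osloss (BVDV Ib)": "ATGCCCTTAGCAGGACTAACA",
    "890 (BVDV II)":    "ATGTCCTTGGCAGGATTAACA",
}

isolates = {
    "ST22F/99": "ATGCCCTTAGTAGGACTAGCA",
    "ST25G/99": "ATGCCCTTAGCAGGACTAGCA",
}

for name, seq in isolates.items():
    dists = {ref: p_distance(seq, rseq) for ref, rseq in references.items()}
    best = min(dists, key=dists.get)
    print(name, "closest to", best, "at p-distance %.3f" % dists[best])
```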
2017-05-10T06:50:16.468Z
2003-11-08T00:00:00.000
{ "year": 2003, "sha1": "f56f7d39a3905e63bab1aaa1478d98867320663f", "oa_license": "CCBY", "oa_url": "https://ojvr.org/index.php/ojvr/article/download/292/271", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f56f7d39a3905e63bab1aaa1478d98867320663f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
136364821
pes2o/s2orc
v3-fos-license
Dielectric-function analysis of metals for plasmonic-device application We study the potential of various metals (Pt, Al, Cu and Ni) as plasmonic material used for devices by analyzing their complex permittivity and comparing with other metals. Metals were characterized by using high-resolution spectroscopic ellipsometry covering energy range of 0.5 to 6.5 eV. In fitting process, instead using Drude model, we used the combination of Lorentz model to describe optical properties of metals. The results show that each metal has unique different features of ε1 and ε2 in range of far-infrared to vacuum-ultraviolet. Also, the loss by interband transition is observable for some metals. Furthermore, the plasmonic quality-factor, which are related to electric-field enhancement and heat production generated by surface plasmon, of metal nanoparticle have been calculated and we found the optimum region of device application for each metal. From this study, Cu is promising metal working in near-infrared to visible area potentially to substitute noble Ag and Au. On the other hand, Al is the best metal to be applied as plasmonic device working in ultraviolet region. Moreover, enhancement of plasmonic quality-factor by changing geometry and environment of metal is also discussed. Our studies give an alternative of fundamental perspective for plasmonic-device development especially for energy-harvesting purposes. Introduction Plasmonic materials are recently used as complementing materials for various and advanced devices for many field application [1][2]. By definition, plasmonic materials are substances that have negative values of real-complex permittivity ε 1 within certain energy range. Metals are still prime in practice as plasmonic materials because of their high electron mobility in broad energy spectrum. They can be applied for optoelectronic devices as part of sensors, data storage, imaging, and waveguide systems [3][4][5][6]. Specially, plasmonic materials can be utilized as part of heterostructure system for energy-harvesting purposes, for solar-cell and water-splitting devices [7][8][9][10][11]. Devices based on plasmonic material work relied on surface-plasmon characteristic, which is an electron oscillation, at metal-insulator boundary. The electron oscillation is caused by interaction between collective electrons and incident electromagnetic (EM) wave, which produces intense heat and electric field around metal's surface [12]. Surface-plasmon behavior is divided into two-type oscillation based on its host medium in metal's structure. Surface plasmon emerges on metal slab that borders with insulator material and creates evanescent wave called surface plasmon polariton (SPP) along metal-insulator interface. This SPP, which is decay quickly, enhances considerably heat and electric field alongside metal's surface [12]. Otherwise, localized surface plasmon resonance (LSPR) is aroused when subwavelength EM interacts free electrons in metal nanoparticle structure. The collective electrons oscillate with the same frequency as that of the incident EM wave and it creates heat inside nanoparticle and electric field outside nanoparticle. During resonance condition, LSPR can produce tremendous heat and electric field. In this LSPR mode, there is no SPP wave due to existence of an effective restoring force produced by the small curved nanoparticle against driven electrons [12]. 
The properties of two-type surface-plasmon modes that are the energy resonance and the level of enhanced heat and electric field extremely depend on material type, geometry, size and surrounding medium of metals, and all of these factors are related with metal's dielectric function [13][14][15][16][17][18][19]. Plasmonic materials for energy-harvesting devices Energy-harvesting devices based on plasmonic material exploit SPP or LSPR features coupled with other materials (e.g. semiconductor thin film) to enhance the photocatalytic process of entire system to improve its energy-conversion efficiency [7][8][9][10][11]20]. For solar-cell and water-splitting devices, various metal nanoparticles that utilize LSPR's attributes are commonly used to complement oxide-metal semiconductor in order to improve the efficiency of devices. Metal nanostructures are located on top of metal-oxide semiconductor and it arranges Schottky-barrier junction (figure 1) [11,[21][22]. Gold (Au) and silver (Ag) are plasmonic materials which have been usually employed by many researchers in solar-energy-converter devices on account of their high plasmonic response in visible energy spectrum [11,[21][22]. Unfortunately, Au and Ag are high-cost materials and as noble metals they can be oxidized easily which decreases the capability of plasmonic devices, and these problems make the system inefficient for energy conversion. The enhancement of producing efficient energy in devices based on plasmonic materials is relied on plasmon-semiconductor-coupling mechanism which utilizes LSPR's properties to produce electron-hole pairs (charge separation) [23][24][25]. There are two-type mechanisms of plasmon-semiconductor coupling in solar-cell and water-splitting devices: first, plasmon-induced hot-electron generation (PIHE), second, plasmon-induced-energy transfer (PIET) [23][24][25]. As mentioned before, when EM wave interacts electrons in nanoparticle, LSPR is produced. After that, scattering and damping process of electron oscillation in metal nanoparticle produce heat inside and near-electric-field enhancement outside metal. PIHE mechanism performs when heat produced by surface plasmon transfers its energy to electrons in metal, which it generates hot electrons to surpass Schottky-barrier energy and migrate through conduction band of semiconductor. Then, hole is existing in metal and charge separation is emerged in metal-semiconductor system resulting an electric current. Another aspect that is crucial for this mechanism is the existence of electron-donor solution or hole-transporting material (HTM) surrounding around metal. HTM solution gives its electrons to metal to keeping the charge balance and sustaining the electric current (figure 1) [24]. For PIET mechanism, the energy from near-electric-field enhancement outside metal is harnessed to excite electrons of semiconductor in valence band moving through conduction band, and the energy transferred by PIET to electrons have to be greater than the energy bandgap of semiconductor. As result of this, charge separation exists in semiconductor and creates an electric current. To quantifying the goodness ability of metal to produce heat and electric field, Andrien Lalisse et al. proposed the new way to calculate accurately plasmonic quality-factor of nanoparticle; Faraday number and Joule number, which will be explained later [19]. 
Plasmonic quality-factor of nanoparticle The efficiency of energy-harvesting devices, and of other devices that work through the plasmon-semiconductor mechanism, is directly associated with the plasmonic quality factor (PQF), which measures the strength of the plasmonic response when an EM wave interacts with the material; devices with a higher PQF will convert energy more efficiently. The PQF is evaluated as a function of the incident EM-wave energy within a certain range. Lalisse et al. suggested two dimensionless PQF parameters, expressed in terms of the plasmonic material's dielectric function, the Faraday number (Fa) and the Joule number (Jo), in order to quantify the ability of a plasmonic nanoparticle to enhance the electric-field intensity and to produce heat. Fa and Jo describe the properties caused by the LSPR [19]: Fa, describing the near-field enhancement, relates to the PIET mechanism, while heat generation is expressed by Jo, which corresponds to the PIHE mechanism. For a nanospherical structure, Fa and Jo can be calculated from equations (1) and (2), where ε is the complex permittivity consisting of the real part ε1 and the imaginary part ε2, n_s is the refractive index of the surrounding medium and E is the energy of the EM wave. To study how the PQF is altered by geometries other than the spherical nanoparticle, we also discuss another nanoparticle structure, the spheroid, which is divided into prolate and oblate forms (figure 2). These structures are deformed spheres with semi-axes a_x = a_y, stretched along the z-axis for the prolate case and along the x- and y-axes for the oblate case, and we define the aspect ratio r = a_z/a_x. For a prolate spheroid r is always greater than 1, whereas r is less than 1 for an oblate structure. The depolarization factor L_j (equation 5) describes the polarization of the EM field along the j-axis of the structure and fulfils the restriction L_x + L_y + L_z = 1; it takes a different analytical form and value for each specific structure type, and for a sphere L = 1/3 [19]. Deriving analytical solutions of L_j for other particular geometries is a challenge. Fa_j and Jo_j describe the PQF that arises around the nanoparticle boundary along the j-axis. In the spheroid case, because the x and y radii are the same, both axes have the same depolarization factor and the same values of Fa and Jo, so only L_z has to be calculated, and then L_x = L_y = (1 − L_z)/2. The depolarization factors of the prolate (L_p,z) and oblate (L_o,z) structures along the z-axis can be calculated analytically from equations (6) and (7) as functions of the aspect ratio of the spheroid [26]. Dielectric function of metals Determining the type, size and geometry of the metal to be applied as a plasmonic material is pivotal to achieving the highest energy-conversion efficiency, and all of these aspects are strictly related to the metal's dielectric function. It is therefore important to obtain precise dielectric-function data over a broad spectrum, from the far-infrared to the vacuum-ultraviolet, in order to assess metals as plasmonic materials across a wide range of applications; the visible range is only a special case relevant to solar-energy-converter devices (e.g. solar cells and water splitting). Many studies have characterized various metals only over limited energy ranges, while some works obtained data over a very wide spectrum using electron-scattering spectroscopy, a method that damages the material by changing the structure and morphology of the samples, so that the obtained data differ slightly from the real values [27][28].
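As a rough numerical illustration of the spheroid quality-factor and depolarization-factor relations discussed above, the sketch below evaluates the standard quasi-static dipolar expressions for a spheroid with depolarization factor L_z. Because equations (1)-(7) are not reproduced in the text, the quantities fa_like and jo_like are only stand-ins proportional to the near-field enhancement and to the ohmic heating; the exact normalization of Fa and Jo in [19] may differ by constant prefactors, and the function names and example permittivity are hypothetical.

import numpy as np

def depolarization_z(r):
    # Depolarization factor L_z of a spheroid with aspect ratio r = a_z/a_x.
    # Standard quasi-static results: r > 1 prolate, r < 1 oblate, r = 1 sphere (L_z = 1/3).
    if np.isclose(r, 1.0):
        return 1.0 / 3.0
    if r > 1.0:  # prolate, unique long z-axis
        e = np.sqrt(1.0 - 1.0 / r**2)
        return (1.0 - e**2) / e**2 * (np.log((1.0 + e) / (1.0 - e)) / (2.0 * e) - 1.0)
    e = np.sqrt(1.0 - r**2)  # oblate, unique short z-axis
    return (1.0 - np.sqrt(1.0 - e**2) / e * np.arcsin(e)) / e**2

def quality_factors(eps, n_s=1.0, r=1.0):
    # Near-field (fa_like) and heating (jo_like) figures of merit along the z-axis of a spheroid.
    # eps: complex permittivity of the metal; n_s: refractive index of the medium; r: aspect ratio a_z/a_x.
    # fa_like tracks |E_out/E_0|^2 at the particle tip, jo_like the dissipation inside the particle.
    eps_s = n_s**2
    L_z = depolarization_z(r)
    denom = eps_s + L_z * (eps - eps_s)  # resonance where this denominator approaches zero
    fa_like = abs(eps / denom)**2
    jo_like = eps.imag * abs(eps_s / denom)**2
    return fa_like, jo_like

# Hypothetical permittivity value at a single photon energy (not measured data):
print(quality_factors(-13.0 + 1.1j, n_s=1.33, r=2.0))

Sweeping eps over measured spectra and locating the peaks of fa_like and jo_like would reproduce the kind of metal-to-metal comparison plotted later in figure 5.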
Because electron-scattering spectroscopy alters the sample, another apparatus that is non-destructive and works over a broad spectrum is needed. Spectroscopic ellipsometry (SE) is a non-invasive tool for characterizing materials and can be operated at energies ranging from the far-infrared to the vacuum-ultraviolet. Through the fitting process in SE data analysis, we can obtain both the real and imaginary parts of the complex permittivity, which are mutually Kramers-Kronig consistent. Our group has shown that SE is the best method for obtaining the optical properties of materials over a broad energy spectrum and can be used precisely to determine the state of the plasmon and its interaction with the exciton in ZnO films [29][30][31][32][33][34][35][36]. In this work, we study the potential of various metals, namely platinum (Pt), aluminum (Al), copper (Cu) and nickel (Ni), as alternative plasmonic materials compared to gold (Au) and silver (Ag) by analyzing their complex dielectric functions. The metals were characterized with an SE apparatus at energies from 0.5 to 6.5 eV. The obtained dielectric functions were then used to calculate the plasmonic quality factor of each metal for specific structures. Experimental method In order to acquire the complex permittivity of the metals over a vast spectrum from the far-infrared to the vacuum-ultraviolet, a SENTECH SE850 ellipsometer was equipped with three different light sources: a UV source (deuterium), a UV-VIS source (Xe lamp) and an NIR source (halogen lamp from an FTIR spectrometer). Fitted with a polarizer, this instrument provides a linearly polarized EM wave with energies ranging from 0.5 to 6.5 eV. The incident EM wave illuminated the metal's surface plane at angles of 50°, 60° and 70° to obtain the amplitude ratio (ψ) and phase difference (∆) between the p- and s-polarized light, which are recorded by the detector. The measured SE data are expressed as a simple complex number relating the complex reflection amplitudes of the p-polarized (r_p) and s-polarized (r_s) light when the EM wave is reflected by the metal surface (equation 8) [37]. In SE data analysis, a fitting process is required to extract the complex dielectric function of the metals. In this process, to match the measured SE data (equation 8) to the calculated model, the non-linear Levenberg-Marquardt regression algorithm was used to obtain the best fit. The goodness of fit is judged visually by how well the ψ and ∆ data from the experiment and from the generated model match, and it is evaluated numerically by the MSE (mean squared error). The best value of the MSE is approximately 1, and it may be much larger (>10) depending on the sample's structure; thick or multilayer samples give higher MSE values, which are acceptable as long as the measured and modeled functions agree [38]. The MSE value can be calculated by equation 9, where n is the number of EM-wave data points, m is the number of model parameters, N = cos(2ψ), C = sin(2ψ)·cos(∆) and S = sin(2ψ)·sin(∆), while the superscripts E and M indicate experimental and model data, respectively [38]. The CompleteEASE software was used in the fitting process [38]. Because of our simple air/bulk architecture with negligible roughness, the complex pseudo-dielectric function can be used in the fitting; it is computed from equation 10, where φ0 is the angle of the incident wave with respect to the normal of the sample's plane [37]. We then build our dielectric-function model to fit the pseudo-dielectric function using equations 8 and 10.
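As a small numerical aid for this last step, the sketch below evaluates the two standard relations underlying equations 8 and 10 for a simple ambient/substrate system: the ellipsometric ratio ρ = tan(ψ)·exp(iΔ) and the pseudo-dielectric function ⟨ε⟩ = sin²φ0·[1 + tan²φ0·((1−ρ)/(1+ρ))²]. These closed forms are textbook two-phase ellipsometry results; the ψ and Δ values used in the example are placeholders rather than measured data.

import numpy as np

def pseudo_dielectric(psi_deg, delta_deg, aoi_deg):
    # Two-phase (ambient/bulk) pseudo-dielectric function <eps>.
    # psi_deg, delta_deg: ellipsometric angles (degrees); aoi_deg: angle of incidence from the sample normal.
    psi, delta, phi0 = np.radians([psi_deg, delta_deg, aoi_deg])
    rho = np.tan(psi) * np.exp(1j * delta)  # equation 8: rho = r_p / r_s = tan(psi) * exp(i*delta)
    return np.sin(phi0)**2 * (1.0 + np.tan(phi0)**2 * ((1.0 - rho) / (1.0 + rho))**2)  # equation 10

# Placeholder angles, not measured values:
print(pseudo_dielectric(psi_deg=30.0, delta_deg=120.0, aoi_deg=70.0))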
Instead of using the Drude model, which cannot take into account absorption by interband transitions, we used a combination of Lorentz oscillators to represent the optical properties of the metals (equation 11). The first term is a constant describing the real permittivity at infinite energy, the second term is a sum of Lorentz oscillators, and the third term is a pole function that only describes the real permittivity in the infrared and UV regions outside the measured range [38]. Figure 3 shows the results of the fitting process for Pt, Al, Cu and Ni, each with its error (MSE). As shown in figure 3, our Lorentz model (black line) fits the measured ψ (red) and ∆ (blue) data for 70°, 60° and 50° (shown in progressively brighter shades) over almost the whole spectrum with an MSE of less than 8, so this optical model can acceptably be used to extract the permittivity of the metals, as the model matches the measured data well. The complex permittivity of all the metals is depicted in figure 4. For comparison, Au and Ag from Palik's handbook of optical constants are also included over the energy range 1.2-5 eV [28]. The metals have negative ε1 over a broad energy spectrum, and the behavior around ε1 ≈ 0 is unique for each metal; this is the key condition for the existence of a surface plasmon, which only appears when ε1 is negative and close to zero. Cu, Ag and Au approach this condition in the visible region, whereas Pt, Al and Ni do so in the UV region. In the absorption (ε2) spectra, a strong Drude response is observed for Pt, Al and Ni at infrared energies and a weak one for Cu. Low absorption in a plasmonic material is critical to sustaining the surface plasmon with low loss. In addition, for Al, Cu and Ni, losses from interband transitions are observed. Result and discussion To quantify numerically the surface-plasmon response of the metal nanoparticles, the PQF values Fa and Jo were calculated for a spherical geometry surrounded by vacuum (n_s = 1); they are shown in figure 5, superimposed on the solar spectrum to assess the suitability of the metals for solar-energy-harvesting devices. It can be seen clearly that, for both electric-field enhancement (Fa) and heat production (Jo), Cu is a promising metal that works well across the visible region, comparable to the superior Ag and Au. Al is the best plasmonic material for devices working from the blue part of the visible range to the vacuum-ultraviolet. For enhancing the electric field, Ni and Pt appear ineffective, but they are promising heat producers in the UV region. The highest peak for each metal corresponds to the LSPR resonance condition. Because of the high ability of Cu to collect solar energy over almost the entire visible spectrum, only Cu is investigated in the further analysis of how the PQF is altered by changing the geometry and surrounding medium. In solar-energy-harvesting devices, using a plasmonic material that is responsive in the visible region is important, and, as calculated above for vacuum, Cu is a great material for energy applications. In fact, as explained above, in solar-cell and water-splitting devices the metals are surrounded by an HTM solution that contributes electrons to the metal to sustain the electric current through the system, so n_s should be higher than 1. At present there is still a lack of characterization of the various types of HTM solution used in devices, and the refractive indices of many HTM solutions remain unreported. To assess this HTM effect on Fa and Jo, we used water (n_s = 1.33) to represent the refractive index of an HTM.
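As a quick numerical check of this medium effect, the quasi-static sphere expressions used in the earlier sketch (with the same caveat that they are only stand-ins for the exact Fa and Jo of [19]) can be compared for the two refractive indices; the permittivity value below is a placeholder, not the measured Cu data.

eps = -13.0 + 1.1j  # placeholder permittivity at one photon energy, not the measured Cu value
for n_s in (1.0, 1.33):  # vacuum versus a water-like HTM
    eps_s = n_s**2
    denom = eps_s + (eps - eps_s) / 3.0  # quasi-static sphere, depolarization factor L = 1/3
    fa_like = abs(eps / denom)**2  # near-field figure of merit
    jo_like = eps.imag * abs(eps_s / denom)**2  # heating figure of merit
    print(f"n_s = {n_s:4.2f}  Fa-like = {fa_like:6.2f}  Jo-like = {jo_like:5.3f}")

Both figures of merit rise when n_s is increased from 1.0 to 1.33, consistent with the improvement reported next in figure 6.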
Figure 6 displays the improvement of the PQF as the refractive index of the surrounding medium increases from 1 to 1.33. This result is important for the further development of plasmonic-device systems: the aim is not only to discover effective HTM solutions that donate electrons but also to fabricate HTMs with higher refractive indices to increase device efficiency. There are many ways to raise the PQF of a metal nanoparticle, one of which is to tune the nanoparticle geometry. Spheroidal (prolate and oblate) architectures have been proposed to enhance the PQF beyond that of a sphere. We recalculated the PQF for a Cu prolate structure with aspect ratio 2; figure 7 shows a considerable enhancement of Fa and Jo along the z-axis of the metal, while they decrease along the x- and y-axes. By changing Cu to a prolate shape with aspect ratio 2 we can obtain a PQF greater than that of Au and Ag in spherical geometry, and a higher aspect ratio improves the PQF of Cu even further. The LSPR resonance energy is also red-shifted. On the other hand, a Cu oblate spheroid with aspect ratio 0.5 raises the PQF and blue-shifts the LSPR resonance energy along the x- and y-axes of the metal, and a lower oblate aspect ratio enhances the plasmonic quality factor even more (figure 8). In device applications, it is necessary to consider how prolate and oblate particles are attached to the semiconductor in order to generate optimum efficiency. These prolate and oblate results offer another perspective: there is no need to fabricate high-cost plasmonic materials when the geometry of low-cost materials can be manipulated to obtain a greater PQF. Conclusion Copper (Cu) is a promising plasmonic material, with an optimum region from the near-infrared to visible energies, that could potentially substitute for gold (Au) and silver (Ag). Aluminium (Al) is the best metal for plasmonic devices operating in the UV region. Nickel (Ni) and platinum (Pt) are impractical for plasmonic devices that utilize near-electric-field enhancement, but they are good metals for producing heat in the UV region. Enhancement of the plasmonic quality factors Fa (electric-field enhancement) and Jo (heat generation) of a metal nanoparticle can be achieved by changing the geometry to a spheroid (prolate or oblate) instead of a sphere. For a prolate spheroid, increasing the aspect ratio improves the PQF of the metal, whereas for an oblate spheroid decreasing the aspect ratio raises the PQF. In addition, raising the refractive index of the medium surrounding the metal lifts its plasmonic response. Our study offers an alternative fundamental perspective for plasmonic-device development, especially for energy purposes.
2019-04-29T13:17:47.099Z
2017-07-01T00:00:00.000
{ "year": 2017, "sha1": "8c3cf49025c70143e00dbcc23fb606ca579a8a57", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/877/1/012040", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "2e5a75796f3687706db72e1c5e9aaccb790f577c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
226056314
pes2o/s2orc
v3-fos-license
Efficacy and Safety of Human Serum Albumin–Cisplatin Complex in U87MG Xenograft Mouse Models Cisplatin (cis-diamminedichloroplatinum (II), CDDP) is a chemotherapeutic drug widely used against many solid tumors. A pharmacokinetics study found that CDDP can bind to human serum albumin (HSA), which is the most abundant plasma protein in serum. HSA has the advantage of being a nanocarrier and can accumulate in tumors by passive targeting and active targeting mediated by the secreted protein acidic and rich in cysteine (SPARC). In this study, we investigated the possibility of using a CDDP–HSA complex (HSA–CDDP) as a SPARC-mediated therapeutic agent. To investigate the HSA-dependent therapeutic effect of HSA–CDDP, we used two types of U87MG glioma cells that express SPARC differently. HSA–CDDP was highly taken up in SPARC expressing cells and this uptake was enhanced with exogenous SPARC treatment in cells with low expression of SPARC. The cytotoxicity of HSA–CDDP was also higher in SPARC-expressing cells. In the tumor model, HSA–CDDP showed a similar tumor growth and survival rate to CDDP only in SPARC-expressing tumor models. The biosafety test indicated that HSA–CDDP was less nephrotoxic than CDDP, based on blood markers and histopathology examination. Our findings show that HSA–CDDP has the potential to be a novel therapeutic agent for SPARC-expressing tumors, enhancing the tumor targeting effect by HSA and reducing the nephrotoxicity of CDDP. Introduction Cisplatin (cis-diamminedichloroplatinum (II), CDDP) is a chemotherapeutic drug widely used against many solid tumors [1,2]. It was first described in 1845, and the antitumor potential of CDDP was first discovered in the 1960s [3]. Since its approval for clinical use in the United States in 1978, CDDP has been actively used for tumor therapy. However, CDDP is associated with significant dose-limiting toxicities, including nephrotoxicity and neurotoxicity; hence, solving this toxicity problem is important [4]. According to a pharmacokinetic study, 65-98% of CDDP administrated intravenously is bound to blood plasma proteins within one day, particularly albumin [5]. Human serum albumin (HSA) plays an important role in the transport and disposition of endogenous and exogenous ligands present in blood [6]. HSA is the most abundant plasma protein with a serum concentration of 40-45 g/L in healthy adults. It has the advantages of being a nanocarrier, water-soluble and biocompatible, and has a long half-life in blood and low toxicity [7,8]. The main reason for using HSA as a drug delivery system is its ability to accumulate in tumors by an enhanced permeability and retention (EPR) effect [9,10]. It is also known that the secreted protein acidic and rich in cysteine (SPARC), an albumin-binding protein, can sequester albumin in tumor stroma and contribute to the tumor-specific uptake of albumin [11,12]. SPARC is an albumin-binding protein highly expressed in some cancers, which functions to modulate cell-matrix interaction, proliferation, survival, and migration [13][14][15][16]. Our recent study showed that SPARC mediates the active targeting of HSA [17]. Using HSA as a CDDP carrier can also help reduce the nephrotoxicity associated with CDDP. The nephrotoxicity comes from the main excretion of CDDP, which occurs in the kidney and is due to the small size of CDDP [18,19]. By binding CDDP to HSA, HSA can inhibit the excretion of CDDP through the kidney. 
CDDP bound to HSA (HSA-CDDP) is widely thought to be therapeutically inactive, but its biological effects are still controversial [20][21][22]. Some clinical trials showed increased patient survival times [23][24][25]. To utilize CDDP bound to HSA (HSA-CDDP) as a therapeutic agent, its chemotherapeutic effect should be examined and verified. To this end, the interaction of CDDP with serum albumin has been investigated at the molecular level. It is thought that up to five platinum atoms can bind onto an albumin molecule through the formation of coordinative bonds at His105, Met298, Met329, Met548 and His288; that is, CDDP binds to His and Met side-chain residues located on the albumin surface [26][27][28]. Even with this detailed structural investigation, the feasibility of HSA-CDDP as a therapeutic agent remained unresolved. Moreover, despite the high expression in glioma of SPARC, an albumin-binding protein, the feasibility of drug-carrying HSA as a therapeutic agent against glioma has not been sufficiently studied. In this study, we investigated the potential of HSA-CDDP as a therapeutic agent in a glioblastoma model and focused on the SPARC-mediated efficacy and safety. Characterization of HSA-CDDP In this study, we used CDDP-bound HSA as a therapeutic drug for tumors. Based on a previous paper, we set the conditions for binding CDDP to HSA [29]. After we conjugated CDDP to HSA, the molecular weight of HSA-CDDP was analyzed by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry. The molecular weight of HSA-CDDP was 67,522 ± 129 Da. The proportion of CDDP per mole of HSA was calculated from the molecular weight difference between HSA-CDDP and HSA; the average number of CDDP molecules bound to one mole of HSA was 4.07. To evaluate whether HSA-CDDP can be taken up by glioblastoma cells in a SPARC-mediated, HSA-dependent manner, we performed cellular uptake imaging using confocal microscopy based on our previous papers [17,30]. Two types of U87MG cell lines were used: U87MG cells, which highly express SPARC protein, and U87MG-shSPARC cells, which exhibit low expression of SPARC protein. For objective comparison, confocal microscopic images of the cells were acquired and quantified (n = 5 for each group). Our results showed higher uptake of HSA-CDDP in U87MG cells than in U87MG-shSPARC cells (Figure 1a,c, FNR648-HSA-CDDP, p < 0.001). To evaluate SPARC-mediated HSA-CDDP uptake, cells were co-treated with exogenous SPARC protein. With co-treatment of exogenous SPARC, HSA-CDDP accumulated strongly in U87MG-shSPARC cells (Figure 1a, FNR648-HSA-CDDP in U87MG-shSPARC, and Figure 1c, +SPARC). This SPARC-dependent uptake pattern was similar to the manner of HSA uptake in cells (Figure 1a, FNR648-HSA and FNR648-HSA-CDDP). This result demonstrated that the uptake of HSA-CDDP into cells is HSA dependent. The cellular uptake of each group was calculated as follows: the average signal intensity of FNR648 was divided by the number of DAPI-positive cells, which represents the number of viable cells in the image. The cellular uptake of FNR648-HSA or FNR648-HSA-CDDP in U87MG cells was considered 1 and, based on this value, we expressed each group as a ratio relative to U87MG. Data are presented as means ± SD (n = 5). *** p < 0.001.
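The CDDP-per-HSA ratio quoted above follows from a simple mass balance on the MALDI-TOF masses. In the sketch below, the bare-HSA mass of about 66,300 Da is an assumed value used only to illustrate the arithmetic (the measured HSA mass is not stated in the text), while 300.05 g/mol is the formula mass of cisplatin.

def cddp_per_hsa(mw_complex_da, mw_hsa_da, mw_cddp_da=300.05):
    # Average number of CDDP molecules bound per HSA, from the MALDI-TOF mass shift.
    return (mw_complex_da - mw_hsa_da) / mw_cddp_da

# 67,522 Da is the reported HSA-CDDP mass; the bare-HSA mass is an assumption for illustration only.
print(round(cddp_per_hsa(mw_complex_da=67522.0, mw_hsa_da=66300.0), 2))  # about 4.07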
Cytotoxic Effect of HSA-CDDP In Vitro The effect of HSA-CDDP on cancer cell viability was also examined. The cytotoxicity of HSA-CDDP was studied in U87MG and U87MG-shSPARC cells using a cell counting kit-8 (CCK-8) assay. HSA-CDDP showed dose-dependent toxicity toward cells ( Figure S1b, HSA-CDDP). Specifically, the toxicity of HSA-CDDP was higher with U87MG cells than U87MG-shSPARC cells ( Figure S1b and Table 1; IC50 for HSA-CDDP in the two cell lines). Since U87MG and U87MG-shSPARC showed similar cytotoxicity to CDDP ( Figure S1a and Table 1; IC50 for CDDP in cells), this indicates that the cytotoxicity of HSA-CDDP to U87MG and U87MG-shSPARC is dependent on the SPARC-mediated cellular uptake of HSA-CDDP. It is well known that cell apoptosis is the basic mechanism of action of CDDP [31]. Apoptosis triggered by HSA-CDDP was observed by flow cytometry analysis using Annexin V-FITC/PI co-staining ( Figure 2 and Figure S2). The result showed that HSA-CDDP induced apoptosis in a dose-dependent manner ( Figure S2b). We compared the percentage of apoptotic cells at the concentrations of CDDP (4.9 µM) and HSA-CDDP (19.3 µM) at which 30-40% of cells remained alive after drug treatment in the CCK-8 results. U87MG exhibited a higher apoptosis rate than U87MG-shSPARC cells ( Figure 2, HSA-CDDP, 19.3 µM; U87MG, 23.2%; U87MG-shSPARC, 8.8%). Comparing CDDP 4.9 µM and HSA-CDDP 19.3 µM in U87MG cells, HSA-CDDP exhibited a higher apoptotic cell percentage than CDDP, although they both showed similar cellular toxicity in CCK-8 ( Figure 2). These results strongly indicate that the cellular toxicity of HSA-CDDP is SPARC-mediated, and that HSA enhanced uptake into cells and apoptosis.
Antitumor Effect of HSA-CDDP In Vivo The therapeutic effect of HSA-CDDP was investigated in a xenograft tumor mouse model using U87MG and U87MG-shSPARC cells. Drug administration began when tumor size reached 50 mm 3 and tumor growth was observed until the tumor volume reached 2000 mm 3 . Mice were administered PBS, CDDP, or HSA-CDDP by intravenous injection every other day, to a total of seven times. With regard to the U87MG tumor model, 12 days after the first drug treatment, the CDDP- and HSA-CDDP-treated mice showed significantly reduced tumor growth compared with the PBS group ( Figure 3a, p < 0.001). In the U87MG-shSPARC tumor model, 26 days after the first drug treatment, CDDP-treated mice showed significantly reduced tumor growth compared with the PBS group ( Figure 3b, p < 0.001), but HSA-CDDP-treated mice showed no difference in tumor growth relative to PBS. The weight of the mice was significantly decreased in the CDDP-treated group (Figure 3c,d), but this did not occur with HSA-CDDP-treated mice. This means that CDDP may cause a negative effect in the in vivo system, but this effect was not seen with the HSA-CDDP treatment. For the survival rate in the U87MG tumor model, CDDP- and HSA-CDDP-treated mice exhibited a prolonged survival time compared with the PBS group ( Figure 3e and Table 2). In the U87MG-shSPARC tumor model, HSA-CDDP-treated mice showed a similar survival rate to the PBS group, whereas CDDP-treated mice showed prolonged survival ( Figure 3f, Table 2). These results strongly suggest that the antitumor effect of HSA-CDDP is based on the SPARC-mediated HSA-dependent uptake in tumors and is similar to CDDP.
Biodistribution of HSA-CDDP In Vivo The biodistribution of HSA-CDDP was analyzed in normal mice and tumor-bearing mice 72 h after the last HSA-CDDP treatment of the antitumor-effect schedule (dosing every other day, seven times; the analysis point corresponds to 15 days after the first HSA-CDDP treatment). The amount of CDDP in each organ (brain, heart, liver, kidney, spleen, lung, and blood) was measured using inductively coupled plasma mass spectrometry (ICP-MS). HSA-CDDP showed higher blood distribution than CDDP (Figure 4a). Due to the prolonged blood distribution, all organs except the brain showed higher accumulation of CDDP in the HSA-CDDP group (Figure 4a). In particular, HSA-CDDP was highly accumulated in the liver (Figure 4a). In the U87MG tumor accumulation data, the amount of CDDP in the HSA-CDDP group was significantly higher than in the CDDP group (Figure 4b, p < 0.05). In the HSA-CDDP group, U87MG tumors showed a higher accumulation of CDDP than U87MG-shSPARC tumors (Figure 4b, HSA-CDDP, p < 0.05). In the CDDP treatment group, there was no difference in CDDP accumulation between U87MG and U87MG-shSPARC (Figure 4b). In U87MG tumors, the HSA-CDDP group showed significantly increased CDDP accumulation compared with the CDDP group ( Figure 4b). These results show that HSA-CDDP is taken up by tumors in an HSA-dependent manner. Biosafety of HSA-CDDP In Vivo In the antitumor-effect experiments, mice treated with CDDP showed significant weight loss compared to the group treated with PBS. This result shows that there can be negative effects of CDDP in the in vivo system, and this needs to be verified. It is well known that the major dose-limiting side effect of CDDP is nephrotoxicity [4]. In the biodistribution studies, HSA-CDDP was highly accumulated in the liver. We assessed the toxicity of HSA-CDDP in the mice by monitoring blood markers of liver and kidney function and body weight 72 h after the final HSA-CDDP treatment of the antitumor-effect plan (every other day, seven times, 15 days after the first HSA-CDDP treatment). The CDDP-treated group showed significant body weight loss compared to the PBS and HSA-CDDP groups, similar to the weight result in the antitumor-effect experiment (Figure 5a). In terms of liver function, aspartate transaminase (AST) and alanine aminotransferase (ALT) showed no significant difference among groups (Figure 5a).
For kidney function, blood urea nitrogen (BUN) increased in the CDDP group (Figure 5a). In the hematoxylin-eosin (H&E) staining images, CDDP-treated kidney tissue showed tubular degeneration (Figure 5b, black arrows) and extensive epithelial vacuolization (Figure 5b, yellow arrows). In the terminal deoxynucleotidyl transferase (TUNEL) assay, TUNEL-positive cells were only identified in the kidneys of CDDP-treated mice ( Figure S3). From these results, it was confirmed that HSA-CDDP reduced the nephrotoxicity of CDDP in vivo. Discussion The current standard therapy for glioma is surgical resection. However, it is difficult to remove all gliomas completely because they are occasionally located in functional brain areas [32]. There are other issues related to using CDDP for glioma therapy, as only a limited amount of any given platinum drug dose is delivered due to the blood-brain barrier (BBB) [33][34][35]. Using the HSA-CDDP complex as a therapeutic agent can be a solution to eliminate the remnant glioma in the brain.
The interaction of different platinum-based drugs with albumin has received much attention in the last 30 years [36,37]. HSA is a famous nanocarrier that can prolong the blood circulation time and enhance tumor accumulation by the EPR effect and SPARC-mediated active targeting [9,10,17]. In this study, we used HSA-CDDP as a therapeutic complex, using HSA as a nanocarrier. The cellular uptake of HSA-CDDP was SPARC-dependent. In vitro toxicity results showed that SPARC-expressing U87MG cells exhibited higher cellular toxicity to HSA-CDDP. The antitumor effect of HSA-CDDP also showed a SPARC dependent HSA-mediated therapeutic effect in U87MG, which was similar to that of CDDP. The tumor accumulation of CDDP was significantly higher in HSA-CDDP treated U87MG tumors. The cellular cytotoxicity data revealed that the cytotoxicity of HSA-CDDP was lower than CDDP alone. However, CDDP and HSA-CDDP showed a similar antitumor effect in vivo. There are two possible explanations for this: First, it can be postulated from the characteristics of SPARC, which is a secreted form of protein from tumor cells. SPARC is known to facilitate albumin accumulation in tumor stroma [12,38]. As the in vitro environment is somewhat different to the in vivo tumor environment, SPARC-mediated HSA accumulation may not be as dominant in vitro. Another reason may be due to the enhanced blood circulation time of HSA-CDDP by the nanocarrier of HSA. The half-life of CDDP in plasma is 0.27 h [39]. SPARC is highly expressed in many tumors, such as glioma, melanoma, and breast cancer [40,41]. In this study, we used glioma cell line U87MG, which exhibits the highest expression of SPARC in the cell line. To broaden the applicability of HSA-CDDP as a tumor targeting therapeutic effect, studies using other SPARC expressing tumors are also required. CDDP binding to HSA is known to be irreversible [42][43][44]. We showed the irreversible binding of CDDP to HSA through the serum stability test, where no free CDDP was detected from HSA-CDDP incubation with serum ( Figure S4a). Due to the irreversible binding of CDDP to HSA, it can be postulated that the distribution of HSA-CDDP can be similar to that of HSA. From the single photon emission computed tomography (SPECT) tests on 177 Lu labeled HSA and HSA-CDDP distribution imaging in mice, HSA-CDDP showed a similar distribution and blood circulation to HSA ( Figure S4b). Due to the HSA-dependent pharmacokinetics of HSA-CDDP and enhanced blood circulation in vivo, HSA-CDDP was highly accumulated in tumors and showed a similar therapeutic effect to that of HSA-CDDP in U87MG tumors. The mechanism of action of CDDP is based on the aquation of CDDP in cytoplasm because of the reduced cytoplasmic concentration of chloride ions [45,46]. This aquated CDDP can bind with nuclear DNA and induce a DNA damage response [47]. Aquated CDDP can also induce the accumulation of reactive oxygen species (ROS) or can physically interact with cytoplasmic nucleophiles such as mitochondrial DNA and proteins [48][49][50][51]. As a result of this activity, cells die via mitochondrial apoptosis [52]. During CDDP binding to HSA, CDDP loses chloride ions [28]. In light of the therapeutic effect of CDDP, it is unfavorable to use HSA-CDDP as an anticancer agent. This could be the reason for the similar therapeutic effect observed for HSA-CDDP and CDDP in the in vivo tumor models, despite the enhanced tumor accumulation of HSA-CDDP. 
Without free CDDP release from HSA-CDDP, HSA-CDDP showed an antitumor effect in vitro and in vivo. It can be postulated that HSA-CDDP can be taken up to cells by endocytosis and then degradation of HSA-CDDP could release free CDDP from HSA [53,54]. The detailed anticancer mechanism of HSA-CDDP at the cellular level is unknown. Further studies are needed to unravel the underlying mechanism of action of HSA-CDDP as a therapeutic agent at the cellular level. CDDP induces nephrotoxicity. CDDP is primarily excreted by the kidney where it is accumulated during the excretion process, preferentially in the S3 segment of the renal proximal tubules [55]. The accumulation of CDDP in proximal tubular cells is also mediated by the membrane transporters copper transporter-1 and organic cation transporter-2 (OCT2), which are mainly expressed in the basolateral membrane of renal proximal tubular cells [56,57]. A biosafety test of the blood marker and structural observations indicated that only the CDDP treated group showed increased kidney function blood marker (BUN) and structural malformation of the kidney. As a result of HSA being bound to CDDP, kidney accumulation of CDDP mediated by excretion and OCT2 can be reduced. In the biodistribution data, HSA-CDDP showed a higher accumulation in all normal organs than CDDP. We observed the biosafety of HSA-CDDP in mice by monitoring the blood marker and body weight 72 h after the last HSA-CDDP treatment. This biosafety marker and histological imaging showed no liver toxicity, but this result only represents short-term organ toxicity. In the liver, HSA-CDDP degradation can occur and this can cause the release of CDDP from the liver and induce late organ toxicity. To insist on the low toxicity effect of HSA-CDDP, long-term organ toxicity studies are needed. The possibility that a CDDP adduct of serum albumin can be used as a therapeutic agent is still highly controversial and has not been well investigated [58,59]. In this study, we showed that the HSA-CDDP complex can be used as a therapeutic agent and achieves a similar therapeutic effect to CDDP. Using HSA-CDDP as a therapeutic agent has some advantages: HSA-CDDP has a higher molecular size than CDDP, which can enhance the blood circulation time of CDDP and reduce the nephrotoxicity caused by the rapid excretion of CDDP through the kidney. In this study, we showed that this advantage worked well in an in vivo model, reducing nephrotoxicity. HSA-mediated tumor targeting can enhance the accumulation of CDDP in tumors, through the EPR effect or active tumor targeting. In this study, we showed that HSA-CDDP can target tumors in a SPARC-mediated HSA-dependent manner. Preparation of CDDP Conjugated HSA Human serum albumin (MP biomedicals, Irvine, CA, USA) was dissolved in PBS as 22 mg/mL concentration. CDDP (Sigma-Aldrich, St. Louis, MO, USA) was dissolved in PBS as a 1 mg/mL concentration. HSA and CDDP solutions were added to a new bottle at a molar ratio of 1:10 (1:1 volume ratio) and stirred for 24 h at 37 • C. To remove unconjugated CDDP and concentrate HSA-CDDP, centrifugal filtration was conducted using an Amicon Ultra centrifugal filter unit (nominal molecular weight limit 30 kDa; Millipore, Burlington, MA, USA). 
The HSA-CDDP concentration was measured with a bicinchoninic acid (BCA) protein assay kit (Pierce Endogen, Rockford, IL, USA), and the molecular weight was analyzed by matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF) using the TOF-TOF 5800 System (AB SCIEX, Framingham, MA, USA) to check the amount of CDDP per HSA in HSA-CDDP. Conjugation of Fluorescence Dye to HSA and HSA-CDDP We conjugated HSA with FNR648 fluorescence dye following the procedure described in our previous publication [60]. For FNR648 dye labeling to HSA-CDDP, we used the same method as that used for HSA. First, HSA-CDDP was modified using dibenzocyclooctyne (DBCO)-NHS. DBCO-HSA-CDDP was reacted with FRN648 dye at a molar ratio of 1:1 for 30 min at 37 • C. Fluorescence labeled HSA-CDDP was purified using PD-10 columns (GE Healthcare, Buckinghamshire, UK) and eluted with PBS. Confocal Microscopy Imaging for Cellular Uptake of FNR648-HSA-CDDP Cellular uptake of fluorescence labeled HSA or HSA-CDDP followed the same procedure described in our previous publication [17,30]. We incubated cells with FNR648-HSA or FNR648-HSA-CDDP for 2 h at 37 • C. In the exogenous SPARC treatment group, human SPARC (5 µg/mL) and each compound (FNR648-HSA or FNR648-HSA-CDDP) were co-incubated for 2 h at 37 • C. Five randomized images were acquired from all tests to quantify FNR648-HSA or FNR648-HSA-CDDP uptake by the cells. In each group, the average signal intensity of FNR648-HSA or FNR648-HSA-CDDP was divided by the number of DAPI-positive cells, which represented the number of viable cells in the image. This ratio was considered as cellular uptake of each group and used to quantify cellular uptake of each material. Cellular Cytotoxicity Studies U87MG and U87MG-shSPARC cells were added to 96-well plates (4000 cells/well). After overnight incubation, the cells were incubated with CDDP, HSA-CDDP at different concentrations for 72 h. Cell viability was analyzed using a Cell Counting Kit-8 (CCK-8, Dojindo Molecular Technologies, Tokyo, Japan) assay according to the manufacturer's protocols. Cell Apoptosis Study U87MG and U87MG-shSPARC cells were placed into 6-well plates (1.2 × 10 5 cells/well). After overnight incubation, cells were treated with CDDP, HSA-CDDP for 72 h. Cells were then harvested and co-stained with PI and Annexin V using an Annexin V-FITC apoptosis detection kit (BD, San Jose, CA, USA). The apoptosis of cells was analyzed using flow cytometry. Animal Xenografts Tumor Model and Anti-Tumor Effect In Vivo All animal studies were performed under approval from the Seoul National University Institutional Animal Care and Use Committee (IACUC No. 18-0231, 1 November 2018). BALB/c nude mice (5-week-old, male) were purchased from Orient Bio Inc. (Seongnam, Korea). U87MG or U87MG-shSPARC cells (2 × 10 6 cells/site) were injected subcutaneously into the right lower flanks. When tumor volume approached 50 mm 3 , mice were randomly divided into 3 groups (U87MG: n = 12 for PBS group, n = 9 for CDDP group and n = 11 for HSA-CDDP group. U87MG-shSPARC: n = 7 for PBS group, n = 6 for CDDP group and n = 9 for HSA-CDDP group). Mice were IV administrated (7 times, every other day) with PBS, CDDP, or HSA-CDDP. The drug dose of CDDP was 3 mg/kg and the dose for HSA-CDDP was the equivalent amount of CDDP (3 mg/kg CDDP from HSA-CDDP). The tumor size and body weight of each mouse were recorded every other day. Tumor volume was calculated using the equation V = 0.5 × L × W 2 , where L represents tumor length and W represents tumor width. 
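For completeness, the caliper-based volume formula quoted above is straightforward to script; the length and width values in the example are placeholders, not measurements from the study.

def tumor_volume(length_mm, width_mm):
    # Ellipsoid-approximation tumor volume V = 0.5 * L * W^2 (mm^3), as defined above.
    return 0.5 * length_mm * width_mm ** 2

# Placeholder caliper measurements (mm):
print(tumor_volume(10.0, 8.0))  # 320.0 mm^3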
To evaluate the survival rate of CDDP and HSA-CDDP in the tumor model, mice were monitored and euthanized following the humane endpoints guideline (specifically, rapid weight loss of 15-20% within a few days or tumor volume is larger than 2000 mm 3 ). Biodistribution of CDDP Using ICP-MS Normal mice were divided into 2 groups (n = 4 for each group) and IV administrated with CDDP or HSA-CDDP at the same dose and time schedule as the anti-tumor effect examination (3 mg/kg CDDP and equivalent dose of HSA-CDDP as CDDP every other day, seven times). At 72 h after the final IV administration (15 days after first IV administration), mice were euthanized and organs (brain, heart, liver, kidney, spleen, and lung) and blood were acquired and weighed. For tumor CDDP distribution, U87MG or U87MG-shSPARC cells (2 × 10 6 cells/site) were injected subcutaneously into the right lower flanks. When tumor volume approached 50 mm 3 , mice were randomly divided into 2 groups (n = 5 for each group) and IV administrated CDDP or HSA-CDDP (every other day, seven times). At 72 h after the last IV administration (15 days after first administration), mice were euthanized, and the tumor was acquired. All samples (organs, blood, and tumors) were lyophilized and we analyzed the amount of CDDP using the ICP-MS (NexION 350; Perkin-Elmer, Waltham, MA, USA) installed at the National Center for Inter-university Research Facilities (NCIRF) at Seoul National University. Hematology Analysis Mice were treated with PBS, CDDP, or HSA-CDDP (n = 4 for each group) as the therapy treatment (every other day, seven times) and murine blood samples were acquired at 72 h after the final IV administration (15 days after first treatment). After centrifugation at 4 • C for 20 min, the plasma was collected for blood biochemical analysis. The concentrations of AST, ALT, BUN, and creatinine were analyzed at DKKorea (Seoul, Korea). Histopathology Examination The kidney, liver, and spleen of the drug-treated mice were acquired and fixed with 4% paraformaldehyde. Tissues were embedded in paraffin and stained with hematoxylin and eosin (H&E). H&E staining images were acquired using an optical microscope (Olympus BX43, Tokyo, Japan). Serum Stability of HSA-CDDP The CDDP stability of HSA-CDDP in serum was examined following method in another publication [61]. Serum albumin/HSA-CDDP solution were incubated at 37 • C and free CDDP in serum was measured at 0.5, 1, 2, 4, 8, 16, 24, 48, 72 and 120 h after incubation. To obtain free CDDP from serum, an Amicon Ultra centrifugal filter unit (nominal molecular weight limit 30 kDa; Millipore, Burlington, MA, USA) was used. The CDDP concentration measured in the supernatant (bottom of the tube, smaller than 30 KDa) corresponded to the free CDDP level. The concentration of CDDP in the supernatant was analyzed using the ICP-MS (NexION 350; PerkinElmer, Waltham, MA, USA) installed at the National Center for Inter-university Research Facilities (NCIRF) at Seoul National University. SPECT Imaging Small animal SPECT imaging of tumor-bearing mice was performed using Nano SPECT/CT plus (Mediso medical imaging system, Budapest, Hungary). Mice were injected with 18.5 MBq of 177 Lu-HSA or 177 Lu-HSA-CDDP via the tail vein. SPECT images were acquired at 10 min, 4 h, 24 h, 48 h and 72 h after injection. To acquire SPECT images, mice were anesthetized with 2% isoflurane and placed in the prone position. SPECT scans were acquired at 30 s per frame and 40 projections (frames) at an 18 • angular step. 
The energy peaks of 177 Lu were set to 56.1 keV ± 10%, 112.9 keV ± 10%, and 208.4 keV ± 10%. Reconstructed data from SPECT were visualized using InVivoScope (Bioscan, Washington, DC, USA). Immunohistochemistry and TUNEL Assay The tumor and kidney were fixed with 4% paraformaldehyde and embedded in paraffin, which was further cut into 4 µm sections. To evaluate cellular apoptosis in kidney, a kidney section was stained using a TUNEL assay kit-HRP-DAB (ab206386; abcam, Cambridge, UK) according to the manufacturer's protocols. Statistical Analysis All statistical analyses were performed using GraphPad Prism. Student's t-test was used to determine the statistical significance of cellular uptake of HSA-CDDP, the antitumor effect of HSA-CDDP in the xenograft tumor model, and the biodistribution of HSA-CDDP in mice. P values below 0.05 were considered statistically significant. Conclusions In this study, the HSA-CDDP conjugate examined exhibited SPARC-dependent HSA-mediated tumor accumulation in a U87MG xenograft mouse model. The therapeutic effect of HSA-CDDP was similar to that of CDDP, but nephrotoxicity-the major dose-limiting effect of CDDP-was reduced. In conclusion, we showed the feasibility of CDDP-carrying HSA as a therapeutic agent against glioma.
2020-10-29T09:07:21.225Z
2020-10-26T00:00:00.000
{ "year": 2020, "sha1": "b0eeaa2b71192a7ea3f42c65736384ffb8fb925a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/21/21/7932/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "feb93af921737efec12b8a1229adb3d4e336ba1e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
265180157
pes2o/s2orc
v3-fos-license
Empowerment Teachers of SMA Malang Raya Making Eco Enzyme and Biology Module for Teachers In this 21st century, educators are required to be able to facilitate and inspire students to be critical and creative, and to involve students in exploring real-world issues and solving various problems. Learning about caring for the environment needs to be connected to its application in the daily lives of students in the family, the community, and the surrounding environment, so that learning is contextual and students experience meaningful learning. There are still quite a lot of teachers who have not been able to develop contextual learning or to raise problems from the surrounding environment or from daily activities. Eco-enzymes are one of the learning resources and media from the surrounding environment that can be brought into the learning process. This activity aimed to hold eco-enzyme training and teaching-module preparation for biology teachers of SMA (senior high schools) in Malang Raya. The training on making eco-enzymes was conducted for one day, starting with the theory of eco-enzymes, the independent curriculum, and the preparation of teaching modules, and continuing with direct practice in making eco-enzymes from various kinds of fruit peel. All participants had the opportunity to make eco-enzymes. The eco-enzymes that had been made were kept for 3 months in the participants' homes and harvested after that time. The training was followed by one month of group work to create teaching modules related to eco-enzymes so that teachers can apply them with students in schools. Based on the training, 6 teaching modules related to eco-enzymes were produced. Through this activity, Malang Raya high school teachers can develop learning sourced from the environment around their students. INTRODUCTION Learning activities designed by educators need to develop the skills demanded in this era of 21st-century learning, such as critical thinking and problem-solving skills, communication and collaboration skills, and creativity and innovation skills. The main goal of 21st-century learning is to build individual learning abilities and support learners' development into lifelong, active, independent learners; therefore teachers need to act as learning coaches (Susriyati Mahanal, 2014). Learning an attitude of caring for the environment needs to be associated with application in students' daily lives in the family, community, and surrounding environment so that learning becomes meaningful for students. Eco-enzymes are one of the learning resources and media in the surrounding environment that can be raised in the learning process. Eco-enzyme is a liquid extract produced from the fermentation of vegetable and fruit residues with a substrate of brown sugar or molasses, so household organic waste can be processed into eco-enzyme. Eco-enzyme is thus a liquid resulting from the fermentation of organic household waste such as fruit and vegetable peels, with a sugar substrate (brown sugar or cane sugar) and water. Essentially, eco-enzyme accelerates the biochemical reactions found in nature to produce enzymes with widespread benefits from fruit or vegetable waste. Eco-enzyme has many uses: it can serve as a household cleaning fluid or for body care, and it can also act as a natural fertilizer and an environmentally friendly pesticide. During the pandemic, eco-enzyme could also be used as a hand sanitizer (Agustina Monalisa Tangapo & Febby Kandou, 2022).
General description Malang Raya is a metropolitan area formed by the combination of three regions, Batu City, Malang City, and Malang Regency, in East Java Province (Ghozali et al., 2017). According to the Dapodikdasmen of the Ministry of Education, Culture, Research, and Technology (Kemendikbudristek), Malang Raya has more than 100 high schools and 1,209 high school teachers. Malang is known as a city of education because many educational institutions, from elementary schools up to the tertiary level, have been established there. In 2020, the Central Bureau of Statistics of East Java Province recorded 3 state universities and 46 private universities in Malang City. Problem A common problem experienced by teachers is that the development of learning tools in schools has not been carried out optimally because teachers are still confused, the task load is large, references are lacking, and teacher training is insufficient. The problem of education and teaching is fairly complicated and complex, because many factors affect it (Seidman, 2018). One of the factors that influence the teaching and learning process is the teacher. So far, a lot of criticism has been directed at teachers, especially at teaching approaches that are considered to place too much emphasis on mastering a number of concepts alone, without considering how to communicate a concept in a way that is more enjoyable and easier for students to understand and like (Ferdinandus, 2018). The task of the teacher is to convey the subject matter to students using particular methods in the learning process. The success of the teacher in delivering the material to learners largely depends on the method used, and whether educational goals are achieved depends on the learning process experienced by the student. Problems in everyday life can be related to materials that involve many disciplines, such as ecosystems, the environment, and biotechnology (Aasen & Sadownik, 2019). The activeness and independence of students in the learning process are the responsibility of the teacher; every topic should be accompanied by a practicum, designed so that the practicum activities are simple and easy to understand. These practicum activities have a positive impact on the development of students' process skills and train students to construct their own knowledge. The reality, however, is that practicum activities are often not carried out for various reasons, such as a lack of time, a lack of laboratories or laboratory assistants, a lack of supporting facilities and infrastructure, and so on (Junedi et al., 2020). Target solution The solutions offered are eco-enzyme training and training in the preparation of teaching modules. Teaching modules can be interpreted as learning tools or learning designs based on the applied curriculum, with the aim of achieving predetermined competency standards (Nurdyansyah, 2018). Eco-enzymes are organic solutions made by utilizing vegetable and fruit waste to create a clean and comfortable environment. Eco-enzymes are one of the learning sources and media from the surrounding environment that can be raised in the learning process and can serve as an example for several topics, such as environmental change and its impact on life, anaerobic metabolism, and biotechnology. This can be used as a breakthrough in compiling teaching modules as a learning medium with P5 (Andina, 2019).
METHOD This activity uses an empowerment approach through eco-enzyme training and assistance in making teaching modules that take eco-enzymes as their theme. The initial stage is carried out through the delivery of material by the community service team regarding the benefits of eco-enzymes, the relationship between eco-enzymes and microbes, the practice of making eco-enzymes, and P5. This is followed by direct training on making eco-enzymes with the partners, namely the biology teachers of senior high schools in Malang Raya. A pretest is held before the training and a posttest after it. The next stage is assistance in preparing the teaching modules. Based on the average results in the table above, a further test is needed in the form of a paired t-test. The paired t-test was analyzed using Microsoft Excel 2010. Based on the table above, it can be seen that the two-tailed P(T<=t) is smaller than the alpha value of 0.05. Thus, it can be concluded that there is a significant difference between the pretest and posttest in the eco-enzyme training for the development of teaching materials in the form of modules. Teaching modules play a major role in supporting teachers in designing learning. In preparing learning tools, which play an important role for teachers, teachers hone their ability to think and to innovate in teaching modules (Nesri, 2020). The teaching module is closely related to the independent curriculum. The independent curriculum prioritizes character development through learning content and the Pancasila student profile (Sungkono, 2009). The character that is formed reflects the important points of Pancasila: noble character, devotion, independence, critical thinking, the ability to work together, and creativity (Zubaidah, 2019). CONCLUSIONS AND SUGGESTIONS The eco-enzyme making training was carried out offline in Building B21 of the Department of Biology and online through Google Meet, with participants consisting of biology teachers from the districts and cities of Malang Raya. The eco-enzyme training was fairly successful because it exceeded the desired target. Teaching modules (lesson designs) were prepared based on the results of the eco-enzyme training and address several of the students' basic competencies. The teaching modules take the form of P5 projects, and the participants were divided into 6 groups to make it easier to compile them. Table 1. Average Pretest and Posttest Results. Table 3. Results of the t-Test: Paired Two Sample for Means.
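The paired pretest/posttest comparison described above (performed with Microsoft Excel 2010 in the original activity) can be reproduced with any statistics package. The sketch below shows an equivalent paired t-test in Python; the scores are invented placeholders, since the actual participant data are not reported in the text.

from scipy import stats

# Hypothetical pretest/posttest scores for the same participants (placeholders, not the real data)
pretest = [55, 60, 48, 62, 58, 65, 52, 59, 61, 50]
posttest = [70, 78, 66, 74, 72, 80, 69, 75, 77, 68]

t_stat, p_two_tail = stats.ttest_rel(pretest, posttest)  # paired two-sample t-test
print(f"t = {t_stat:.3f}, two-tailed p = {p_two_tail:.4f}")
if p_two_tail < 0.05:
    print("Significant difference between pretest and posttest at alpha = 0.05")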
2023-11-15T17:51:20.221Z
2023-04-30T00:00:00.000
{ "year": 2023, "sha1": "069ae38603a7ee597917e95cca3edf01d14ba9c6", "oa_license": "CCBYSA", "oa_url": "https://journal2.unusa.ac.id/index.php/CDJ/article/download/3874/2065", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "21a1311d799781c44548a4bb2312e9e861233645", "s2fieldsofstudy": [ "Education", "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
243823907
pes2o/s2orc
v3-fos-license
Cancer-Related Alopecia: From Etiologies to Global Management Simple Summary Although it does not represent a condition that threatens the life of patients, alopecia nevertheless has an essential impact on the quality of life of patients, particularly in terms of the psychological and social aspects. Indeed, while it has long been considered an acceptable side effect in the management of patients, the progressive emergence of a patient-centered approach coupled with a better knowledge of the pathophysiological processes involved has led to a better consideration of alopecia, both on the preventive and palliative sides. Thus, cancerous alopecia can be multifactorial: iatrogenic (in particular via conventional chemotherapy), induced by a vitamin/nutritional deficiency, or even caused by the disease itself. In this state-of-the-art review, we therefore cover alopecia in an exhaustive manner by considering the different mechanisms involved and their frequency as well as the various therapies offered. Abstract Alopecia represents a multifaceted challenge with distinct etiologies and consequences. Transposed to the world of oncology, different types of alopecia and molecular pathways have been characterized, allowing a better understanding of the underlying mechanisms. In patients with cancer, alopecia can be iatrogenic (i.e., due to conventional chemotherapies, endocrine therapies, targeted therapies, immunotherapies, radiotherapy and surgery) or a direct consequence of the disease itself (e.g., malnutrition, scalp metastases and paraneoplastic syndromes). Identification of the actual incriminated mechanism(s) is therefore essential in order to deliver appropriate supportive care, whether preventive or curative. On the preventive side, the last few years have seen the advent of the automated cooling cap, a prophylactic approach supported by several randomized clinical trials. On the curative side, although the treatments currently available are limited, several promising therapeutic approaches are under development. Appropriate alopecia management is essential, particularly regarding its psychological repercussions with significant consequences on the quality of life of patients and their family and with a potential impact on treatment compliance. Introduction Alopecia, which is defined as a decrease in hair density, exhibits a wide range of features. Indeed, it may be localized, diffuse or total, acute or chronic, sudden or gradual, reversible or permanent. Alopecia might be considered as a common symptom to several pathologies with varied etiologies (e.g., immunological, inflammatory and infectious). In addition, alopecia can be the sentinel sign of systemic diseases: endocrine (e.g., thyroid dysfunction), autoimmune (e.g., systemic lupus erythematosus), psychiatric (e.g., trichotillomania) or infectious. Alopecia can be classified into three classes: non-scarring (such as androgenetic alopecia-AGA, which is the most common alopecia), scarring (with destruction of the hair follicle) and congenital alopecia [1]. Understanding hair alterations requires knowledge of its physiology. Briefly, hair is a complex mini-organism with rapid renewal, a dense vascular network and immune privilege. The hair's life cycle is schematically divided into four phases: anagen (4-6 years growth), catagen (3 weeks with massive apoptosis), telogen (3 month resting followed by expulsion) and kenogen (2-12 months latency). 
Hair follicles are asynchronous in humans, with ≈85% of anagen hair, ≈1% of catagen hair and ≈15% of telogen hair. Hair loss is considered as pathological when it represents more than 150 hairs per day. Importantly, hair loss is not necessarily accompanied by a visible decrease in hair density. Of note, effluvium corresponds to a sudden, abundant and diffuse hair loss, which can be acute or chronic and affect anagen or telogen hair [1]. Although it is not life threatening, alopecia represents one of the most important parameters affecting the quality of life (QoL) of patients with cancers, particularly in terms of the psychological and social aspects [2][3][4]. Notably, caregivers tend to underestimate the impact of alopecia on patients [5]. Indeed, while it has long been considered an acceptable side effect in the management of patients, the increasing number of cancer survivors coupled with a better knowledge of the pathophysiological processes involved has led to a better consideration of alopecia, both on the preventive and palliative sides. To our knowledge, although there are several excellent reviews focused on a specific aspect of cancer-related alopecia (e.g., reviews assessing mechanisms involved in hair disorders in cancer, notably through anticancer therapies), there is not currently an integrative review that could help physicians and healthcare providers involved in the supportive care in cancers in gaining a global approach. As such, the scope of this review is to provide physicians with a state-of-the-art and clinical practice-driven review, assessing in a comprehensive manner the distinct mechanisms involved in cancer-related alopecia, their respective frequencies and the current and future treatment approaches. Classification and Diagnosis In spite of being a common side effect of many cancer therapies, its clinical presentations, frequencies and underlying mechanisms are plural and mainly related to the type of treatment used, without distinction of gender or age. Clinically, alopecia can be accompanied by dysesthesia, pruritus and dryness of the scalp [6]. It is not restricted to the scalp and can therefore affect other body hair, such as the eyebrows, eyelashes and axillo-pubic hair. Table 1 shows the main characteristics of these attacks according to the treatment used [7][8][9][10][11][12][13][14]. Schematically, it is possible to classify antineoplastic-induced alopecia (ANIA) into three groups (which can be overlapping) according to their mechanism: follicle destruction (mainly chemotherapy and radiotherapy), follicle miniaturization (mainly endocrine and targeted therapies) and hair cycle blockage (mainly immunotherapy). By knowing the exact mechanism of action, the clinician is able to adjust the management of alopecia. Alopecia diagnosis is based on both anamnesis and clinical presentation, which becomes visible when hair density loss is greater than 50%. ANIA and other hair disorders share common clinical patterns. A macroscopic examination with dermoscopy (called "trichoscopy" when applied to the scalp) represents an important tool of diagnosis and prognosis, allowing a finer assessment of hair density, thickness and hair shaft anomalies. In addition, the trichogram, a semi-invasive quantitative technique consisting of a microscopic analysis of a plucked hair's bulb, allows the quantification of anagen and telogen hairs, which may be relevant for ANIA diagnosis and to perform differential diagnosis [15]. 
The severity of ANIA is mainly scored with the Common Terminology Criteria for Adverse Events (CTCAE) classification and the Severity of Alopecia Tool (SALT) score (Table 2) [16,17]. The SALT score, defined by Olsen and colleagues, is expressed as a percentage of hair loss and can be used either as a continuous or as a categorical variable (Figure 1).
Figure 1. Severity of Alopecia Tool (SALT) score scheme (adapted from reference [16]).
Table 2. Comparison of SALT, Olsen and CTCAE v5.0 scores to assess the severity of chemotherapy-induced alopecia (adapted from references [16,17]).
CTCAE in its version 5.0 defines two grades of severity. Grade 1 refers to hair loss of <50%, which is not obvious from a distance but only on close inspection, while grade 2 corresponds to hair loss of ≥50% of what is normal for that individual, which is readily apparent to others. While grade 1 can be hidden with a different hairstyle, grade 2 alopecia requires a wig if the patient desires to completely camouflage the disorder; furthermore, grade 2 alopecia is likely to have an impact on QoL. As such, QoL needs to be evaluated, notably in the context of severe and/or persistent alopecia. Two auto-questionnaires assessing the psychological distress induced by alopecia have been developed: the Chemotherapy-Induced Alopecia Distress Scale (directly assessing the psychological distress induced by alopecia) and the Hairdex, assessing alopecia-related QoL [2,18]. Importantly, the EORTC QLQ-BR45 includes an item on alopecia (item 34: "Have you been bothered by hair loss?") [19].
Chemotherapy
In spite of a frequency of ≈65% (all chemotherapy protocols combined) and the significant impact on patients' lives, research on chemotherapy-induced alopecia (CIA) has remained elusive until recently [20]. CIA appears sub-acutely, shortly after the initiation of the chemotherapy protocol, and becomes maximal within a few weeks. After chemotherapy, hair growth resumes a normal rhythm within 3 months and normally reaches an aesthetically suitable result at 6 months. Predominance in the frontal and occipital regions is classically observed [16]. Molecularly, several pathways have been characterized, eventually leading to massive apoptosis [21]. The mechanism is mainly an anagen effluvium, although telogen effluvium may also be observed [22]. The main predictive factor of CIA is the chemotherapy class (Table 3): alkylating agents, anthracyclines, taxanes and etoposide are the ones with the most common and severe effects [10,11]. Conversely, some molecules infrequently cause CIA; notably, molecules such as fluorouracil and methotrexate cause more patchy alopecia. Beyond the therapeutic class, various predictive factors are known: polychemotherapy regimen and dosage, concomitant anticancer treatment, nutritional and hormonal statuses and the presence of an underlying AGA [23]. Furthermore, a CIA risk stratification based on genomics is emerging, which could pave the way for better identification of high-risk patients in order to provide them with appropriate supportive care [24].
Table 3. Main chemotherapy molecules used against solid tumors and their corresponding frequency of alopecia (adapted from references [10,11]).
Focus on Persistent CIA (pCIA)
Although classically considered as a temporary phenomenon, several studies have shown that CIA might be persistent [10,20,25]. Initially described in hematology through allograft conditioning before bone marrow transplant [26], pCIA was later reported with solid tumors [27].
This latter discovery is manifested by a lesser knowledge of pCIA by oncologists compared to dermatologists [28]. pCIA is defined by the presence of alopecia beyond 6 months after chemotherapy completion. It can exhibit various clinical aspects: mostly diffuse and non-scarring (≈50% of cases), with possible scarring involvement. Histologically, destruction of the follicular epithelial stem cell pool and follicular miniaturization are the main suspected mechanisms [25]. Actually, this is not an uncommon phenomenon, especially in patients treated for breast cancer (BC) with taxane-based protocols [11]. Indeed, a prospective study has shown that BC patients treated with taxanes exhibited pCIA in ≈40% at 6 months, with persistence at 3 years [29]. These recent data reinforce previous observations, both retrospectively and prospectively [11]. Furthermore, pCIA also exhibits modifications in hair quality: indeed, it has been estimated that up to 75% of patients with pCIA still had hair thinning at 3 years postchemotherapy [30]. Notably, an association has recently been shown between a regulatory portion of the ABCB1 gene and pCIA in patients with BC treated with taxane-based chemotherapy [31]. Specific attention should be given after bone marrow transplant pCIA in pediatric oncology, where a frequency of ≈20% has been reported [32]. Targeted Therapies (TTs) TTs cover very different mechanisms of action and targets and their impact on alopecia mainly depends on the molecular target(s). TT-induced alopecia (TIA) confirms the necessary activation of several signaling pathways, such as SHH, EGFR and VEGF, in hair physiology [1]. Way different from CIA, TIAs exhibit specific profiles and evolutions [12]. Indeed, TIA does not appear suddenly and can regress through treatment course, with a whole range of hair modifications: texture, density, color and renewal rate [10,13]. According to a 2015 meta-analysis, TIA affects ≈15% of patients, with a molecule-dependent high variability (Table 4) [14]. Some molecules need to be outlined. Importantly, vismodegib (an SHH inhibitor) has the highest TIA rate, with an estimated frequency of ≈60%; furthermore, cases of persistent TIA have been reported [33,34]. Anti-EGFR molecules present a certain risk of non-scarring alopecia, sometimes completed with a scarring pattern as a consequence of the iatrogenic anti-EGFR facial papulo-pustulosis. Furthermore, it should be noted that the hair has a dry, brittle and curly appearance; in addition, patients show characteristic hypertrichosis and trichomegaly [13]. BRAF inhibitors lead to initial alopecia in ≈20% of patients, although hair regrowth may occur despite continued treatment [35]. Notably, while the use of trametinib (MEK inhibitor) leads to alopecia in about 13% of cases, dual BRAF/MEK inhibition with vemurafenib/cobimetinib or dabrafenib/trametinib leads to alopecia in 13% and 6% of cases, respectively [13,14]. Table 4. Main * targeted therapy classes used against solid tumors and their corresponding risk of alopecia (adapted from references [10,[12][13][14]). Endocrine Therapies (ETs) ET-induced alopecia (EIA) has mainly been described in the context of hormone receptor-positive BC; indeed, the high prevalence of BC and the ET prescription length led to a fine characterization of this toxicity. Molecularly and clinically, EIA emerges in a different way, since it appears progressively and exhibits an AGA-like pattern [8]. 
The EIA overall incidence-assessed via a large meta-analysis-is ≈5%, with significant variation depending on the therapeutic class(es) used (Table 5) [38]. Notably, total alopecia has been described in patients treated with tamoxifen [39]. Importantly, EIA can be a vector of non-compliance; indeed, it has been reported that 8% of patients stopped taking aromatase inhibitors because of EIA [40]. Notably, CDK4/6 inhibitors (used concomitantly with ET in BC) seem to potentiate EIA [41,42]. In the context of prostate cancer, none of the androgen deprivation therapies have been associated with EIA to date [9]. Table 5. Main endocrine therapies used against solid tumors and their corresponding frequency of alopecia (adapted from references [8][9][10]). Radiotherapy (RT) Radiation-induced alopecia (RIA) has to be considered in two situations: central nervous system primary tumors and brain metastases. Apart from stereotactic RT, the classical treatment of brain metastases is the "pan-encephalic" RT (PERT) protocol. Historical studies have determined threshold doses per fraction: 0.75-2 Gy for temporary depilation and 8-16 Gy for hair follicle sterilization. Mechanistically, RIA consists of an anagen effluvium. Numerous predictive factors of RIA have been characterized: doses (per fraction and total), the type of ionizing radiations (photons vs. protons), the surface and volume of irradiation, concomitant treatment, hair capital and the genetic constitution of the patient [7]. RIA appearance is relatively abrupt and occurs within 1-3 weeks after treatment initiation. It concerns 75-100% of PERT-treated patients (since the dose per fraction is >2 Gy) with a regrowth around 2-4 months post-protocol [43]. Persistent RIA (pRIA) is defined as the presence of alopecia over 6 months post-RT; it is estimated to occur in 60% of PERT-treated patients, notably through scarring alopecia. Different predictive irradiation thresholds have been proposed: from ≈21 Gy in children (treated with concomitant high-dose chemotherapy) to ≈43 Gy in adults [44,45]. Recently, a study suggested a 36 Gy threshold [7]. The evolution of RT protocols and new technologies will certainly lead to a revision of radiotoxicity data in the future. Immunotherapy Immune checkpoints function by maintaining immunological homeostasis through the inhibition of T-cell activation. Immune checkpoint inhibitor (ICI) actions lead to constitutive T-cell activation and anti-tumor activity; however, they are counterbalanced by a range of dysimmune toxicities grouped as immune-related adverse events (IRAEs) that can affect virtually any organ [13,46]. Skin toxicities, including maculopapular rash, eczema and vitiligo, are the most common IRAEs, affecting ≈40% of patients [46]. Notably, some of these skin conditions may lead to alopecia when they extend to the scalp. Currently, ICPI-induced alopecia (IIA) is estimated at 1-2%, with a molecule-dependent variation [47]. In a recent meta-analysis focused on melanoma treatment, the following incidences of alopecia were found: 1.7% for ipilimumab, 2% for nivolumab and 3.4% for pembrolizumab [48]. According to the mechanism involved, namely direct or indirect, IIA can be classified as primary (IIA-P) or secondary (IIA-S), respectively. IIA-P occurs when the reactivation of ICPIs directly leads to scalp dysimmunity. Mechanistically, IIA-P exhibits alopecia areata features; the hair follicle loses its immune privilege, with an intense perifollicular lymphocytic infiltration. 
Concerning onset kinetics, high variability has been described, from a few weeks to over a year [11]. Persistent IIA has already been described through case reports [49]. To date, treatment mainly involves class IV topical steroids or systemic immunosuppressants in recalcitrant cases [47]. Very interestingly, case reports proposed that IIA-P could be a predictive marker of nivolumab efficacy in melanoma [50]. Beyond IIA, changes in texture and hair repigmentation processes have also been described [46]. IIA-S should be observed in the context of IRAE, as one of its indirect consequences is alopecia. Therefore, it is necessary to rule out an underlying IIA-S before diagnosing an IIA-P. IIA-S management has to consider an etiologic therapy and to favor a multidisciplinary approach [51]. The most often identified IIA-S is through thyroid dysfunction; indeed, it is estimated that ≈10% of ICPI-treated patients will develop thyroid complications [46]. Emerging upon ICPI introduction, genuine autoimmune (e.g., systemic lupus erythematosus and scleroderma) and inflammatory (e.g., sarcoidosis-like and severe drug conditions) diseases have been reported, leading to secondary scarring alopecia [52]. Other Mechanisms A certain number of mechanisms potently leading to alopecia are worth mentioning. The first one is cancer-induced malnutrition, affecting 30-50% of patients [53]. Moreover, hair renewal requires a sufficient vitamin/mineral/energy supply; for instance, vitamin (e.g., vitamin D) and/or micronutrient (e.g., iron and zinc) deficiencies can lead to alopecia [54]. Apart from these, alopecia may unravel an underlying cancer by two main mechanisms. Firstly, scalp metastases can exhibit alopecia features, the so-called alopecia neoplastica [55]. Secondly, a paraneoplastic mechanism can be the first clinical sign of cancer; several case reports of paraneoplastic alopecia have been described, notably through paraneoplastic dermatosis [56,57]. As previously described, alopecia has been described following allogeneic BMT via graft-versus-host disease [58]. Alopecia Management To date, no curative treatment is indicated for ANIA; however, several studies have demonstrated some effectiveness of certain treatments. Global alopecia management (from prophylaxis to palliative approaches) is summarized in Table 6 [10][11][12][13]59]. Importantly, although it is clearly out of the scope of this review, psychological alterations related to alopecia and their consequences on quality of life should not be overlooked [7,20,25]. Table 6. Currently issued recommendations for the management of ANIA based on the literature (adapted from references [10][11][12][13]59]). Type of Alopecia Recommendation(s) Level of Evidence Global approaches ANIA Prevention As the number of patients cured or in remission is constantly growing and with the move toward a patient-centered medicine, supportive care concerning ANIA appears essential [25,59]. Initial management, before any anticancer treatment, is hair status evaluation. Indeed, a pre-existing pathological condition (such as vitamin/mineral deficiency or a more general disorder) is likely to increase ANIA, either by a complementary mechanism or in synergy. Although there is no standard test currently recommended, the following minimum biological test should be performed when facing alopecia without an obvious etiology: complete blood count, [TSH], [vitamin D], [iron] +/− hormonal assays. 
In the therapeutic arsenal, the cooling cap (CC)-with or without cooling mittens/socks in order to protect extremities-has been used empirically for several decades. CC is based on cooling the scalp during chemotherapy administration in order to provoke a local vasoconstriction and, thus, a lesser exposure of the hair follicles. Two techniques currently exist: CC filled with gel and kept cold (gCC, requiring a regular change of CC during the same cure) and electric CC (eCC, based on the circulation of a capillary cooling liquid through an automated, meanwhile more expensive, technique). CC was quickly considered relevant because of its non-invasiveness and its low cost, counterbalanced by a formal absence of proof of effectiveness and safety. Nevertheless, democratization of its use has been slowing down for a long time, considering the theoretical risk of scalp metastases, the multiplicity of care practices, protocols with a long IV infusion and patient acceptability [60]. Indeed, historical reports of a few cases of scalp metastases have been reported, leading to an initial precautionary principle. Since the 1980s, numerous exploratory studies have been performed, primarily in patients with BC; however, the lack of randomized clinical trials and the multiplicity of chemotherapy protocols and of ways to use CC led to limited data [61]. Meanwhile, two prospective clinical trials published in 2017 have shown the efficacy of eCC. The first one was a prospective multicenter cohort study of patients (106 using the DigniCap© eCC and 16 as controls) with localized BC and treated with adjuvant taxanebased chemotherapy and followed-up annually for 5 years. The primary outcome was whether the use of eCC was associated with ANIA prevention (defined as CTCAE grade ≤1 alopecia, equivalent to hair loss ≤50%): indeed, two-thirds of the patients in the eCC group had ANIA prevention versus 0% in the control group. Furthermore, the impact on QoL was also assessed: only one-quarter of patients felt "physically less attractive" in the eCC group versus >50% in the control group [62]. The second study evaluated the efficacy of the PaxMan© eCC on CIA in a multicenter randomized clinical trial, which enrolled 119 patients in the eCC group (versus 63 in the control group) with localized BC and treated with anthracyclines and/or taxanes. The primary endpoint was ANIA prevention (defined as CTCAE grade ≤1 alopecia) after four cycles of chemotherapy. Hair preservation was found in 50% (CI95% = 40.7-60.4%) of patients in the eCC group versus 0% (CI95% = 0-7.6%) in the control group [63]. In addition, a meta-analysis (pooling 24 studies concerning BC patients) did not find an increased risk of scalp metastases with CC [64]. Furthermore, meta-analyses have shown that eCC exhibits more efficacy compared to other techniques available for ANIA prevention [65,66]. Importantly, CC is currently the only treatment validated by the FDA for CIA prophylaxis and could therefore represent an effective and safe tool for some chemotherapy protocols. However, due to the current dynamics of the ambulatory shift, chemotherapies given through a long IV infusion as well as those dispensed orally are not eligible for the use of CC [67]. ANIA Treatments Topical minoxidil (TMX) is indicated in AGA in both genders. Recently, TMX-5% has been demonstrated to be efficient in EIA (n = 46), with 80% of patients showing moderate to significant improvement within 3-6 months [8]. 
Furthermore, TMX-2% has shown some efficacy after chemotherapy for hair regrowth (by reducing CIA from 137 days for placebo to 87 days), without an effect on CIA prevention [68]. In a cohort of patients with pRIA (n = 34), TMX-5% exhibited some effect, with 12% complete response and 38% partial response rates [7]. Spironolactone (owing to its anti-androgenic effect) tends to be prescribed off-label for female AGA when accompanied by signs of hyperandrogenism. Since AGA and EIA share the same pathophysiology, spironolactone could represent an interesting treatment of EIA. Concerning its safety, a recent study has shown that spironolactone did not interact with ET and did not increase the risk of BC [69]. Bimatoprost (a synthetic prostaglandin analogue) is classically indicated as an eye drop for intraocular hypertonia. Since 2008, it is also indicated for ciliary hypotrichosis treatment. Several controlled clinical studies have shown bimatoprost gel efficacy for chemotherapy-induced ciliary hypotrichosis, reporting a faster regrowth and an increased density of treated lashes [70]. Hair autotransplantation (HAT), a surgery technique, consists of harvesting follicles from a donor area and then transplanting them into a recipient area in order to increase hair density; in the context of ANIA, the main limit is an insufficient follicular density of the donor area. Recently, it has been demonstrated that HAT could be a relevant treatment for EIA in women: interestingly, a single session was sufficient to recover a satisfying hairline in 70% of patients. In addition, a 3-year follow-up confirmed the persistence of the graft as well as its safety [71]. This technique could be extended to other ANIAs when the donor area is sufficient and when alopecia is considered stable. Finally, a skin expansion (by placing prostheses under the scalp and followed by plastic surgery) could be a promising surgical technique for pRIA/pCIA [72]. Regarding the therapeutic perspectives, various preclinical and clinical studies have reported the efficacy of molecules such as cyclosporine, topical vasoconstrictors and antioxidants [73]. However, large-scale transposition in humans has not yet demonstrated clinically significant efficacy. The Clinicaltrials.gov database currently references 15 active clinical studies for ANIA treatment [74]. For CIA, studies include CC (n = 10), keratinocyte growth factor (n = 1) and LED lights (n = 1). One study is currently evaluating the effect of platelet-rich plasma in EIA and pCIA. For pCIA, ongoing studies include oral minoxidil (n = 1) and CC (n = 1). In addition to new synthetic entities, interesting perspectives are cell therapy and in vitro neogenesis of hair follicles from autologous cells [75]. Palliative Care and Supplementary Management Beyond the medical approach stricto sensu, it is important to remember that supportive care in oncology is essential through an early and multidisciplinary approach [25,59]. The use of wigs and turbans represents one of the current methods of dealing with established alopecia. Several types exist, using different hair-making technologies. Having an early haircut should be advised for limiting the putative change in self-perception induced by alopecia. For psychological concerns, referral to patient associations and psychological support should also be proposed, depending on the situation [4,5]. 
As the aesthetic aspect can play a major role in improving QoL, patients should be referred to nurses and socio-aestheticians specialized in these types of cares. The latter can propose different camouflage techniques, such as wigs/turbans, dermopigmentation and keratin powder. Dermopigmentation consists of tattooing micro-dots/-traits on the skin through a dermal injection of bioresorbable pigments, giving the illusion of an increased hair and eyebrow density. The treatment requires an average of 2-3 sessions for an optimal result, lasting 2-5 years. Keratin powder is based on natural keratin attaching to the remaining hair via static electricity, allowing a dramatic but temporary gain in hair density [20,25,59]. Conclusions Alopecia, apart from being a well-known side effect of cancer treatments, can reveal a diverse panel of etiologies linked to the disease itself. As such, (bio)clinical investigation should be performed when facing (a risk of) alopecia in the context of cancer in order to offer proper management. ANIA management frequently involves pluridisciplinary management, with the need to assess psychological repercussions and the consequences on the quality of life of patients. Each class of cancer treatment exhibits its own characteristics, and ANIA should not be seen as a homogeneous concern. On the preventive side, eCC has been positioned as a potent tool. Although current treatments exhibit mild efficacy, several therapeutic approaches are under development. Author Contributions: All authors participated equally to writing-original draft preparation and review and editing S.Q., A.G. and F.F. All authors have read and agreed to the published version of the manuscript.
2021-11-07T16:11:32.735Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "65dca75cf7036f14c4e75f1282ee945b8f8c7210", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/13/21/5556/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bf5b5636ab411ff67c28aabbf08c84baa86b5331", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245329641
pes2o/s2orc
v3-fos-license
Benchmarking Differentially Private Synthetic Data Generation Algorithms This work presents a systematic benchmark of differentially private synthetic data generation algorithms that can generate tabular data. Utility of the synthetic data is evaluated by measuring whether the synthetic data preserve the distribution of individual and pairs of attributes, pairwise correlation as well as on the accuracy of an ML classification model. In a comprehensive empirical evaluation we identify the top performing algorithms and those that consistently fail to beat baseline approaches. Introduction While there are many compelling reasons to share data about individuals, such sharing is often prevented due to privacy concerns. Differentially private synthetic data generation stands out as an appealing solution to this problem: it provides strong formal privacy guarantees, while producing a synthetic data set that "looks like" the real data from the perspective of an analyst. This problem has received considerable attention from the research community, with a wide variety of approaches available in the literature. [27,26,38,40,36,29,18,29,15,35,5,10,2,20,19,21,41,22,23,32,16,33,11]. Despite the variety of mechanisms available for this task, the community is lacking a systematic empirical study that compares a variety of mechanisms on different datasets, tasks, and privacy levels. Prior work in this space includes [29], which focuses on GAN-based algorithms in terms of the machine learning classification accuracy; [13], which focuses on GAN-based algorithms in terms of the utility of classification, clustering, aggregation queries and privacy protection; [9], which focuses on algorithms from the NIST 19 synthetic data challenge in terms of the marginal distribution, joint distribution and correlation; [8], which focuses on algorithms before 2016 in terms of the statistical utility; [4], which proposes a general framework for evaluating the quality of private synthetic data; and [37], which proposes a framework SDGym to benchmark the performance of synthetic data. However, none of these works both include a representative set of state-of-the-art algorithms and cover a representative set of metrics. Inspired by DPBench [17], we focus on benchmarking differentially private synthetic data generation algorithms selected from a specific set of inclusion criteria. Most DP synthetic data generation algorithms learn a model over the data from which synthetic data records are sampled. We categorize the algorithms included in our study into three broad classes: GAN-based methods learn a generative adversarial network (GAN) privately, mainly by adding noise to the gradient calculation; Marginal-based methods measure a subset of the low-order marginals and use them to fit a graphical model; and Workload-based algorithms iteratively improve their model to reduce approximation error on workload queries. We evaluate these mechanisms across different datasets and privacy budgets on whether the synthetic data can preserve the distribution of individual and pairs of attributes, pairwise correlation and on the accuracy of an ML classification model. Our experiments reveal a number of findings: 1. Many mechanisms, especially GAN-based mechanisms, often fail to preserve the most basic statistics of the data distribution -their one way marginals. Moreover, these mechanisms fail to beat simple baseline mechanisms on other more interesting metrics. 2. 
No single mechanism is best on every dataset and task, and privacy budget considered. However, marginal-based mechanisms consistently rank among the best. 3. Marginal-based methods expect discrete data, and proper discretization is essential to get good performance on numerical attributes. We found that using PrivTree [39] to discretize numerical attributes is far more effective than equal-width discretization. Methodology In this section, we describe the mechanisms included in this study (and the justification for inclusion), the tasks considered, the datasets evaluated, as well as any modifications necessary to run the mechanisms on our datasets. Mechanisms We consider five inclusion criteria for selecting the mechanisms for this benchmark study, enumerated below: 1. End-to-End DP: It is claimed to be an end-to-end differential private algorithm that takes a tabular dataset as input and generates a synthetic data of the same schema. cessible to the public (e.g. either available on GitHub or linked in the paper describing the work). 5. No Public Data: It requires no public data. Table 1 lists the mechanisms included in this benchmark, categorizing them by type. Included in this table is GretelRNN, which satisfied the inclusion criteria but is not shown in the experimental results because we found that even when the epsilon that it claims is over one million, it generates an empty dataset after one hour of the generation stage due to the high rejection rate of invalid samples. While we expect all the mechanisms to be able to take a dataset with mixed-type columns as input, some of them only accept categorical datasets or numerical datasets. For mechanisms that expect numerical data, we one-hot encode all categorical features. For mechanisms that expect categorical data, we discretize all numerical features. We considered two approaches for discretization: an equal-width binning strategy, and a strategy based on PrivTree [39]. We find that PrivTree binning was never worse than equi-width binning in most of the cases and lead to significant improvements for some metrics. Details omitted due to space. Three mechanisms, MWEM-PGM, FEM and RAP, also require a workload as input. For datasets that include a classification label (see next section), we set the workload to be all 2-and 3-way marginals that include the label as one of the attributes. For other datasets, we set the workload to be all 2-way marginals. For all algorithms except Kamino, we use default hyper-parameters. For Kamino, the search algorithm (Algorithm 6 from [15]) was not included in the available implementation, so we implemented a variant of it. Datasets We consider seven datasets with different characteristics, numbers of records, and column types. Datasets Car and Mushroom contain only categorical attributes; Scooter contains only numerical attributes; all other datasets contain a mix of attribute types. Most datasets have a classification label. All datasets are from the UCI machine learning repository [12] Metrics We consider four groups of metrics to measure the goodness-of-fit of the synthetic data generated by each algorithm. Each group might include more than one metric but with similar goals. 1 These metrics are inspired by SDGym [3]. For the first three metric groups, numerical attributes are discretized into 19 bins of equal-depth (based on the original data). Since the algorithms may generate synthetic data that lies outside of this range, an additional bin is added to each end of the range. 
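For illustration, the equal-depth discretization described above can be sketched as follows. This is a minimal example assuming a numerical pandas column: bin edges are taken from quantiles of the original data and one guard bin is added on each side to catch out-of-range synthetic values. Function and variable names are illustrative, not taken from the benchmark code.

```python
import numpy as np
import pandas as pd

def equal_depth_edges(original: pd.Series, n_bins: int = 19) -> np.ndarray:
    """Bin edges with (approximately) equal numbers of original records per bin."""
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)
    return np.unique(original.quantile(quantiles).to_numpy())

def discretize(values: pd.Series, edges: np.ndarray) -> np.ndarray:
    """Map values to bin indices; values below/above the original range fall
    into two extra guard bins (index 0 and index len(edges))."""
    return np.digitize(values.to_numpy(), edges)

# Example with hypothetical data
original = pd.Series(np.random.default_rng(0).normal(40, 12, size=10_000))
synthetic = pd.Series(np.random.default_rng(1).normal(45, 20, size=10_000))

edges = equal_depth_edges(original, n_bins=19)
orig_bins = discretize(original, edges)
synt_bins = discretize(synthetic, edges)
```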
1. Individual Attribute Distribution Similarity (Ind). This group of metrics measures the similarity of one-way marginals between the synthetic data and the original data. We use total variation distance (TVD) to measure the distance between two one-dimensional distributions and use 1-TVD as the score. We report the average score over all one-way marginals as the final score. 2. Pairwise Attribute Distribution Similarity (Pair). Similar to the Individual Attribute Distribution Similarity, this group of metrics measures the TVD for each two-way marginal, and we average across all attribute pairs. 3. Pairwise Correlation Similarity (Corr). We use Cramer's V with bias correction [6] to measure the correlation between two attributes and, following convention, discretize it into one of four correlation levels. 4. Machine Learning Classification Accuracy. This group measures whether a classifier trained on the synthetic data is as accurate as one trained on the original data; we report the f1 score of an XGBoost classifier trained on the synthetic data.
Findings
We use SDGym [37] as the platform for all the experiments. The privacy parameter varies within {0.1, 1.0, 10.0}. In this section, we briefly summarize our main findings.
F1: No algorithm dominates. We consider a mechanism "optimal" for a particular combination of dataset, epsilon, and metric if that mechanism achieves the best performance (averaged over trials) according to the metric. The optimal rate, shown in Fig. 1a, is the frequency at which a mechanism is optimal for a particular category of metric. Any algorithm with a non-zero optimal rate performs best on at least one combination of dataset, epsilon, and metric. Over half of the algorithms have a non-zero optimal rate.
F2: While no algorithm dominates, marginal-based approaches are highly ranked and MST, in particular, is the top-ranked algorithm across all metrics. To get a sense of the overall best performing algorithm, we rank the algorithms according to each metric and then average the rankings. In Fig. 1b, we report the average ranking, stratified by category of metric; we also report the average ranking across all metrics ("GT"). The overall average rank of MST is 1.56, indicating that it is frequently the best algorithm, which is also consistent with the results of Fig. 1a.
F3: Many mechanisms fail to accurately preserve the distributions of individual attributes (1-way marginals). Fig. 2a reports the average similarity (1-TVD) of individual attribute distributions for two of the datasets in our benchmark, Adult and Mushroom. Several algorithms have an average similarity of less than 0.75. PrivBayes has uneven performance, doing well on Mushroom and worse on Adult; we hypothesize this is due to how PrivBayes discretizes numerical attributes (Mushroom has no numerical attributes). To gain some intuition for how well algorithms preserve attribute distributions, we display some representative examples in Fig. 3 from the Adult dataset. Fig. 3a uses a stacked bar chart to compactly display the distribution of the relationship attribute. The first row is the distribution in the original data (ground truth) and the remaining rows are the distributions in the synthetic datasets produced by the algorithms, ordered by their similarity to the ground truth (1-TVD is reported to the right of each row). When 1-TVD is below 0.75, the distortion is visually apparent. Some algorithms (DP-GAN, DP-CTGAN) have highly skewed distributions; others appear uniform (FEM) even though the original data is non-uniform. Fig. 3b shows the distributions of the numerical attribute age (after it was discretized). In addition to looking at individual attribute distributions, we also evaluate pairwise attribute correlations. Fig.
2b reveals the extent to which correlations are accurately preserved. It reports the CoreAcc metric for datasets Adult and Mushroom. As a baseline for comparison, we include Independent, an algorithm that assumes all columns are statistically independent (uncorrelated) and generates synthetic data by sampling attribute values from distributions estimated from 1-way marginals perturbed with Laplace noise. We use a divergent color scheme to indicate whether it is above (orange) or below (blue) the baseline. The results in Fig. 2b give us two main findings. F4: In terms of preserving attribute correlations, Marginal-based algorithms consistently obtain the highest correlation accuracy. And, F5: Many algorithms fail to preserve correlations more accurately than independent, a simple baseline that generates uncorrelated data. To gain some intuition about correlations, we look more closely at the correlation accuracy on the Adult dataset. Fig. 4 shows correlation heatmaps for the original data (Ground Truth, center plot) and for synthetic datasets generated by the algorithms. In each heatmap, a cell corresponds to an attribute pair and darker cells indicate stronger correlation (the colors are discretized to the four correlation levels described earlier). The figure shows that marginalbased algorithms (top row) do fairly well (CorAcc=0.75 means 75% of the colored cells match the ground truth figure) though some correlations are not captured. Several algorithms (FEM, PATE-CTGAN, DP-CTGAN) have accuracy matching the baseline Independent. The correlation plots show why: the synthetic data generated by these algorithms has attributes that appear to be statistically independent (uncorrelated), matching the independent baseline. The remaining algorithms have accuracy that is lower than the baseline and it appears that this is due to those techniques introducing spurious correlation. F6: The synthetic data produced by marginal-based approaches MST and MWEM-PGM is of sufficient quality that it can be used to train an accurate classifier, nearly matching the performance of a classifier trained on the original data. In Fig. 2c, we report how well the synthetic data preserves the ability to train a classifier. It shows the f1 score of an XGBoost classifier trained on the synthetic data. On Mushroom, the classifiers trained on the synthetic data from MST and MWEM-PGM achieve nearly perfect accuracy; on Adult, MWEM-PGM achieves the highest f1 score of 0.74, which approaches the f1 on the original data of 0.86. The relatively strong performance of MWEM-PGM may reflect the fact that its strategy is tuned to support classifier learning by favoring marginals that include the class label. F7: The synthetic data produced by GAN-based approaches yields classifers that are generally no more accurate than a simple majority classifier. In Fig. 2c, we include baseline algorithm Independent. Since this algorithm models each attribute independently, a classifier trained on its synthetic data can be no more accurate than a classifier that always predicts the majority label. In this figure, we again use a divergent color scheme to compare performance to this basline and we see the GAN-based approaches often have an f1 score below the baseline. Conclusion We presented a systematic benchmark study of differentially private synthetic data generation algorithms that can generate tabular data. 
We considered a variety of algorithms including GAN-based, Marginal-based and Workload-based methods and evaluated their utility in terms of how well they preserve low dimensional statistics, pairwise correlations and ML classification accuracy. We found that Marginalbased methods consistently outperformed other methods, and GAN-based methods were unable to preserve the 1dimensional statistics of tabular data. Our research motivates future research directions that include developing better GAN methods for tabular data, methods for pre-processing categorical/numeric data types, and identifying methods to choose the best synthetic data algorithms given a dataset.
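To make the marginal-similarity and correlation metrics used in this benchmark concrete, the sketch below computes the 1-TVD score for a single attribute and a bias-corrected Cramer's V for an attribute pair. It is a minimal example over already-discretized columns; the exact implementation details of the benchmark (e.g., handling of empty bins) are assumptions, and the function names are illustrative.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def one_way_similarity(real: pd.Series, synth: pd.Series) -> float:
    """1 - total variation distance between the empirical distributions
    of a single (discretized/categorical) attribute."""
    categories = sorted(set(real.unique()) | set(synth.unique()))
    p = real.value_counts(normalize=True).reindex(categories, fill_value=0.0)
    q = synth.value_counts(normalize=True).reindex(categories, fill_value=0.0)
    tvd = 0.5 * np.abs(p.to_numpy() - q.to_numpy()).sum()
    return 1.0 - tvd

def cramers_v_corrected(x: pd.Series, y: pd.Series) -> float:
    """Cramer's V with bias correction for two categorical attributes."""
    table = pd.crosstab(x, y).to_numpy()
    n = table.sum()
    r, c = table.shape
    chi2 = chi2_contingency(table, correction=False)[0]
    phi2 = chi2 / n
    phi2_corr = max(0.0, phi2 - (r - 1) * (c - 1) / (n - 1))
    r_corr = r - (r - 1) ** 2 / (n - 1)
    c_corr = c - (c - 1) ** 2 / (n - 1)
    denom = min(r_corr - 1, c_corr - 1)
    return float(np.sqrt(phi2_corr / denom)) if denom > 0 else 0.0
```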
2021-12-20T02:15:10.169Z
2021-12-16T00:00:00.000
{ "year": 2021, "sha1": "12f697f4043a4cc8c7d1b81a1cd88770b7066e0d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "12f697f4043a4cc8c7d1b81a1cd88770b7066e0d", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
225096620
pes2o/s2orc
v3-fos-license
M-BiRank: co-ranking developers and projects using multiple developer-project interactions in open source software community Social collaborative coding is a popular trend in software development, and such platforms as GitHub provide rich social and technical functionalities for developers to collaborate on open source projects through multiple interactions. Developers often follow popular developers and projects for learning, technical selection, and collaboration. Thus, identifying popular developers and projects is very meaningful. In this paper, we propose a multiplex bipartite network ranking model, M-BiRank, to co-rank developers and projects using multiple developer-project interactions. Firstly, multiple developer-project interactions such as commit, issue, and watch are extracted and a multiplex developer-project bipartite network is constructed. Secondly, a random layer is selected from this multiplex bipartite network and initial ranking scores are calculated for developers and projects using BiRank. Finally, initial ranking scores diffuse to other layers and mutual reinforcement is taken into consideration to iteratively calculate ranking scores of developers and projects in different layers. Experiments on real-world GitHub dataset show that M-BiRank outperforms degree centrality, traditional single layer ranking methods, and multiplex ranking method. Introduction Open source software community is now a main driven force of innovations, and plenty of software developers collaborate on millions of open source software projects, among which are many popular software projects that drive the innovations of different fields [1,2]. For example, deep learning frameworks such as TensorFlow, PyTorch, and MXNet contributed by famous companies simplify the building of deep learning models, which to some extent speed up the innovations in the field of artificial intelligence in both academia and industry [3,4]. In open source software community, developers from different areas usually take the social collaborative coding paradigm and participate in different portions of a common project using the social and technical functionalities provided by the community [5,6]. Taking GitHub as an example, developers from different areas and with different technical backgrounds can collaborate on a project by committing codes and commenting on issues, star/fork a project for improving technical skills or technical selections, and follow other professional developers for keeping pace with new trends. Much like the role of opinion leaders in social networks, influential developers and projects drive the technical trends and the prosperity of open source community. Thus, identifying influential developers and projects will be of great significance. Existing work on influence analysis for open source software community mainly focused on applying traditional unipartite single layer graph ranking methods [7,8], including PageRank [9] and HITS [10], although many new graph ranking methods for more complex network structures, such as bipartite network [11,12] and multiplex network [13,14], have been proposed. On the other hand, existing graph ranking methods have not merged bipartite network and multiplex network as a single network model, which is necessary for our case to model multiple interactions between developers and projects. Potential applications of influence analysis of open source software community would include service recommendations [15][16][17][18][19] and risk assessment [20]. 
In this paper, we focus on modeling multiple interactions between developers and projects as a multiplex bipartite network and propose a new ranking method based on it in an iterative and mutually enhanced way. The main contributions of this research are many folds: • We propose a multiplex bipartite network model to represent multiple interactions between developers and projects. • We propose a new ranking model called M-BiRank on multiplex bipartite network which takes into account the mutual reinforcement between different types of nodes as well as different layers. • We apply the proposed model to real-world GitHub dataset, showing that our model outperforms baseline ranking models. The remainder of the paper is organized as follows. Section 2 gives a brief introduction to related works on ranking models from the perspective of network and its applications in software engineering. In Section 3, details about the proposed M-BiRank model are illustrated. Then, the experiment results and discussions are given in Section 4. Finally, we briefly summarize our work and explain future directions in Section 5. Related work Identifying influential nodes in social networks has been a hot topic for decades. Existing works mainly focus on either structural properties or diffusion dynamics. Plenty of structure-based metrics and random walk-based methods have been proposed. Structure-based metrics usually base itself on some intuition for centrality from either local or global views. Degree is the most common local structure-based centrality metrics. Based on degree, Chen et al. [21] proposed a semi-local centrality metric, considering both the nearest and the next nearest neighbors. Chen et al. [22] further considered the negative impact of local clustering on information diffusion in networks and proposed ClusterRank. In addition to extending degree, several local structure-based centrality metrics are originated from H-index, which is originally used to measure the citation (2020) 2020: 215 Page 3 of 18 impact of a scholar or a journal [23]. Zhao et al. [24] first extended H-index concept to networks and defined the h-Degree metrics for weighted networks. Liu et al. [25] combined the H-index of both node itself and its neighbors and proposed a local H-index centrality. Lü et al. [26] revealed the relation among degree, H-index, and coreness and introduced a family of H-indices. Local structure-based centrality metrics benefit from low computational complexity at the cost of reducing effectiveness. While global structure-based centrality metrics can better identify influential nodes from a global view of the whole network. Earlier researches in sociology introduced several global structure-based centrality metrics, including closeness centrality [27], betweenness centrality [28], and eigenvector centrality [29]. Recently, researches introduced eigenvector centrality to more complex network structures. Wang et al. [30,31] extended eigenvector centrality to multilayer networks under a framework of tensor decomposition. Random walk-based methods apply resource diffusion dynamic process in networks and measure node's influence according to the final resource the node obtains at stationary state of the dynamic process. Typical random walk-based ranking methods include PageRank [9] and HITS [10]. To solve the problem of dangling nodes of PageRank, Lü et al. [32] added a ground node to the original network, making the original network connected, and proposed the parameter-free LeaderRank method. 
Halu et al. [13] extended PageRank to multiplex networks and proposed Multiplex PageRank, which includes four kinds of inter-layer enhancement mechanism [33]. To address the ranking problem on bipartite networks, He et al. [11] proposed the BiRank method. Instead of modeling pairwise interactions, higher-order network models have recently been proposed and applied to ranking nodes with group interactions. Treating scientific collaboration as a group interaction, Liang et al. [34] modeled it as a hypergraph and proposed HHGBiRank. From another view of the higher-order structure of networks, namely motifs, Zhao et al. [35] proposed motif-based PageRank. In addition to identifying influential nodes in general-purpose social networks, similar needs have emerged for the open source software community, a special kind of social network. Xuan et al. [8] constructed several social networks based on the communications between developers in Apache and applied degree, PageRank, and HITS for developer ranking. Hu et al. [7] studied the problem of influence identification of developers in GitHub and proposed a Following-Star-Fork-Activity-based approach. Joblin et al. [5] employed several activity counts, centrality metrics, and network structural properties to distinguish core and peripheral developers.
Method
In this section, we present a Multiplex Bipartite Ranking method, called M-BiRank, for co-ranking developers and projects in the open source software community. As shown in Fig. 1, the proposed M-BiRank consists of three parts and incorporates two basic assumptions that address the issues raised in Section 1 (Fig. 1: Step 1, ranking iteration in Layer A; Step 2, ranking scores in Layer A diffuse to Layer B; Step 3, ranking iteration in Layer B). We start by giving the definition of the multiplex bipartite network and introducing the notations. Then, thorough explanations and mathematical formulations are given for the two basic assumptions, that is, mutual reinforcement between different types of entities and between different layers of the network. Finally, we introduce the overall algorithm and its time complexity analysis. In this paper, we model multiple interactions between developers and projects as a developer-project multiplex bipartite network. The notations we use throughout the article are summarized in Table 1.
Mutual reinforcement between developers and projects
Most ranking methods in networks adopt the same intuition as PageRank and HITS, namely that an influential node should be linked by many other influential nodes, which is also applicable in the case of a developer-project bipartite network. For example, an elite developer usually participates in popular projects. In the open source software community, it is common practice to estimate the influence of a developer by how popular the projects she/he participates in are and how much she/he contributes to these popular projects. Conversely, a project with influential developers or organizations as major contributors always attracts a large amount of attention. Taking TensorFlow as an example, it quickly gained thousands of stars upon its first release on GitHub because it is backed by Google. This intuition forms our first assumption: a developer (project) should be ranked high if it is connected to high-ranked projects (developers) in a certain layer A. In its basic form, the ranking score of a developer in layer A is a weighted sum of the scores of the projects it is connected to in that layer, and vice versa, with the weights given by the corresponding entries of the layer's weight matrix W^A. In order to employ a prior belief on nodes' importance and provide better ranking results, we also adopt query vectors and symmetric normalization, as in BiRank.
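The following sketch illustrates the three steps outlined in Fig. 1 in Python. It is a minimal, assumption-laden implementation: the single-layer update follows a BiRank-style form with symmetric normalization, query vectors, and damping parameters γ and λ, and the way the layer-A scores are combined with the original priors to build the layer-B query vectors (simple averaging) is an illustrative assumption; the exact update rules appear as numbered equations in the original paper. All function and variable names are illustrative.

```python
import numpy as np
import scipy.sparse as sp

def normalize(W: sp.csr_matrix) -> sp.csr_matrix:
    """Symmetric normalization S = D_u^{-1/2} W D_p^{-1/2} (developers x projects)."""
    du = np.asarray(W.sum(axis=1)).ravel()
    dp = np.asarray(W.sum(axis=0)).ravel()
    inv_du = sp.diags(1.0 / np.sqrt(np.maximum(du, 1e-12)))
    inv_dp = sp.diags(1.0 / np.sqrt(np.maximum(dp, 1e-12)))
    return inv_du @ W @ inv_dp

def birank_layer(W, u0, p0, gamma=0.85, lam=0.85, tol=1e-8, max_iter=200):
    """Single-layer BiRank-style iteration with query vectors u0 (developers), p0 (projects)."""
    S = normalize(W)
    u, p = u0.copy(), p0.copy()
    for _ in range(max_iter):
        u_new = gamma * (S @ p) + (1 - gamma) * u0
        p_new = lam * (S.T @ u_new) + (1 - lam) * p0
        converged = np.abs(u_new - u).sum() + np.abs(p_new - p).sum() < tol
        u, p = u_new, p_new
        if converged:
            break
    return u, p

def m_birank(W_A, W_B, u0, p0, gamma=0.85, lam=0.85):
    """Step 1: rank on layer A; Step 2: diffuse layer-A scores into layer-B query
    vectors (averaging is an illustrative choice); Step 3: rank on layer B."""
    u_A, p_A = birank_layer(W_A, u0, p0, gamma, lam)
    u0_B = (u0 + u_A) / 2.0
    p0_B = (p0 + p_A) / 2.0
    return birank_layer(W_B, u0_B / u0_B.sum(), p0_B / p0_B.sum(), gamma, lam)
```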
The prior belief on nodes' importance and the rankings derived from the network structure are balanced by the two parameters γ and λ, which yields the final formulation of the mutual reinforcement between developers and projects within a layer.
Mutual reinforcement between different layers
Besides considering the mutual reinforcement between developers and projects in each single layer, we also take into account the mutual reinforcement between different layers. From our experience as open source software developers, we can reasonably assume that different interactions between developers and projects reflect different aspects of influence, and that only a composition of all these aspects reflects the comprehensive influence of developers and projects. For example, committing code to a project indicates a developer's coding skill, while commenting on the issues of a project may show a developer's design or bug-fixing skill. The influence of a developer should therefore be measured by a summarization of both coding and design skills. To implement mutual reinforcement between different layers, we incorporate the ranking scores of developers and projects from the first layer as an enhancement of the query vectors of the second layer. The mathematical formulations are given in Eqs. (5) and (6), and, to be clearer, we transform Eqs. (5) and (6) into their equivalent matrix forms in Eqs. (7) and (8).
Overall algorithm
By combining both the mutual reinforcement between developers and projects in each single layer and that between different layers, we finally propose the M-BiRank method to co-rank developers and projects in the open source software community; the overall algorithm is shown in Algorithm 1. Algorithm 1 takes as input the weight matrices W^A and W^B, the query vectors u^0 and p^0, and the hyper-parameters γ and λ, and outputs the ranking vectors u and p. It first iterates the ranking updates on layer A until convergence, then initializes p^B and u^B using p^A and u^A, respectively, iterates the ranking updates on layer B while the stopping criterion is not met, and finally returns u and p.
Time complexity analysis
The overall time complexity of M-BiRank is the summation of each layer's time complexity; for each layer, the cost of one iteration is dominated by the update equations, whose sparse matrix-vector products scale with the number of edges in that layer.
Experiment
In this section, the performance of the M-BiRank model is evaluated against the GHTorrent dataset [36].
Datasets
GHTorrent [36] monitors the GitHub public event timeline and retrieves information about developers, projects, and the interactions between them from these events [37]. We choose the GHTorrent dataset as of November 1, 2018, and extract the relationships between developers and projects, both of which mainly belong to the PHP community. The data preprocessing steps are as follows: (1) choose issues and commits that belong to PHP projects, and (2) keep developers and projects that appear in both issues and commits; the statistics of the resulting dataset are summarized in Table 2.
Evaluation metrics
In order to evaluate and compare the performance of M-BiRank and the baseline methods, both correlation analysis and the SIR model are adopted. Correlation analysis focuses on comparing predictions against the ground truth, and Pearson's correlation coefficient (PCC) [38] is chosen. In our experiment, the watch number of projects and the follower number of developers are set as the ground truth for the rankings of projects and developers, respectively. PCC reflects the degree of linear correlation between two variables and is defined as
PCC(x, y) = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / ( √(Σ_{i=1}^{n} (x_i − x̄)²) · √(Σ_{i=1}^{n} (y_i − ȳ)²) ),
where n represents the number of elements, x_i and y_i represent the ith elements of samples x and y, respectively, x̄ and ȳ are the sample means, and the value range of PCC is [−1, 1].
However, the ground truth in correlation analysis is essentially a degree-like quantity in the network, which is a rough metric for evaluating the influence of developers or projects. To rank more precisely, dynamic models are needed to simulate the influence diffusion process [39]. The SIR model [40] is a classical epidemic model and is often used to evaluate the information spreading ability of a node in social networks. Generally, an influential user with a higher ranking score will spread his/her opinions to more developers. The transmission process of the SIR model is shown in Fig. 2, where S (Susceptible), I (Infected), and R (Removed) denote the susceptible, infected, and recovered nodes. At the initial step of the transmission process [43], several infected nodes are set, and the transmission is then iteratively repeated until no new nodes are infected [41]. At each step, infected nodes infect their susceptible neighbors with probability α, and infected nodes recover to the removed status with probability β. The SIR model is therefore suitable for evaluating the information spreading ability of a node. By setting the nodes with the highest ranking scores of different ranking methods as the initial infected nodes and comparing the final number of affected nodes (both infected and removed nodes), the effectiveness of different ranking methods can be compared (a small simulation sketch is given after the baseline descriptions below).

Baseline methods

We compare M-BiRank with several baseline methods:

Degree [42]. The degrees of developers and projects in the different layers of the multiplex developer-project bipartite network are calculated and averaged.

PageRank [9]. PageRank ranks nodes by iteratively propagating scores on the network and is usually suited to a single-layer monopartite network. In this experiment, we apply it to the multiplex developer-project bipartite network with two different setups. PageRank-Avg ignores node types and applies the PageRank algorithm directly to the different layers of the multiplex developer-project bipartite network; the final ranking score of a node is the average over the layers. PageRank-Add merges the different layers of the multiplex developer-project bipartite network into a single-layer developer-project bipartite network, using the average edge weights across layers as the edge weights of this single-layer network, and then applies the PageRank algorithm to it, ignoring node types. Finally, both PageRank-Avg and PageRank-Add rank developers and projects separately according to their final ranking scores. The hyperparameter is set to 0.85.

BiRank [11]. BiRank is a propagation-based ranking method for bipartite networks that adopts a normalization strategy in the iterative process. BiRank-Avg applies the BiRank algorithm to the different layers of the multiplex developer-project bipartite network separately and averages the ranking scores across layers as the final ranking scores. BiRank-Add first merges the different layers of the multiplex developer-project bipartite network into a single-layer developer-project bipartite network, with the average of the edge weights across layers as edge weights. Both hyperparameters are set to 0.85.

Multiplex PageRank [13]. Multiplex PageRank considers the impact of the centrality of a node in one layer on its centrality in another layer and introduces the node centralities of the preceding layer into the current layer in four ways.
In this experiment, we choose the Additive Multiplex PageRank with two different setups: MPR-Commit uses the commit layer as the first layer, and MPR-Issue uses the issue layer as the first layer. The hyperparameter is set to 0.85.

M-BiRank. M-BiRank is the method we propose for ranking nodes in a multiplex bipartite network. As with Multiplex PageRank, M-BiRank-Commit uses the commit layer as the first layer and M-BiRank-Issue uses the issue layer as the first layer. The hyperparameters γ and λ are set to 0.85. Each element of the query vector u_0 (p_0) for the corresponding node (developer/project) is set to the sum of the weights of all its edges divided by the total sum of edge weights in the developer-project bipartite network of the first layer.

Results

We compare the experimental results of M-BiRank with the baseline methods using both correlation analysis and SIR modeling. The hyperparameters γ and λ of M-BiRank are both set to 0.85.

Correlation analysis

In the correlation analysis, the follower count of developers and the watch count of projects are set as the ground truth for ranking developers and projects, respectively. Pearson's correlation coefficient (PCC) is calculated between the rankings produced by M-BiRank and the baseline methods and the ground-truth rankings. The results are shown in Table 3. From the results of the correlation analysis, we have the following observations. (1) The proposed M-BiRank model outperforms all the baseline methods for both developer ranking and project ranking. This indicates that it is necessary to model the multiple interactions between developers and projects as a multiplex bipartite network, which not only considers the mutual enhancement between developers and projects but also takes into account the mutual enhancement between different interactions. This agrees closely with real-world practice. For example, a project in which elite developers participate is usually a popular project, and a developer who participates in popular projects is often an elite developer. Developers have different ways of taking part in a project, such as committing code or solving issues, and these different ways are tightly coupled. (2) Comparing the different settings of M-BiRank itself, M-BiRank-Commit performs better than M-BiRank-Issue in most cases, which means it is better to take the commit layer of the multiplex developer-project bipartite network as the initial layer for the M-BiRank model. This also agrees with real-world practice. Issues are a helper function in social collaborative coding, providing a discussion board for software developers about bugs and designs, while commits are the main function of software development for developers; thus, the commit layer is more important, and M-BiRank-Commit performs better at identifying influential developers and projects. In Section 4.4.2, we therefore compare only M-BiRank-Commit with the benchmark methods.

SIR simulation

In this section, to evaluate the information spreading ability of the top 100 developers ranked by the different methods, the SIR model is run on the commit layer of the developer-project multiplex bipartite network. M-BiRank is compared against each baseline method separately. For each comparison, the initial infected nodes (developers) for the SIR model are the top 100 developers ranked by each method, excluding those ranked in the top 100 by both methods. During the SIR process, an infected node infects each of its neighbors simultaneously with probability α = 0.005 and recovers to the removed state with probability β = 0.006.
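For illustration, the spreading process just described can be simulated with a minimal discrete-time SIR sketch such as the following; the adjacency structure and the seed set are placeholders, and this is not the authors' implementation.

import random

def sir_spread(adj, seeds, alpha=0.005, beta=0.006, max_steps=300):
    # adj: dict mapping each node to a list of its neighbours
    # seeds: initially infected nodes, e.g. the top-100 developers of one ranking method
    infected, removed = set(seeds), set()
    for _ in range(max_steps):
        newly_infected = set()
        for node in infected:
            for nb in adj.get(node, []):
                # an infected node infects each susceptible neighbour with probability alpha
                if nb not in infected and nb not in removed and random.random() < alpha:
                    newly_infected.add(nb)
        # each infected node recovers to the removed state with probability beta
        newly_removed = {node for node in infected if random.random() < beta}
        infected = (infected | newly_infected) - newly_removed
        removed |= newly_removed
        if not newly_infected and not infected:
            break
    # number of affected nodes (ever infected), the quantity compared between methods
    return len(infected | removed)

Averaging this count over repeated runs for the top-ranked nodes of each method gives the kind of comparison reported below.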
For each SIR simulation, we run at most 300 iterations and repeat the simulation 10 times to average the value at each step. The results are shown in Figs. 3, 4, 5, 6, and 7, and several significant observations emerge. (1) The comparison between the different settings of M-BiRank itself in Fig. 3 indicates that M-BiRank-Commit performs better, which is in perfect accordance with the result found in the correlation analysis in Section 4.4.1. Thus, only M-BiRank-Commit is compared against the baseline methods in the remainder of this section. (2) M-BiRank outperforms all the baseline methods in identifying influential developers, which means that node types and the mutual reinforcement among different interactions play important roles, and that a multiplex bipartite network can model multiple interactions between two different types of nodes more precisely. In particular, the performance difference between M-BiRank and BiRank is larger than that between M-BiRank and Multiplex PageRank (MPR), from which we can conclude that considering the mutual reinforcement among different interactions is more important than distinguishing node types. (3) The final number of infected projects is larger than that of developers for both M-BiRank and all the baseline methods. According to research on epidemics on networks, information spreads faster and more broadly in networks with a shorter average path length, and from Table 2 we can see that the average degree of projects is larger than that of developers.

Case study

In addition to the correlation analysis and the SIR simulation, we conduct a detailed case study to show the effectiveness of our model in identifying influential developers and projects. The top 20 developers and projects ranked by our model M-BiRank are listed in Tables 4 and 5, respectively, together with their ranks under the baseline methods. Table 4 indicates that the baseline methods and M-BiRank rank the first six developers similarly, while some influential PHP developers ranked in the top 20 by M-BiRank are either not identified or ranked lower by the baseline methods. For example, Fabien Potencier (GitHub ID: fabpot) and Taylor Otwell (GitHub ID: taylorotwell), the most active contributors of the two most popular PHP frameworks, Symfony and Laravel, are not identified as influential developers by some of the baseline methods. On GitHub as of June 1, 2020, Symfony and Laravel have 23.3k and 59.4k stars, respectively, and Fabien Potencier and Taylor Otwell have 10.4k and 18.6k followers, respectively. Taylor Otwell has more followers than Fabien Potencier, and Laravel is more popular than Symfony, but Fabien Potencier is ranked higher than Taylor Otwell because Laravel is built on some popular components of Symfony. Thus, we can conclude that Fabien Potencier is more influential than Taylor Otwell. As for projects, Table 5 shows that both M-BiRank and the baseline methods give popular PHP frameworks high scores, but some important PHP components identified by M-BiRank are not identified as influential projects, or are ranked lower, by the baseline methods. For example, illuminate/database, a popular ORM library, is given a high score by M-BiRank but a lower score by BiRank-Add and PageRank-Add, and is never identified as an influential project by BiRank-Avg, PageRank-Avg, or MPR-Commit. In modern web development, the ORM is quite critical because it is responsible for accessing the database.
Experimental settings discussion

In the experiment, several key settings affect the performance of M-BiRank; a brief discussion of these settings follows. First, we study the impact of edge weights in the ranking process. Both unweighted and weighted developer-project multiplex bipartite networks are constructed; in the weighted case, the numbers of interactions are summed to form the edge weights. Correlation analysis on the top k developers and projects is then applied, and the results are shown in Fig. 8, from which it can be concluded that the weighted developer-project multiplex bipartite network performs better than the unweighted one and that edge weights play an important role in identifying influential developers and projects. Next, the experimental settings for the SIR simulation are discussed. It can be seen from Fig. 9 that the more initial infected nodes are set, the more nodes are ultimately infected. It is also apparent that setting the same number of top-k projects as initial infected nodes results in more final infected nodes. Finally, the hyperparameters γ and λ of the proposed M-BiRank model are analyzed. For simplicity, we consider the case where γ and λ are equal. It can be concluded from Fig. 10 that both the prior belief about developers' (projects') importance and the rankings derived from the network structure contribute to the final rankings of developers (projects), and their contributions are approximately equal.

Conclusions

In this work, we studied the problem of identifying influential developers and projects in the open source software community. We modeled the multiple interactions between developers and projects as a multiplex bipartite network and proposed an iterative refinement ranking method, M-BiRank, which incorporates the mutual reinforcement between developers and projects as well as between multiple developer-project interactions. The proposed M-BiRank was evaluated against four baseline methods on a real-world GitHub dataset. Extensive experimental analysis and a case study show that M-BiRank significantly outperforms the baseline methods in both correlation analysis and SIR simulation. The general idea behind M-BiRank is to model multiple kinds of entities and interactions in the open source software community as a single network and to incorporate mutual reinforcement between different kinds of entities, as well as between different types of interactions, during ranking. There are other entities in the open source software community besides developers and projects, such as blogs and organizations, and plenty of other interactions between them, such as user-user following and project-project dependency. In future work, more entities and interactions could be introduced and modeled as a heterogeneous information network, and the mutual reinforcement in ranking could be generalized using meta-paths.
DESIGNING A CURRICULUM ABOUT ELECTRON MICROSCOPE BASED ON DISCIPLINARY METHOD FOR GRADUATE STUDENTS

Nanotechnology has found applications in all fields of human life, and many research institutes and universities now teach it in their departments. The aim of this research was to design the educational objectives and content for the identification and characterization of nanomaterials by electron microscope for undergraduate students. The research is applied and descriptive. The research tool was a researcher-made objective-content table. Ten reference books and electronic sources were selected as the sample for designing the syllabus. The objectives and content were developed according to the disciplinary method. Finally, the researchers propose 4 objectives and 10 content units for learning about the electron microscope at the undergraduate level. The main topics are: an introduction to nanomaterials, a review of optics and lenses, an overview of the optical microscope, and an introduction to the electron microscope.

There is an urgent need to better understand the properties of materials at the nanoscale level. On the technological front, there is a strong demand to develop new techniques to fabricate and measure the properties of nanomaterials and related devices. Fortunately, significant advances have been made over the last decade on both fronts. It has been demonstrated that materials at the nanoscale have unique physical and chemical properties compared with their bulk counterparts, and these properties are highly promising for a variety of technological applications (Zhang, 2009). In nanotechnology, analysis can reach the level of manipulating atoms, molecules and the chemical bonds between them (Funmilayo & Abiodun-Solanke, 2014). Nanotechnology influences almost every facet of everyday life, from security to medicine. The concept of nanotechnology is that when one goes down to the bottom of things, one can discover the unlimited possibilities and potential of the basic particle. Nanotechnology has wide industrial and clinical applications in medicine, chemistry and the environment, energy, information and communication, and heavy industry (Kanaparthy & Kanaparthy, 2011).

The smallest measurable size in an analysis depends on several factors: the wavelength of the light or electrons used as the measurement tool, the resolving power of the human eye, and diffraction in the instrument. Visible light, to which our eyes are sensitive, ranges between 400 and 700 nanometers (one nanometer is one billionth of a meter). When observing materials with visible light, only particles larger than 400 nm can in principle be observed; moreover, considering the diffraction of light in the human eye and the size of the retinal cells, the smallest objects detectable with the naked eye are about 75 micrometers (Waldman, 2002). In a first attempt, scientists constructed optical microscopes; a common microscope magnifies objects 1000 times. This might suggest that we could observe 75 nm particles, but we can never observe particles with dimensions smaller than the wavelength of visible light. Therefore, the minimum particle size observable with the best optical microscope is, in theory, 0.4 micrometers (400 nm). In other words, the size of the smallest features that we can distinguish under the microscope is on the order of the wavelength of the light used. This means that we cannot observe things smaller than a few hundred nanometers using our eyes and visible light. Because of this constraint, we have to use the electron microscope (EM).
In 1930, shortly after the discovery of the particle-wave duality of electrons (Wheeler & Zurek, 2014), people realized that the wave characteristics of electrons could be used to build a microscope that could surpass the resolution limit of optical microscopes. In just a few years, the first generation of transmission electron microscopes became commercially available. However, it was not until after World War II that TEMs began to be widely used for material characterization by metallurgists and material scientists. High-resolution TEM is a technique developed since the 1970s to image the atomic structure of materials (Tanaka, Usukura, Kusunoki, Saito, Sasaki, Tanji, & Arai, 2013).

An electron microscope has several main components: an electron-optical column, a vacuum system, power supplies for the lenses that focus and deflect the beam and for the high voltage, and control software. Electron microscopes work by using an electron beam instead of visible light and an electron detector instead of our eyes. An electron beam allows us to see at very small scales because electrons can also behave like light: they have the properties of a wave, with a wavelength much smaller than that of visible light (a few trillionths of a meter). With this wavelength we can distinguish features down to a fraction of a nanometer (Williams & Carter, 1996). An electron microscope uses an electron beam to produce the image of the object, and magnification is obtained by electromagnetic fields; this is unlike light or optical microscopes, in which light waves are used to produce the image and magnification is obtained by a system of optical lenses. It has already been discussed that the smaller the wavelength, the greater the resolving power. The wavelength of green light is 0.55 µm; in other words, it is 110,000 times longer than that of an electron beam with a wavelength of 0.05 Å. That is why, despite its smaller numerical aperture, an electron microscope can resolve objects as small as 0.001 µm (or 10 Å), compared with 0.2 µm for a light microscope. Thus, the resolving power of an electron microscope is 200 times greater than that of a light microscope. It produces useful magnification up to ×400,000, compared with ×2000 for a light microscope; the useful magnification is therefore 200 times greater in an electron microscope than in a light microscope (Egerton, 2006).

Let us explore the different types of electron microscopes, how they work and some of their applications. In a scanning electron microscope, or SEM, a beam of electrons scans the surface of a sample. The electrons interact with the material in a way that triggers the emission of secondary electrons. These secondary electrons are captured by a detector, which forms an image of the surface of the sample. The direction of emission of the secondary electrons depends on the orientation of the features of the surface. Therefore, the image formed will reflect the characteristic features of the region of the surface that was exposed to the electron beam (Seiler, 1983). In a transmission electron microscope, or TEM, the electron beam passes through the sample, and the transmitted electrons hit a fluorescent screen that forms the image. You can better understand this process by imagining how a movie projector works. In a projector, you have a film that carries the negative of the image to be projected. The projector shines white light on the negative, and the transmitted light forms the image contained in the negative (Harris, 2015).
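As a back-of-the-envelope check of the resolution figures quoted above, the short calculation below (illustrative only: the 100 kV accelerating voltage and the numerical aperture of 1.4 are assumed values, and the relativistic correction to the electron wavelength is ignored) compares the Abbe diffraction limit of a light microscope with the de Broglie wavelength of an accelerated electron.

import math

h, m_e, q_e = 6.626e-34, 9.109e-31, 1.602e-19   # Planck constant, electron mass, charge (SI)

def electron_wavelength_nm(voltage_v):
    # Non-relativistic de Broglie wavelength of an electron accelerated through voltage_v.
    return h / math.sqrt(2 * m_e * q_e * voltage_v) * 1e9

def abbe_limit_nm(wavelength_nm, numerical_aperture):
    # Smallest resolvable feature, roughly wavelength / (2 * NA).
    return wavelength_nm / (2 * numerical_aperture)

print(abbe_limit_nm(550, 1.4))          # green light, good objective: ~196 nm, i.e. the ~0.2 um limit
print(electron_wavelength_nm(100e3))    # 100 kV electrons: ~0.0039 nm, a few hundredths of an angstrom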
TEM is widely used to observe microstructures through imaging, to reveal phase and crystallographic orientation information through diffraction patterns, and to determine chemical composition by means of the energy spectrum. Like an optical microscope, a TEM can record images of microstructures, but at a much higher resolution. Typical TEM resolution reaches the nanometer region, making it an excellent tool for nanomaterials characterization. High-resolution TEM (HRTEM) can even image lattice points in the range of angstroms. In fact, the current resolution of TEMs is limited not by the electron wavelength but by astigmatism and aberration. A huge effort is being made by microscopists to resolve these issues and thus push TEM resolution further. The transmission electron aberration-corrected microscope (TEAM) project was initiated as a collaborative effort between different national labs to redesign the electron microscope on this front (Guo & Tan, 2009). Since the identification of nanomaterials is extremely important, students should know the principles of working with electron microscopy.

METHOD

The aim of this research was to design the educational objectives and content for the identification and characterization of nanomaterials by electron microscope for undergraduate students. The research is applied and descriptive. The research tool was a researcher-made objective-content table. Ten reference books and electronic sources were selected as the sample for designing the syllabus.

RESULT

In this study, we propose four topics for the teaching and learning of the electron microscope. The first topic is an introduction to the basic concepts of nanotechnology, such as the definition of nanomaterials, some unique features of nanomaterials, the surface effect at the nanoscale, and the different categories of nanomaterials. In part two, students learn optics, electromagnetic waves, the de Broglie law, the kinds of lenses, image formation, and chromatic and spherical aberration in lenses. In part three, students become familiar with the history and invention of the optical microscope and, finally, should be able to explain the different parts of optical microscopes, the concepts of resolution and magnification, and the limitations of optical microscopes in observing very fine particles. In the last part, students become familiar with the history of the electron microscope and how the electron microscope works. They must be able to interpret the different parts of the electron microscope and how they work, and finally answer the important question: what are the advantages and disadvantages of using the electron microscope?

Examples of the proposed behavioral objectives include the following.

An introduction to nanomaterials - the concept of nano:
- Explain the word nano.
- Explain the nanoscale.
- Which materials are called nanomaterials?
- Define the term nanotechnology.
- Name some of the applications of nanotechnology.
- Name the categories of nanomaterials and give examples for each category.

The importance of material properties at the nanoscale:
- Explain the surface effect with an example.

The electron microscope:
- Name and explain the different kinds of electron gun in the EM.
- Describe the different kinds of electromagnetic lenses.
- What is the main reason for using lenses in the optic column of the EM?
- What are the advantages and disadvantages of using the transmission electron microscope?

CONCLUSION

After studying and analyzing the valid and available sources, it was concluded that undergraduate students must first learn some topics in order to understand the characterization of nanomaterials with the electron microscope.
They must first understand what nanomaterials are and why knowledge of nanomaterials is essential. Secondly, they must be familiar with electromagnetic waves, optics, the different types of concave and convex lenses, and their application in the optical microscope. They should then be able to explain the limitations of the optical microscope for nanomaterial identification, recognize the necessity of using the electron microscope, and understand its different parts, including the two types of electron microscope used for nanomaterial identification. All of these concepts are gathered into 4 main chapters, 10 content units and 46 behavioral educational objectives.
Three years pilot of spinal muscular atrophy newborn screening turned into official program in Southern Belgium Three new therapies for spinal muscular atrophy (SMA) have been approved by the United States Food and Drug Administration and the European Medicines Agency since 2016. Although these new therapies improve the quality of life of patients who are symptomatic at first treatment, administration before the onset of symptoms is significantly more effective. As a consequence, newborn screening programs have been initiated in several countries. In 2018, we launched a 3-year pilot program to screen newborns for SMA in the Belgian region of Liège. This program was rapidly expanding to all of Southern Belgium, a region of approximately 55,000 births annually. During the pilot program, 136,339 neonates were tested for deletion of exon 7 of SMN1, the most common cause of SMA. Nine SMA cases with homozygous deletion were identified through this screen. Another patient was identified after presenting with symptoms and was shown to be heterozygous for the SMN1 exon 7 deletion and a point mutation on the opposite allele. These ten patients were treated. The pilot program has now successfully transitioned into the official neonatal screening program in Southern Belgium. The lessons learned during implementation of this pilot program are reported. Spinal muscular atrophy (SMA) is a neuromuscular disorder characterized by muscle atrophy resulting from the degeneration of motor neurons in the spinal cord. SMA is caused by biallelic pathogenic variants in the SMN1 gene, which encodes Survival of Motor Neuron (SMN), a protein essential for survival of motor neurons 1 . Approximately 95% of patients carry a homozygous deletion of exon 7 in the SMN1 gene, the remaining 5% of cases are due to the deletion of exon 7 on one allele and a deleterious variant on the opposite allele. SMN2 is a pseudogene that differs from SMN1 by only a few nucleotides, including a C to T transition in exon 7. This variant results in the skipping of exon 7 in about 90% of SMN2 transcripts, thereby encoding a truncated, unstable protein. The full-length, functional SMN protein results from approximately 10% of SMN2 transcripts. The number of SMN2 copies is inversely correlated with the severity of the phenotype. Patients with two copies usually present with the most severe and frequent form of spinal muscular atrophy, SMA1. In these patients, symptom onset usually occurs before the age of 6 months, and this type of SMA is associated with high mortality and morbidity 2 . Patients with a larger number of copies of SMN2 may present with symptoms long after acquisition of ambulation; a limited few even develop symptoms in adulthood. Currently, SMA is classified into four types, SMA1, SMA2, SMA3, and SMA4, based on maximal motor ability achieved. Over the last few years, several new treatments for SMA have dramatically improved the prognosis of affected patients 3 . Nusinersen 4 was the first drug to be approved by the United States Food and Drug Administration (FDA) and the European Medicines Agency (EMA) in December 2016 and June 2017, respectively. In Belgium, nusinersen has been reimbursed by the healthcare system since September 2018. More recently, onasemnogene abeparvovec-xioi gene therapy 5 also received FDA and EMA approval, in May 2019 and May 2020 respectively. 
The marketing authorization of a third drug, risdiplam 6, was granted by the FDA last year, and it also received a positive opinion from the EMA's Committee for Medicinal Products for Human Use (CHMP) in February 2021. Several other drugs are currently in development 7. Based on these recent advances in SMA management and on evidence showing that patients treated presymptomatically have better outcomes 8,9, newborn screening (NBS) for SMA has begun in several countries [10-18]. Moreover, in 2018 SMA was included in the Recommended Uniform Screening Panel (RUSP), the list of disorders that the US Department of Health and Human Services recommends be screened for as part of NBS programs 19. In early 2018, the authors of this paper and the Neuromuscular Reference Centers (NMRCs) of Southern Belgium launched a 3-year NBS pilot program for SMA under the project title "Sun May Arise on SMA". The pilot project was done in close collaboration with our industry partners AveXis, Biogen, and Roche, who funded a significant part of the program, as well as with the governmental agency in charge of NBS in Southern Belgium, the Office of Birth and Childhood (Office de la Naissance et de l'Enfance, ONE) 20,21. It should be noted that NBS is not a federal competency in Belgium, and therefore such initiatives are conducted by a separate government agency in Northern Belgium. The initial pilot phase of the "Sun May Arise on SMA" project transitioned into an official program in Southern Belgium on 1 March 2021. Northern Belgium has correspondingly made a political commitment to include SMA in its official program in 2022. This manuscript reports the key insights gained during the pilot effort.

Results

Inclusion of SMA in the NBS program. The process that led to implementation of the NBS program for SMA in Southern Belgium has been previously reported 20. A key principle was the involvement of all stakeholders from the beginning. Political, ethical, and clinical partners, including genetic and screening labs, were involved in the project's governance.

Incidence. Over the 3-year pilot study from March 2018 to February 2021, 136,339 neonates were tested for the SMN1 exon 7 deletion using a previously described qRT-PCR test with fluorescence read-out 20. The dispersion plot of the ratio of SMN1 to the housekeeping gene RPP30 allowed clear discrimination between positive results (i.e., SMA patients with a homozygous deletion of exon 7) and negative results (Fig. 1). Nine SMA cases were identified. To our knowledge, no newborn carrying a homozygous deletion was missed over this period. All patients with symptoms of neuromuscular disease in Belgium are referred to an NMRC, so it is quite unlikely that such a case could occur without one of the centers being informed. Nevertheless, we cannot rule out the possibility that a patient with SMA3 or SMA4 born during the period of the pilot study may be diagnosed in the future. One SMA1 patient was not diagnosed through NBS. This neonate was heterozygous for the SMN1 exon 7 deletion and had the c.815A>G (p.Tyr272Cys) point mutation on the opposite allele. The patient was referred to an NMRC at the age of 4 months, after the onset of symptoms compatible with SMA.

Neonate referral. Positive screening results were immediately communicated by the laboratory to both the neonate's pediatrician and the referent neurologists in the NMRCs.
The parents were contacted on the same day by a referent neuro-pediatrician or by a pediatrician of the maternity ward, and a consultation was planned as soon as possible. Thanks to the second-tier MLPA testing performed on DBS-extracted DNA, the number of SMN2 copies was available to the clinician at the patient's first visit, and the clinician could therefore immediately explain the relevant therapeutic options to the parents. The neonate's blood was then drawn to perform the confirmatory MLPA analysis. There were no false positives from the initial DBS testing. The screening and diagnostic timelines for the ten SMA patients are detailed in Table 1. All nine patients identified through NBS began treatment before the age of 2 months. In order to ensure the most efficient management of patients, it is important to save time. Over the course of the project, the turnaround time (TAT) was considerably improved. For the first 9 months, the population coverage was limited to the Liège NBS center, where about 300-350 samples were analyzed each week. The median TAT, calculated as the interval between DBS collection and the communication of results, decreased over the course of the project.

Patient treatment and outcomes. Parents were informed about the different therapeutic options during the first visit. Nusinersen was available in Belgium from the start of the study. Risdiplam and the gene therapy onasemnogene abeparvovec-xioi were not commercially available in the country during the pilot study but were accessible through several concurrent clinical trials in the NMRCs (Spr1nt: NCT03505099, STRIVE-EU: NCT03461289, Rainbowfish: NCT03779334). For the six patients who received nusinersen, treatment began an average of 10 days after the first consultation (range: 7-20 days). The parents of Patient 9 initially refused the treatment, which explains the delay in initiation. The delay between the first consultation and the initiation of treatment was longest for the three patients who participated in the therapeutic trials (18, 22, and 27 days), as participation in a trial required testing prior to inclusion. The patients who showed early clinical manifestations of the disease, even if mild (i.e., only areflexia), were those with two copies of SMN2. These patients had developmental delays despite treatment. Patients with three or four copies of SMN2 showed no symptoms at the time of treatment initiation and reached motor developmental milestones at the usual ages. The SMN2 copy number and modifier variants, treatment regimen, and evolution of symptoms in the identified patients are summarized in Table 2.

Lessons learned from individual cases. The case of treatment refusal. The parents of one patient initially refused treatment. The child had three copies of SMN2 and was asymptomatic at the time of diagnosis. The parents were not French speakers, and at the initial consultation they were accompanied by a French-speaking cousin serving as a translator. This was not an optimal situation, as the translator was emotionally invested and only partially translated the physician's explanation to the parents. Following their refusal, they were offered a second consultation with two different child neurologists and a psychologist, with a professional translator in attendance, and a further consultation was also proposed with a German-speaking neurologist. The parents stated several times that they would prefer to wait for their daughter to present with symptoms before discussing treatment.
This prompted internal discussions among the clinical team to balance the right of parents to make decisions regarding the care of their child with the rights of the child, given that clinical evidence clearly indicates that treatment before symptom onset is necessary to ensure the possibility of normal development 8,9. After requesting several external medical opinions, we explained to the parents that the clinical team could not carry the responsibility of withholding care, and that the family court would have to be consulted. After receiving initial opinions from the prosecutor supportive of intervention, the parents accepted the necessity of treatment. Interestingly, the relationship between the clinical care team and the family remained positive, and 1 year after birth the mother stated that they had been in such an emotional state that they were "unable to make the right decision" and now recognized that treatment was the best solution. No other parents refused treatment. Some parents indicated their preference for a particular treatment. The choice to proceed with a treatment was always made in light of treatment availability, the child's clinical condition, and the scientific data available at the time, and with the mutual agreement of the treating physicians and the parents.

Patients and siblings with four copies of SMN2. As mentioned earlier, the treatment of such children is specifically discussed with the parents. In the two cases with four copies of SMN2 identified during the pilot study, the parents promptly agreed to the proposal to initiate early treatment. One of the patients identified with four copies of SMN2 had two older siblings, aged 4 years and 6 years and 6 months, respectively. Interestingly, the mother presented with two copies of SMN1 and the father with one copy. We then discovered that the maternal grandmother had three copies of SMN1, two on the same chromosome, and that the paternal grandmother had only one copy. The mother was 2/0, which means that she would not have been identified as at-risk during carrier testing. The initial clinical examination of the patient's siblings indicated normal development, but the parents wished to have them tested. This was done, and we found that, like the infant, both children had the homozygous deletion of exon 7 of SMN1 and four copies of SMN2. Their parents opted to delay treatment. Further evaluations of the siblings were performed after 3 months. The physician had concerns regarding potential muscle weakness in the older sibling, but the parents again opted to delay treatment. When this child was aged 7 years and 4 months, a video sent by the parents clearly confirmed proximal weakness and fatigability. On examination, there was an absence of the patellar reflex, and the child needed to support himself with a hand on his leg when rising from the floor. The motor function measure and six-minute walk test were stable. The parents refused treatment at this stage. At 7 years and 11 months, electromyography (EMG) showed a 30% loss of motor amplitude. At 8 years, the same difficulties were noticed at the clinical examination, with a complete absence of reflexes and an unchanged compound muscle action potential. The second sibling, who was 4 years old at the time of diagnosis, showed no deficit in the clinical examination, physiological tests or EMG. Follow-up is continuing with clinical and physiotherapy examinations every 6 months.
To date, at the age of 5 years and 6 months, the second child is still wholly asymptomatic.

Transition to health authorities: a strong partnership among stakeholders. Retrospectively, the key element in the successful transition from the trial project to a government-sanctioned public health program was the involvement and unanimous support of all stakeholders from the beginning of the project and throughout its duration. Transitioning to an official program was an initial objective of the pilot program. The involvement of patient advocacy groups, neuromuscular reference centers, and newborn screening centers, as well as public engagement through broadcast and social media (such as on the study's Facebook page, www.facebook.com/sunmayariseonsma), also significantly facilitated the rapid and smooth transition to an official program. A clear governance structure helped to build a strong partnership between the pilot study leaders, the regional agency in charge of NBS, and the NBS centers. Public involvement gave rise to support from across the political spectrum in Belgium. The ordinance incorporating SMA into the NBS list for Southern Belgium was passed by the Parliament of Wallonia on 4 February 2021 for implementation on 1 March 2021, with immediate handover from the study team to the public health service after the completion of the 3-year pilot project. The UCLouvain and ULBruxelles NBS centers are incorporating the SMA screening test into their own infrastructure.

Discussion

The incidence of SMA of 1 in 15,149 determined during the NBS pilot study in Southern Belgium is broadly consistent with previous studies. The incidence reported in Taiwan was 1 in 17,181 neonates 12. In Germany, 30 SMA cases were identified during screening of 213,279 DBS cards, for an incidence of 1 in 7109 infants 17,22. Australian NBS identified nine SMA patients among 103,903 newborns screened, for an incidence of 1 per 11,544 18. New York State recently screened more than 225,000 neonates and reported a much lower incidence of 1 per 28,137 23. The authors of that study argued that the low SMA incidence reported in their area is likely due to biased estimates, coupled with increased awareness of and access to carrier screening, genetic counselling, cascade testing, prenatal diagnosis, and advanced reproductive technologies. A better understanding of this low incidence is of primary importance, since it could have consequences for reimbursement of disease-modifying therapies and for NBS funding decisions 24. Surprisingly, we did not identify any SMA neonates during the third year of our pilot study. Based on the Poisson distribution of rare events, the probability of diagnosing no cases of SMA over 1 year is 2.5% (Table 3). Given the low probability that there should be no cases in a year, we hypothesized that carrier screening and prenatal testing had contributed to this outcome. We therefore contacted various molecular genetics centers in Southern Belgium to request the number of positive results for SMA based on pre-conceptional and prenatal diagnosis during the corresponding period. However, they reported no positive results that could explain this absence of cases over the previous year. Subsequently, three new cases were identified in the first 4 months following the end of the pilot, which further reinforces the hypothesis of a purely random distribution.
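The Poisson argument can be reproduced in a few lines; the birth count below is the approximate annual figure for Southern Belgium given in this paper, the incidence is taken directly from the screening result, and the resulting probability is in the same range as the 2.5% quoted above.

import math

births_per_year = 55_000            # approximate annual births in Southern Belgium
incidence = 9 / 136_339             # nine homozygous-deletion cases among 136,339 screened newborns

expected_cases = births_per_year * incidence       # Poisson rate, roughly 3.6 cases per year
p_case_free_year = math.exp(-expected_cases)       # P(X = 0) under Poisson(expected_cases)

print(round(expected_cases, 2), round(p_case_free_year, 3))   # ~3.63 expected cases, ~0.027 probability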
Our study is, to our knowledge, the first to report, in the context of NBS, an SMA patient compound heterozygous for the SMN1 exon 7 deletion and a point mutation on the opposite allele. Because the first-tier assay specifically targets the homozygous SMN1 deletion, this patient could not be identified during the screening process. Rather, the patient was identified at the age of 4 months, after referral for mild hypotonia. The clinical sensitivity of SMA NBS is estimated at between 95 and 98%, as affected individuals who are compound heterozygotes (i.e., those with one SMN1 allele lacking exon 7 and a point mutation on the second allele) are missed 11,25. To date, no false negatives or false positives have been identified in our screening program. The five neonates with either three or four copies of SMN2 were all asymptomatic at treatment start (Table 2). Most presented the highest Children's Hospital of Philadelphia Infant Test of Neuromuscular Disorders (CHOP-INTEND) and Hammersmith Infant Neurological Examination Section 2 (HINE-2) scores at their last motor assessment (age range: 12-33 months). The four newborns with two copies of SMN2 showed slight hypotonia and/or discrete areflexia when treatment was initiated. These patients did not reach the highest scores on the CHOP-INTEND and HINE-2 scales at their last motor assessment (age range: 14-32 months). Of these four patients, three were treated with the approved nusinersen therapy. Treatment initiation may thus be considered relatively delayed (range: 29-54 days) compared with the first visit (range: 20-32 days). This lag may be a factor that impaired the most favorable outcome for these patients. In the future, we hope that the recent transition of our pilot study into the official neonatal screening program will facilitate more prompt care. The overall evidence for the efficacy of early treatment of patients with SMA has recently been reviewed 26. It is likely that the cost of the new SMA treatments initially hampered the implementation of NBS programs by the political authorities. Presently, the substantial cost burden of standard care for patients with SMA is estimated to be between US$ 75,047 and US$ 196,429 per year for SMA1 patients, and between US$ 27,157 and US$ 82,474 for other types of SMA 27. Therefore, given the high cost-to-benefit ratio of drugs approved at current prices when administered to post-symptomatic patients 27, it is critical to identify patients prior to symptom onset. A medico-economic evaluation with assessment of patient quality of life is also currently ongoing to assess the cost-effectiveness of our NBS program 20. Pre-treatment levels of phosphorylated neurofilaments are a validated marker of nerve cell damage in pre-symptomatic and in young SMA1 patients 28. These levels decrease exponentially in pre-symptomatic SMA patients with two SMN2 copies, indicating acute and severe neuronal loss 9. These data indicate that it is critical to begin treatment of SMA1 patients with as little delay as possible. An NBS program is accordingly an ideal method for the early identification of these infants. Several incidents were encountered during this pilot program, and their description may help other NBS programs communicate more effectively with the parents of recently diagnosed infants. In one case, the parents initially refused treatment.
In hindsight, this might have been avoided if a professional translator had been present during the first consultation. In another case, three SMA-affected children of a mother with two copies of SMN1 on the same allele were diagnosed as a result of NBS: the youngest through the NBS pilot program itself and his siblings following this initial positive identification. As the mother would not have been identified as at-risk during carrier testing, this clearly indicates that carrier screening should not be relied upon as the sole strategy against SMA. Finally, we were faced with a case of a patient with symptoms that the parents refused to recognize. Political authorities must therefore put plans in place to deal with cases of refusal of treatment. Presently, some countries leave the decision of treatment to a multidisciplinary consultation meeting, whereas others leave all choice to the parents. The present authors believe that the interest of the child must take priority over parents' rights. A collegial discussion of these potential issues prior to implementation of an NBS program is necessary. Our study suffers from the small size of the studied population. Southern Belgium has a total population of approximately 4.5 million people; therefore the number of cases identified in the neonate population remains low. Today, nine countries around the world have started SMA NBS, with the number of newborns screened set to increase in the coming years as further countries embark on similar programs 29 . Our project confirms that a pilot program can be rapidly transitioned into the official NBS program. Given the effective treatments now available for SMA and the importance of treatment prior to the onset of symptoms, testing for SMA should be incorporated into screening of all newborns. Materials and methods Newborn samples. NBS samples were collected on Whatman ® 903 cards between 48 and 120 h of life either in maternity wards or at home, in accordance with legal requirements of the federal authority (Wallonie-Bruxelles Federation) in charge of NBS in Southern Belgium. The dried blood spot (DBS) cards were sent to selected neonatal screening laboratories. No additional sampling was required to incorporate SMA testing in the standard NBS panel as the residual blood spots collected for conventional NBS were sufficient to test for SMA. After analysis, filter papers are stored at room temperature for 5 years. As detailed in our previous manuscript 20 , parental consent was not required for participation in this study. While strongly recommended, NBS is not mandatory in Southern Belgium and parents are informed that they have the right to refuse screening for their child. This opt-out option is not disease-specific; it applies to the neonatal screening panel as a whole. The project was approved by our ethical review board (reference number B412201734396), in accordance with the Declaration of Helsinki. NBS assay and confirmatory method. The flow chart for screening for SMA is shown in Fig. 3. We designed a quantitative polymerase chain reaction (qPCR) assay to specifically detect homozygous deletions of SMN1 exon 7 on DNA extracted from DBS 20 . DNA extraction was performed by alkaline denaturation at 98 °C. qPCR amplification was performed in 96-well plates, preloaded with primers, dye-labeled probes, and master mix provided by Eurogentec. 
This assay cannot identify heterozygous carriers of the exon 7 deletion or SMN1 point mutations, and the number of copies of SMN2 was not determined in this first-tier assay. Given the importance of the SMN2 copy number in SMA management, qPCR-positive results were confirmed by the multiplex ligation-dependent probe amplification (MLPA) technique, which also provided information on SMN2 status. For this purpose, we used the Salsa MLPA Probemix P021 SMA diagnostic kit (MRC Holland). First-tier positive samples were re-analyzed twice from the same DBS. Simultaneously, a second-tier MLPA assay was performed on the same DNA extracted for the first-tier qPCR. Upon positive results from confirmatory testing, neonates were immediately referred to a neuro-pediatrician in one of the NMRCs involved in the trial. At the first visit, fresh blood was collected to confirm the positive screening result by MLPA on an independent sample. Additionally, we also sequenced the SMN2 gene to look for the presence of the c.859G>C and c.835-44A>G intragenic modifier variants. An SMN2-specific PCR was used to amplify exons 7 and 8 and to study the presence or absence of the positive modifier variants. The primers (available on request) were designed based on the paralogous sequence variants described by Blasco-Pérez et al. 30, in order to achieve specificity towards SMN2 (Blasco-Pérez et al., in preparation).

Population coverage. There are approximately 55,000 annual births in Southern Belgium, and NBS for these infants is carried out by three independent academic centers. The current project was launched in March 2018 in Liège's NBS laboratory, which screens about 16,000 newborns per year. Due to strong support from the supervisory authorities and the efforts of the project management team to promote the project, the pilot study rapidly expanded to include the two other screening centers of Southern Belgium, UCLouvain and ULBruxelles. In order to rapidly implement the program in these two centers, DNA was extracted in the laboratory to which the DBS card was sent. Sealed microtiter plates containing samples for SMA screening were then transferred to the laboratory in Liège, which ran the qPCR assays on all samples. SMA screening was offered to the entire neonate population of Southern Belgium beginning in early 2019.

Clinical and therapeutic protocol.

Data availability

The data that support the findings of this study are available from the corresponding author, FB, upon reasonable request.
Insights in Hypothesis Testing and Making Decisions in Biomedical Research.

It is a fact that p values are commonly used for inference in biomedical and other social fields of research. Unfortunately, the role of the p value is very often misused and misinterpreted; that is why the use of resampling methods, such as the bootstrap, has been recommended to calculate confidence intervals, which provide more robust results for inference than the p value does. In this review, a discussion is made of the use of p values through hypothesis testing and of its alternatives using resampling methods to develop confidence intervals of the tested statistic or effect measure.

A p value has no particular meaning in itself, and it is not calibrated to some kind of relevance. It is just a value between 0 and 1, referring to how likely it is to observe "more extreme results" given the null hypothesis. According to the approach suggested by Neyman & Pearson, a "hypothesis test" is actually a test about an alternative hypothesis, which refers to a "minimally relevant effect" (and not about "some non-zero effect", as the null hypothesis does). These tests are designed to control error rates and to allow a balance of the expected cost/benefit ratios associated with the actions taken on the basis of the test results. To perform such tests, a minimally relevant effect and acceptable error rates must be specified. After the experiment or study is conducted, the decision is actually about rejecting (or not) a hypothesis. So either the "null hypothesis" is not rejected, which means that the assumed effect was not relevant, or the alternative hypothesis is accepted, which means that the effect was relevant. Note that there is no point at which the "truthfulness" of an effect is discussed. This does not matter in statistical hypothesis testing. The only thing that matters is what actions are taken based on an effect that is considered relevant [2-15].

Major Problems Using p Values as the Result of a Hypothesis Test

Many investigators in various research fields refer to Neyman & Pearson hypothesis tests and their associated p values. Indeed, the p value is a widely used tool for inference in studies. However, despite the numerous books, papers and other scientific literature published on this topic, there still seem to be serious misuses and misinterpretations of the p value. According to Daniel Goodman, "a p value is the right answer to the wrong question" [1]. A summary is given by Joseph Lawrence, who presented at least four major problems associated with the use of p values [16]:

1. "P values are often misinterpreted as the probability of the null hypothesis, given the data, when in fact they are calculated assuming the null hypothesis to be true."

2. "Researchers often use p values to 'dichotomize' results into 'important' or 'unimportant' depending on whether p is less or greater than a significance level, e.g., 5%, respectively. However, there is not much difference between p-values of 0.049 and 0.051, so that the cut-off of 0.05 is considered arbitrary."

3. "P values concentrate attention away from the magnitude of the actual effect sizes. For example, one could have a p value that is very small, but is associated with a clinically unimportant difference. This is especially prone to occur in cases where the sample size is large. Conversely, results of potentially great clinical interest are not necessarily ruled out if p > 0.05, especially in studies with small sample sizes.
Therefore, one should not confuse statistical significance with practical or clinical importance."

4. "The null hypothesis is almost never exactly true. In fact, it is hard to believe that the null hypothesis, H0: µ1 = µ2, is correct! Since the null hypothesis is almost surely false to begin with, it makes little sense to test it. Instead, it would be more rational to start with the question 'by how much are the two treatments different?'"

There are so many major problems related to p values that most statisticians now recommend against their use, in favour of, for example, confidence intervals. In a previous publication, entitled "The value of p-value in biomedical research", alternatives for evaluating the observed evidence were briefly discussed [17]. Here, a thorough review of hypothesis testing is presented.

Hypothesis Testing Versus Confidence Intervals

Researchers from many fields are very familiar with calculating and interpreting the outcome of empirical research based solely on the p value [18]. The commonly suggested alternative to the use of hypothesis tests is the use of confidence intervals [19-26]. As suggested by Wood (2014), "the idea of confidence intervals is to use the data to derive an interval within a specified level of confidence that the population parameter will lie with confidence" [19]. Two-sided hypothesis tests are dual to two-sided confidence intervals: a parameter value is in the (1−α)×100% confidence interval if and only if the hypothesis test whose assumed value under the null hypothesis is that parameter value accepts the null at level α. This principle is called the duality of hypothesis testing and confidence intervals [20]. Likewise, there is a one-to-one relationship between one-sided tests and one-sided confidence intervals. The relationship is exact only if the standard error used in the confidence interval and in the statistical test is identical. However, many statisticians nowadays avoid using any hypothesis tests, since their interpretations may vary and the derived p values cannot, generally, be interpreted in meaningful ways. Moreover, it is accepted that by calculating the confidence interval researchers may gain insights into the nature of their data and the evaluated associations, whereas p values tell them very little. Criticism of hypothesis testing, most of it dating from more than 50 years ago, suggests that "they (hypothesis tests) are not a contribution to science" (Savage, 1957, in Gerrodette, 2011, p. 404), "a serious impediment to the interpretation of data" (Skipper et al., 1967, in Gerrodette, 2011, p. 404), "worse than irrelevant" (Nelder, 1985, in Gerrodette, 2011, p. 404) or "completely devoid of practical utility" (Finney, 1989, in Gerrodette, 2011, p. 404) [1]. Nevertheless, and despite all the criticism, hypothesis tests and their associated p values are still widely prevalent. According to Lesaffre (2008) [21], it is important to note that a 95% confidence interval bears more information than a p value, since the confidence interval has a much easier interpretation and allows better comparability of results across different trials. Moreover, in meta-analyses, the confidence interval is the preferred tool for making statistical inference. According to Wood (2014) [19], a (1−α)×100% confidence interval directly provides the strength of the effect, as well as the uncertainty due to sampling error, in an obvious way, by providing the width of the interval.
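To illustrate the duality described above, the following small example (simulated data, a one-sample t-test, and the scipy library; all values are illustrative) checks that a hypothesized mean lies inside the 95% confidence interval exactly when the corresponding test yields p ≥ 0.05.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=10.0, scale=2.0, size=30)    # illustrative data

# 95% t-based confidence interval for the population mean
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1,
                                   loc=sample.mean(), scale=stats.sem(sample))

# Duality: testing H0: mu = mu0 gives p >= 0.05 precisely for mu0 inside the interval
for mu0 in (ci_low - 0.2, ci_low + 0.2, ci_high - 0.2, ci_high + 0.2):
    p = stats.ttest_1samp(sample, popmean=mu0).pvalue
    print(round(mu0, 2), round(p, 3), ci_low <= mu0 <= ci_high)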
The information displayed is not trivial or obvious in the way NHST conclusions may be, and misinterpretations seem far less likely than for NHSTs. Thus, the use of confidence intervals has the potential to avoid many of the widely acknowledged problems of NHSTs and p values [19]. Moreover, several high-impact journals, especially in the health sciences, as well as societies (e.g., the American Psychological Association's (APA) Task Force on Statistical Inference (TFSI)), have strongly discouraged the use of p values, preferring instead point and interval estimates of the effect size (e.g., odds ratios, relative risks) as an expression of the uncertainty resulting from limited sample size, and have also encouraged the use of Bayesian methodology [21-22]. It is not surprising that, a century after its introduction, many researchers still poorly understand the exact meaning of the p value, resulting in many misinterpretations [17].

Advantages of the Confidence Interval Versus the p Value

It is now a common belief that researchers should be interested in defining the size of the effect of a measured outcome, rather than in a simple indication of whether or not it is statistically significant [23]. On the basis of the sample data, confidence intervals present a range of alternative values within which the unknown population value for such an effect is likely to lie. Indeed, confidence intervals give different information and have a different interpretation from p values, since they specify a range of plausible values for the actual effect size (presenting the results directly on the scale of measurement), while p values do not. Moreover, confidence intervals make the extent of uncertainty salient, which a p value cannot do. Since the mid 1980s, Gardner & Altman have suggested that "a confidence interval produces a move from a single value estimate - such as the sample mean, difference between sample means, etc - to a range of values that are considered to be plausible for the population" [24].

Resampling Techniques

It is known from basic statistics that many statistical criteria (e.g., the t-test) are asymptotically normally distributed, but the normal distribution may not always be a good approximation to their actual sampling distribution in the empirical samples derived from experiments, clinical trials or observational surveys. Indeed, the validity of traditional statistical inference rests mostly on the Central Limit Theorem, which stipulates that, under fairly general conditions, the sampling distribution of the test statistic can be approximated by a normal distribution or, under more limited assumptions, by the t- or chi-square distributions. Based on these assumptions, confidence intervals and p values are then calculated, although with a considerable level of doubt and concern. The point of resampling methods is not to rely on these Gaussian assumptions. Resampling is a methodology suggested in the early 1940s to estimate the precision of statistics, such as means, medians, proportions, odds ratios and relative risks, by using k subsets of size m (< n) of the originally collected data (the jackknife method) or by drawing random sets of data with replacement from the original set (the bootstrap method). Indeed, when the Gaussian assumptions do not hold, the validity of classical inferential statistics tends to be undermined. It is in these situations that the resampling methods really come to the rescue.
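As a minimal illustration of the bootstrap idea just described (the data below are simulated and purely illustrative, not drawn from any real study), the following Python sketch computes a percentile bootstrap confidence interval for a sample median:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated, skewed outcome (e.g., something like length of stay in days), illustration only.
data = rng.lognormal(mean=1.0, sigma=0.8, size=50)

def bootstrap_ci(sample, stat=np.median, n_boot=10_000, alpha=0.05, rng=rng):
    """Percentile bootstrap: resample with replacement, recompute the statistic,
    and take the empirical alpha/2 and 1 - alpha/2 quantiles as the interval."""
    n = len(sample)
    boot_stats = np.array([
        stat(rng.choice(sample, size=n, replace=True)) for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])
    return stat(sample), (lo, hi)

est, (lo, hi) = bootstrap_ci(data)
print(f"median = {est:.2f}, 95% percentile bootstrap CI = ({lo:.2f}, {hi:.2f})")
```

The percentile interval makes no symmetry or normality assumption, which is why the same recipe also suits skewed effect measures such as odds ratios, as discussed below.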
The main idea of resampling is to obtain an empirical distribution of the test statistic based on what is observed and to use it to approximate the true, but unknown, distribution of that statistic. An important advantage of this approach is that it can be applied to many statistics (e.g., means, medians) and effect size measures (e.g., correlation coefficients, odds ratios, relative risks) with the use of computer software. There are different types of resampling methods, namely the bootstrap, the jackknife, cross-validation (also called rotation estimation) and the permutation test (or randomization exact test). In classical parametric tests the observed statistics are compared to theoretical sampling distributions, whereas in resampling methods the reference distribution is built empirically from the observed data rather than assumed from theory, which makes them innovative approaches [25]. Among all resampling methods, the bootstrap is certainly the most frequently used procedure [26]. Thus, resampling methods can be a substantial improvement over traditional inference, since a confidence interval for the true value of an unknown statistic or effect size measure has a much more concrete interpretation than the p value from a statistical test, although there is still no guarantee. However, it should be mentioned that the sampling distributions of various effect size measures are often highly skewed, so traditional symmetric confidence intervals will not work well. Symmetric confidence intervals are appropriate for a few quantities such as means and linear regression coefficients, but they are inappropriate for many other measures [27]. It is therefore better not to assume a symmetric confidence interval for a measure of association, and to start from the assumption that such measures are not normally distributed. The empirical distribution derived, for example, from the bootstrap method does not assume that the distribution is symmetrical.

CONCLUSION

In conclusion, for inferential purposes it can be recommended to present the results of studies using confidence intervals of the statistics and effect size measures of interest, rather than a hypothesis test and its associated p value. Moreover, depending on the statistic of interest, bootstrap techniques or other resampling methods are also recommended, because these techniques are independent of the shape of the underlying distribution and can easily be performed using software.
2016-10-26T03:31:20.546Z
2016-09-30T00:00:00.000
{ "year": 2016, "sha1": "05b806b064b15800fe6008606218b5e89e248c62", "oa_license": "CCBYNC", "oa_url": "https://opencardiovascularmedicinejournal.com/VOLUME/10/PAGE/196/PDF/", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "05b806b064b15800fe6008606218b5e89e248c62", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
264808407
pes2o/s2orc
v3-fos-license
On the number of neighborly simplices in R^d

Two $d$-dimensional simplices in $R^d$ are neighborly if their intersection is a $(d-1)$-dimensional set. A family of $d$-dimensional simplices in $R^d$ is called neighborly if every two simplices of the family are neighborly. Let $S_d$ be the maximal cardinality of a neighborly family of $d$-dimensional simplices in $R^d$. Based on the structure of some codes $V\subset \{0,1,*\}^n$ it is shown that $\lim_{d\rightarrow \infty}(2^{d+1}-S_d)=\infty$. Moreover, a result on the structure of codes $V\subset \{0,1,*\}^n$ is given.

Introduction

Let $A = \{0, 1, *\}$, and let $A^n$ be the set of all words $v = v_1 \ldots v_n$ over the alphabet $A$, that is, $v_i \in A$ for $i \in [n] = \{1, \ldots, n\}$. Two words $v, u \in A^n$ are called dichotomous (at the $i$-th position) if $v_i + u_i = 1$ ($u_i, v_i \in \{0,1\}$) for some $i \in [n]$ (compare [11, Section 10]), and they are called neighborly if there is precisely one such $i$. Two neighborly words $v, w \in A^n$ are a twin pair if $w_j = v_j$ for all $j \in [n] \setminus \{i\}$. A family of words $V \subset A^n$ is called a (dichotomous) code if every two words in $V$ are dichotomous. A code $V \subset A^n$ is a $d$-code if $|\mathrm{prop}(v)| = |\{i \in [n] : v_i \neq *\}| = d \geq 1$ for every $v \in V$. A family of words $V \subset A^n$ is called a neighborly code if every two words in $V$ are neighborly. By $M_d$ we denote the maximal cardinality of a neighborly $d$-code without twin pairs.

Two $d$-dimensional simplices in $R^d$ are neighborly if their intersection is a $(d-1)$-dimensional set. A family of $d$-dimensional simplices in $R^d$ is called neighborly if every two simplices of the family are neighborly. Let $S_d$ be the maximal cardinality of a neighborly family of $d$-dimensional simplices in $R^d$. A long-standing conjecture says that $S_d = 2^d$ ([5]). It is verified for dimensions $d \leq 3$. In [14] J. Zaks showed that $S_3 = 8$ (earlier V. Baston proved in [6] that $S_3 \leq 9$), and in [13] that $S_d \geq 2^d$. M. Perles proved the estimate $S_d \leq 2^{d+1}$ ([12]), and M. Aigner and G. Ziegler showed that $S_d \leq 2^{d+1} - 1$ ([1, Chapter 14]). Recently in [9,10] it was shown that $S_d \leq 2^{d+1} - 2$. (… where $m_n$ is the $n$-dimensional Lebesgue measure.) To simplify notation, throughout the paper we shall work mainly with words $v \in A^n$ rather than boxes $\hat v \subset [0, 2]^n$. However, it is very useful to keep in mind the above geometric interpretation of codes $V \subset A^n$ as it makes reasoning easier. For example, by this interpretation it is […]

Our proof of Theorem 1 is based on properties of neighborly $d$-codes without twin pairs. This technique was introduced by V. Baston in [6], and it was subsequently used by J. Zaks and M. Perles in [12,14]. Originally Baston considered families of strings from the set $\{-1, 0, 1\}^n$ arranged as rows of a matrix representation of a neighborly family of simplices (the translation into our notation is as follows: $-1 \to 1$, $1 \to 0$ and $0 \to *$, and the rows of a matrix representation form a neighborly $d$-code without twin pairs). He used combinatorial properties of such matrices and their relationships with neighborly simplices. In [14], Zaks used the machinery introduced by Baston together with tools from graph theory (the Graham-Pollak theorem) as well as computer support. Our approach is heavily related to a geometrical interpretation of a neighborly $d$-code as a set of boxes, and, unlike Baston and Zaks, we do not use any relationships between neighborly codes and the neighborly simplices that generated such codes.
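As a small aside (not part of the original paper), the combinatorial definitions above are easy to check mechanically. The following Python sketch tests the dichotomous, neighborly and twin-pair relations for words over {0, 1, *}; the family {00*, *11} used below is the one that appears later in Examples 1.

```python
from itertools import combinations

def dichotomy_positions(v, u):
    """Positions i at which v and u are dichotomous, i.e. {v[i], u[i]} == {'0', '1'}."""
    return [i for i, (a, b) in enumerate(zip(v, u)) if {a, b} == {"0", "1"}]

def dichotomous(v, u):
    return len(dichotomy_positions(v, u)) >= 1

def neighborly(v, u):
    return len(dichotomy_positions(v, u)) == 1

def twin_pair(v, u):
    pos = dichotomy_positions(v, u)
    return len(pos) == 1 and all(a == b for j, (a, b) in enumerate(zip(v, u)) if j != pos[0])

def is_neighborly_code(words):
    return all(neighborly(v, u) for v, u in combinations(words, 2))

print(is_neighborly_code(["00*", "*11"]))   # True: the pair is dichotomous at exactly one position
print(neighborly("00*", "11*"))             # False: dichotomous at two positions
print(twin_pair("00*", "01*"))              # True: they differ only at the dichotomous position
```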
A neighborly d -code without twin pairs is a very special case of a more general set of words V ⊂ A n which is called a k -neighborly family in which every two words from V differ by 0 and 1 in at least one and at most k ∈ [n] positions ( [2,3,7]).Neighborly families are closely related to Graham-Pollak theorem, while k -neighborly families are related to coverings of complete graphs by bicliques ( [2]). The structure of neighborly codes In this section we give two results on the structure of neighborly d -codes. Let V ⊂ A n be a code, j ∈ [n], and let where a ∈ {0, 1, * }.If V is neighborly, then every two words v, u ∈ V are dichotomous at precisely one position i ∈ [n].This property enforces a certain structure of V which is described in the following lemma: LEMMA 1 Let V ⊂ A n be a neighborly code, j ∈ [n], and let In the same way we show that Let v ∈ V j ,1 and k ∈ C j 0 .Since v j = 1, the word v is dichotomous with every word in V j ,0 at the j -th position.Let u, w ∈ V j ,0 be such that u k + w k = 1.Then u k = 0, w k = 1 or u k = 1, w k = 0. Thus, if v k ∈ {0, 1}, then the words v, u or v, w are dichotomous at the j -th and the k -th position which is impossible.Hence v k = * .In the same way we consider the case v ∈ V j ,0 and k ∈ C j 1 .It follows from the above that then there are four words v, u, p , q such that v, u ∈ V j ,0 and p , q ∈ V j ,1 where v k , u k , p k , q k ∈ {0, 1} which is, as we showed above, impossible. Similarly as in ( [6]), words of a code V ⊂ A n can be represented as rows of a |V | × n matrix M (V ).Thus, two codes V ,U ⊂ A n are isomorphic if there are a permutation of columns and rows in M (V ) and flips f (a ), a ∈ A, of letters in some columns of M (V ) which transform the matrix M (V ) into M (U ). From the definition of h Therefore, in many reasoning we may change an initial code V into its isomorphic version h • σ(V ) whose form is more convenient for our purposes than the form of V .Below, based on Lemma 1, we describe such convenient form of V . Let V ⊂ A n be a neighborly code, and let j ∈ [n] be such that where δ = 0 or δ = 1.We are intend to work with codes V such that |, then we may flip all letters in all words v ∈ V at the j -th position passing in this way from V to its isomorphic form W such that |W j ,0 | ≥ |W i ,ǫ | for every i ∈ [n] and ǫ ∈ {0, 1}.Due to the possibility of such transition to an isomorphic code, we can immediately assume that the code V has the property By Lemma 1, there are disjoint and non-empty sets (γ) If D = , then for every k ∈ D and for every two words v, u ∈ V j ,0 ∪ V j ,1 we have For clarity of our notation, again by possibility of passing to an isomorphic from of V , we may assume that V is such that j = 1, that is, and If V is as above, then we say that it is in standard form (compare Table 1 and the second example in Examples 1).REMARK 1 Of course, we could work, by Lemma 1, with codes which are not in standard form but then for example, Table 1 would be far less readable than in the case of codes in standard forms. We defined standard form for codes with C 1 0 , C 1 1 = , which makes the notations easier, as in our proof of Theorem 2 this assumption will be satisfied. Table 1: The structure of a neighborly code V in standard form in the case D = , where rows of M (V ) are words in V . 
Note that, by the properties (α),(β ) and (γ ′ ), every column in the sub-matrix of M (V ) of the form contains at least one 0 and and at least one 1, and the sub-matrix In what follows a flip of a letter a ∈ A = {0, 1, * } will be denoted by f (a ) = a ′ , that is, 0 ′ = 1, 1 ′ = 0 and * ′ = * . At the end of this section we show that in a neighborly d -code V ⊂ A n at least one of the sets , and thus, (3) holds true. To prove (4) observe that for every i , j ∈ prop (v ), i = j , we have To show this assume on the contrary that u which means that the words u, v are not neighborly which is impossible as V is neighborly.Since the sets (V j ,v ′ j ) j ∈prop (v ) are pairwise disjoint, we have, by (3), and hence, as An inflation of a code In this section we define an inflation of a code which is the main tool in our proof of Theorem 2. Let V ⊂ A n be a code, and let Note that V i ,η, * is a code: For every two words v, u ∈ V i ,η, * the words ū = u 1 . . . . Thus, v and u are dichotomous at the j -th position.It follows that the sets V i ,0, * ∪ V i , * and V i ,1, * ∪ V i , * are codes.To show this, let v ∈ V i ,0, * and u ∈ V i , * .Then the word v = v 1 ...v i −1 0v i +1 ...v n belongs to V i ,0 .Since V i ,0 ∪ V i , * is a code, there is j ∈ [n] \ {i } such that the words v , u are dichotomous at the j -th position, that is, v j + u j = 1.Consequently, v, u are dichotomous at the j -th position. An inflation of V at the i -th position is the code and let the sequence J 1 = ( j 1 , ..., j m ) be a permutation of elements of the set J . The code V δ = (...(V δ j 1 ) δ j 2 ...) δ j m , where δ = (δ j k ) k ∈[m ] ∈ {0, 1} m , is called an inflation of V on the sequence J 1 .The sequence δ is called an inflation sequence..By the definition of inflation, we have vol (V ) ≤ vol (V δ ) for every sequence J 1 = ( j 1 , ..., j m ) and every inflation sequence δ = (δ . Usually we shall indicate only the set J without specifying a permutation of J .In such a case we just say that an inflation of V is on the set J .However, as we show in the second part of Examples 1 an inflation of a code depends on a permutation of elements of J .At each stage i ∈ J of an inflation process the code V δ ′ , where δ ′ = (δ ′ j k ) k ∈[r ] , I = ( j 1 , ..., j r ) and j 1 , ..., j r ∈ J \ {i }, r ≤ m − 1, can be in one of the three states: In the first two cases we say that V δ ′ is in 0-advantage (resp.1-advantage) at the i -th position.In the third case we say that the code V δ ′ is balanced at the i -th position. 
Let I ′ = ( j 1 , ..., j r , i ), and δ ′′ = (δ ′ j 1 , ..., δ ′ j k , δ i ).If V δ ′ is in 0-advantage at the i -th position, then δ i = 0, and consequently, all words from the set (V δ ′ ) i ,1 have to be removed, and all words from the set (V δ ′ ) i ,0 have to be modified.This means that in the code the set of words (V δ ′ ) i ,1 is removed, and every word in (V δ ′ ) i ,0 is modified by changing every 0 to * at the i -th position.In the result we obtain the inflation on the sequence I ′ which is of the form is balanced at the i -th position, then we have a choice: We may take δ i = 0 or δ i = 1.In this case we get Thus, any inflation V δ of a code V on a set J ⊂ [n] is a code that arises from V in such a way that some words of V are removed, some are modified and some words from the code V are unmodified.Therefore, for every u ∈ V δ there is v ∈ V such that u i = v i for every i ∈ prop (u) ⊂ prop (v ).If prop (u) prop (v ), then we say that u is a modification of the word v .If prop (u) = prop (v ) then we say that v is unmodified during an inflation process on J .In this case, by the definition of inflation, v i = * for every i ∈ J (compare the second example below).EXAMPLES 1 Let V = {00 * , * 11} (Figure 1).The code V is balanced at the position 2, it is in 1-advantage state at the position 3 and in 0-advantage state at the position 1.Let J = {2}.Then δ = (δ i ) i ∈J is an inflation sequence, where δ 2 = 0 and V δ = {0 * * }.Of course, if δ 2 = 1, then δ is also an inflation sequence and V δ = { * * 1}.In both cases we have vol (the picture on the left).We have V δ = {0 * * } for J = {2} and δ 2 = 0 (a realization of V δ is given in the middle) and V δ = {00 * , * 1 * } for J = {3} and δ 3 = 1 (a realization of V δ is given on the right). Our second example concerns the following neighborly code W ⊂ A 6 (note that, W is in standard form): An inflation of a code usually depends on a sequence on which it is made, that is, for a given sequence J 1 if J 2 is a permutation of J 1 , then it can happen that inflation on J 1 is not equal to the inflation on J 2 .For example, for the code W given in Table 2 we let J = {1, 2, 3}, J 1 = (1, 2, 3) and J 2 = (3, 2, 1).The inflation of W on the sequence J 1 is of the form W δ 1 = { * * * * 00, * * * * 1 * , * * * 001}, where δ 1 = (1, 0, 0), while the inflation of W on the sequence J 2 is of the form W δ 2 = { * * * 1 * * , * * * 0 * 0, * * * 001}, where δ 2 = (0, 0, 0).Thus, W δ 1 = W δ 2 .(Note that, we may take δ 3 = (0, 0, 1) on J 2 , as in the last step we have a balance at the 1-th position, and then During the inflation process of W on J 2 the word v = 010 * * 0 is (in the first step) modified to the word u = 01 * * * 0, but in the second step of the inflation process on J 2 , word u is removed (in this sense v is removed during an inflation process).On the other hand, the word w = * * * 001 is unmodified during the inflation process on J 1 and J 2 .The first two words in W δ 2 are modifications of the third and the fourth word in W , respectively, and the first two words in W as well as the fifth and sixth words in W are removed during the inflation process on J 2 for the inflation sequence δ 2 . A proof of Theorem 2 Our proof of Theorem 2 consists in controlling some inflation process of a neighborly d -code V without twin pairs in such a way that some portion of V remains unmodified (it will be V 1,1 ) during the inflation process, and on the other hand the form of some portion of the considered inflation of V is easy to predict. 
Proof of Theorem 2. Suppose that the theorem is not true.Then there are an integer M > 0, a sequence of positive integers We may assume that M is the smallest such number, that is, there is d 0 ≥ 1 such that for every d ≥ d 0 and every neighborly As we show below, |V 1,0 |, |V 1,1 | ≥ 2, and thus we may assume that V is in standard form (compare Section 2 and Table 1 and 2).(5) To show this, let 0 is a code, and since the set V 1,0 ∪V 1, * is a code, the set W 1,1 ∪V 1, * must be a code.Therefore, the set In what follows we shall consider an inflation V δ = U on the set J = C 1 0 = {2, ..., s }.Note that, by the property (α) in Section 2, for every v ∈ V 1,1 and every k ∈ J we have v k = * .Thus, each word in V 1,1 is unmodified and of course is not removed during an inflation process on the set J .Therefore, (compare Table 1 and Examples 1).Moreover, since V is in standard form, and the inflation of V is on the set J , it follows that |U 1,0 | ≤ 1.Indeed, suppose on the contrary that there are two words v, u ∈ U 1,0 .The words v, u arose during the inflation process from some two words p , q ∈ V 1,0 , but modifications of p , q to v, u were made only on the set J .This means, taking into account the property (β ) given in Section 2, that v i = u i = * for every i ∈ {2, ..., r } (as, by (β ), p k = q k = * for k ∈ {s + 1, ..., r }).The set U is a code, and therefore v, u are dichotomous, that is v i + u i = 1, where i ∈ {r + 1, ..., n}.Since v i = p i and q i = u i , we obtain p i + q i = 1 for i ∈ {r + 1, ..., n}.This contradicts the property (γ) given in Section 2. Therefore, U 1,0 = or U 1,0 contains precisely one word.Now we consider three cases depending on the form of U 1,0 . Case 1.Let us suppose that there is an inflation V δ = U on the set J such that Clearly, W ⊂ A n is a code (we show this in the similar manner as in the case of the code W given right after (6)) and hence vol (W ) ≤ 2 n .Moreover, vol (W 1,0 ) = vol (V 1,1 ), by the definition of W 1,0 .On the other hand, since, by ( 5) and ( 6) Case 2. We now assume that for each inflation V δ = U on the set J we have Then U 1, * = as U is a code (if v ∈ U 1, * , then the words v and 0 * ... * are not dichotomous because v 1 = * ).Thus, U = {0 * ... * } ∪ V 1,1 .We shall show that this form of U is not possible.Let us assume first that V 1, * = .Since U 1, * = , it follows that there is at least one inflation V δ = U such that vol (U ) > vol (V ).(In other words, during the inflation process defined by δ there are unbalanced states.)To show this, let us assume on the contrary that for each inflation V δ = U on J we have vol (U ) = vol (V ) (that is, any 0 − 1 sequence δ is an inflation sequence).Take any q ∈ V 1, * , and let B ⊂ J be such that q j ∈ {0, 1} for j ∈ B .Note that, B = as if B = , that is q = * ... * q s +1 ...q n , then q ∈ U 1, * (such word q cannot be removed during any inflation process on the set J = {2, ..., s } as q j = * for all j ∈ J ).Now we consider an inflation on the sequence J 1 = (2, ..., s ) with δ = (δ j ) j ∈J such that δ j = q j for j ∈ B .By our assumption, δ is an inflation sequence, and thus * ... * q s +1 ...q n ∈ U 1, * , a contradiction. 
We shall show first that We assume that V i , * ∩V 1,1 = , otherwise there is nothing to prove.Let W = {0v 2 ...v i −1 1v i +1 ...v n : v ∈ V i , * ∩V 1,1 }.It is easy to see that W is a code: If we take two words v, u belonging to the code V i , * ∩V 1,1 then, since v 1 = u 1 = 1 and v i = u i = * , we have v k + u k = 1 for some k ∈ [n] \ {1, i }.Moreover, vol (W ) = |V i , * ∩V 1,1 |2 n −(d +1) as |w | = |v |/2 for w ∈ W and v ∈ V i , * ∩V 1,1 (equivalently, W is a (d +1)code, by the definition of W ). Now we show that W ∪ U is a code.Let w, u be two words such that w ∈ W and u ∈ U .If u ∈ U 1,0 , then u i = 0, and since w i = 1, the words w, u are dichotomous at i .If u ∈ U 1,1 , then u 1 = 1.But w 1 = 0, and then w, u are dichotomous at the position 1.Finally, let u ∈ U 1, * .The set U is a code and U 1,1 = V 1,1 and therefore, if v ∈ V 1,1 is such that v i = * , then u k + v k = 1 for some k = 1, i .It follows that w and u are dichotomous at the position k = 1, i .Thus, W ∪ U is a code, and hence vol (W ∪ U ) = vol (W ) + vol (U ).Therefore, vol (W ) ≤ M 2 n −d as vol (U ) ≥ vol (V ) = 2 n − M 2 n −d and vol (W ∪ U ) ≤ 2 n .Hence, which gives (7). By the property (γ ′ ) we have |V 1,1 | = |V i , * ∩V 1,1 |+|V i ,0 ∩V 1,1 |, and thus, from ( 7) and ( 6), it follows that Let V = {v (σ d ): σ d ∈ S }.In the same way as in [6, Lemmas 1-4] (for the case d = 3) we show that V is a neighborly (d + 1)-code without twin pairs (compare also [1, Chapter 14]).Since |S | = |V |, we have S d ≤ M d +1 .Therefore, to prove Theorem 1 we shall prove THEOREM 2 If M d is the maximal cardinality of a neighborly d -code without twin pairs, then interiors of the boxes of the family V = { v : v ∈ V } are mutually disjoint.Because of this interpretation we shall use the following notation: |v | = 2 n −p , where p = | prop (v )| and vol and thus |W | ≤ 2 d (see Introduction).If on the contrary |V 1,1 | < |V 1,0 | − M , then since |W 1,1 | = |V 1,0 | and |V | = 2 d − M , we have it was shown that S d ≤ 2 d +1 − 2. In this note we prove THEOREM 1 If S d is the maximal cardinality of a neighborly family of d -dimensional simplices in d , then lim Below we describe a passing from neighborly simplices to neighborly d -codes.Let S be a neighborly family of d -dimensional simplices in d , and let H 1 , ..., H n denote all different hyperplanes spanned by facets of simplices in S .Let H 0 H i is spanned by a facet of σ d and σ d ⊂ H 0 i , 1, if H i is spanned by a facet of σ d and σ d ⊂ H 1 i , * , otherwise. Table 2 : A neighborly code in standard form, where C 1 0
2023-11-01T06:43:06.157Z
2023-10-30T00:00:00.000
{ "year": 2023, "sha1": "fbb0284470f571cb30cbb65ce1b8661472666135", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "fbb0284470f571cb30cbb65ce1b8661472666135", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
257683851
pes2o/s2orc
v3-fos-license
Autoimmune encephalitis with mGluR5 antibodies: A case series from China and review of the literature Background Only 15 patients of autoimmune encephalitis with metabotropic glutamate receptor 5 (mGluR5) antibodies have been reported worldwide since 2011, mostly from western countries. Patients with different genetic backgrounds are necessary to further clarify the clinical phenotype and prognosis of this rare disease. Objective We initially describe a case series from China to confirm the previous findings, expand the clinical phenotype, and identify the prognostic factors of autoimmune encephalitis with mGluR5 antibodies. Methods Observational data with follow-up were prospectively collected from autoimmune encephalitis patients with mGluR5 antibodies. Clinical information and outcomes on current and previously reported cases were combined and analyzed. Results We identified five patients (median age 35 years); two were female. The main clinical manifestations were behavioral/personality changes (five of five, 100%) and cognitive disorders (four of five, 80%), accompanied with other neurologic symptoms. Hypoventilation occurred in two (40%) patients, which was life-threatening. One patient had meningoencephalitis, suggesting a new phenotype in anti-mGluR5 encephalitis. All patients received immunotherapy. At the last follow-up (median 18 months), two (40%) patients showed complete recovery, two (40%) patients showed partial recovery, and one (20%) patient died. One (20%) patient had multiple relapses. Together with the 15 previously reported cases, associated tumors occurred in seven of 12 (58%) Western patients vs. one of eight (13%) Chinese patients. Modified Rankin Scale (mRS) scores at the last follow-up (median 31 months) were available in 16 patients. Patients with bad outcomes (mRS > 2, n = 4) were more likely to have hypoventilation at onset and higher mRS scores at peak of the disease. Conclusions In patients with different genetic background, as Chinese, the clinical phenotype of anti-mGluR5 encephalitis is similar. Fewer paraneoplastic cases were observed in Chinese patients. Most patients showed good responses to immunotherapy and cancer treatment. The clinical outcomes were favorable in most patients. Introduction Metabotropic glutamate receptors (mGluR) are G-protein coupled receptors activated by the binding of glutamate, the main excitatory neurotransmitter of the nervous system (1). Eight subtypes of mGluR (mGluR1-8) have been cloned and classified into three groups based on their molecular, pharmacological, and signaling properties (1). Group I mGluRs, including mGluR1 and mGluR5, are implicated in diverse processes such as learning, memory, epilepsy, and pain (2). In the past two decades, mGluR1 (3), mGluR5 (4), and mGluR2 (5) have been identified as targets of antibodies in autoimmune neurologic disorders of different characterization. Autoimmune encephalitis with mGluR5 antibodies was first reported in two patients with limbic encephalitis and Hodgkin's lymphoma (Ophelia syndrome) in 2011 (4). A recent study in an animal model has confirmed the pathogenicity of mGluR5 antibodies, which causes a reduction of mGluR5 clusters in neurons (6). However, as far as we know, since 2011, only 15 autoimmune encephalitis patients with mGluR5 antibodies from 8 studies have been reported worldwide, including three case reports from China (4,(7)(8)(9)(10)(11)(12)(13). 
Given the rarity of anti-mGluR5 encephalitis, additional studies in patients with different genetic backgrounds are necessary to further clarify the clinical spectrum and the disease prognosis. To confirm the previous findings, expand the clinical phenotype of anti-mGluR5 encephalitis, and report the neurologic outcome, we initially describe a case series of five newly identified patients with anti-mGluR5 encephalitis from China in the current study. We also review the previously reported cases in the literature along with the current data and identify the prognostic factors of anti-mGluR5 encephalitis for the first time. Study design and identification of patients This single-center observational study was registered (registration number: ChiCTR1800019762) on the World Health Organization international clinical trial registry platform. Between June 2019 and January 2022, cerebrospinal fluid (CSF) and sera from 2995 patients with suspected autoimmune encephalitis from the Department of Neurology, West China Hospital were investigated with cell-based assay (CBA; Shaanxi MYBiotech Co., Ltd.) for antibodies to mGluR5 with techniques described in the next section. All samples were taken before immunotherapy. These samples were also screened for other neuronal targets [NMDAR, CASPR2, LGI1, AMPAR1, AMPAR2, and GABA B R (Euroimmun, Lübeck, Germany); GABA A Ra1, GABA A Rg2, GABA A Rb3, GlyRa1, DPPX, IgLON5, mGluR1, dopamin2 receptor, and neurexin-3a (Shaanxi MYBiotech Co., Ltd.)] with CBA and for onconeural antibodies (Hu, Yo, Ri, Ma2, CV2/CRMP5, amphiphysin, Tr, SOX1, Titin, Zic4, Recoverin, and GAD65) with immunoblot analysis (Euroimmun, Lübeck, Germany). Patients were prospectively recruited in the series once the following criteria were met: (1) acute or subacute onset (rapid progression of fewer than three months) of at least one of the following symptoms: behavioral or personality changes, cognitive deficits, sleep disturbances, seizures, decreased level of consciousness, and movement disorders; (2) positive mGluR5 antibodies testing in serum and/or CSF; and (3) reasonable exclusion of other neurological or systemic diseases (7). Clinical information, including prodromal symptoms, clinical manifestations at the acute phase, auxiliary examinations results (CSF analyses, electroencephalogram [EEG], and brain magnetic resonance imaging [MRI]), tumor association, treatment strategies, and treatment responses were prospectively obtained from the medical record system and standardized questionnaires completed by experienced neurologists (Z.H. and D.Z.) after face-to-face interviews. Follow-up information, including neurologic relapse, residual symptoms, and clinical outcomes, were obtained from regular follow-up at the neurology clinic in West China Hospital. The modified Rankin Scale (mRS) was used to assess symptom severity at peak of the disease and each follow-up, and clinical outcome at the last follow-up. Outcomes were classified as lack of improvement, partial recovery (if patients had significant improvement but were not back to the baseline condition), complete recovery (if patients returned to the baseline condition), or death (7). Patients with an mRS score ≤ 2 at the last follow-up were also considered to have a good outcome, otherwise to have a bad outcome (14). Relapse was defined as the new onset or worsening of neurologic symptoms after at least two months of the initial improvement or stabilization (14). 
mGluR5 antibodies CBA To identify mGluR5 antibodies in new patients, human mGluR5 was cloned into pcDNA3.1 vectors. HEK293 cells were used to transfect with the pcDNA3.1 vectors. After 24 hours of transfection, the cells were then fixed with 4% paraformaldehyde for five minutes, washed in phosphate-buffered saline (PBS)-0.1% Tween 20, and ready for antibody detection. Serum with dilution at 1:10 in 0.4% PBS Triton X-100 and CSF without dilution was utilized to incubate cells for one hour at room temperature. Cells were then washed in PBS-0.1% Tween 20 for three times, and incubated at room temperature for 30 minutes with FITC-goat antihuman immunoglobulin G (IgG) (Cat#109-095-170, Jackson ImmunoReaserch, PA, USA), washed again in PBS-0.1% Tween 20, and then determined the reactivity by comparing with HEK293 cells transfected with empty vector plasmids and incubated with the samples from the same patient, using immunofluorescence microscopy by two investigators independently. The antibody titers were determined using serial dilutions (from 1:10 to 1:1000) of patient's serum and CSF once the samples were positive. The titers were defined as the highest dilution for which the specific fluorescence of HEK293 cells was still visible. The IgG subclasses of patient's antibodies were also determined in two newly identified patients with mGluR5 antibodies using anti-subclass antibodies specific for IgG1, IgG2, IgG3, and IgG4. Review of previously reported autoimmune encephalitis cases with mGluR5 antibodies We searched PubMed and web of science for all research published in English between October 2011 and December 2022, using the search terms [(mGluR5 OR metabotropic glutamate receptor 5 OR anti-mGluR5 OR anti-metabotropic glutamate receptor 5 OR mGluR5-antibody) and (encephalitis OR autoimmune encephalitis OR Ophelia syndrome)] to identify the previously reported autoimmune encephalitis cases with mGluR5 antibodies in human. To analyze the clinical features, auxiliary examinations results, tumor association, and treatment response, and assess factors potentially associated with the clinical outcomes, current data were reviewed along with the previously reported cases with anti-mGluR5 encephalitis when the following information was available: (1) clinical features at onset and severity (the mRS scores) at the peak of disease; (2) CSF and/or MRI results (including IgG subclasses information available in nine patients); and (3) treatment strategy, clinical outcome, and the mRS scores at the last follow-up. Standard protocol approvals and patient consents This study was approved by the Research Ethics Committee of the Medical School of Sichuan University. Informed consent was obtained from each patient for using their medical records and samples. All data in the study were strictly anonymous. Statistical analyses Statistical analyses and figures were performed with SPSS version 25.0 (IBM Corp., Armonk, NY), GraphPad Prism 8, and R software version 3.6.0. Continuous variables were analyzed using the Mann-Whitney U test and shown as medians (interquartile range [IQR]), while categorical variables were analyzed using Fisher's exact test and shown as frequencies (proportions). Variables potentially associated with the clinical outcomes were compared between patients with bad outcomes vs. good outcomes to identify the prognostic factors. Odds ratios (OR) and medians (using the Hodges-Lehmann method (15,16)) were used to estimate the difference between the two groups. 
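Purely as an illustration of the estimators and tests just mentioned (the numbers below are invented and are not the study data; the study itself used SPSS, GraphPad Prism and R, while Python/SciPy is used here only for the sketch), a minimal example of these computations might look as follows:

```python
import numpy as np
from itertools import product
from scipy import stats

def odds_ratio(a, b, c, d):
    """Crude odds ratio for the 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

def hodges_lehmann(x, y):
    """Hodges-Lehmann estimate of the shift between two samples:
    the median of all pairwise differences x_i - y_j."""
    return float(np.median([xi - yj for xi, yj in product(x, y)]))

# Invented 2x2 table and score vectors, for illustration only.
table = [[3, 1], [2, 10]]
print(odds_ratio(3, 1, 2, 10))                # crude odds ratio
print(stats.fisher_exact(table))              # Fisher's exact test for categorical variables

mrs_bad = [5, 5, 4, 5]                        # hypothetical peak mRS scores, bad-outcome group
mrs_good = [4, 3, 4, 4, 5, 3, 4, 4, 4, 3, 4, 4]
print(hodges_lehmann(mrs_bad, mrs_good))      # median of pairwise differences between groups
print(stats.mannwhitneyu(mrs_bad, mrs_good))  # Mann-Whitney U test for continuous variables
```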
The corresponding 95% confidence intervals (CIs) of the rate difference and median difference were reported. Two-sided pvalues of <0.05 were regarded as statistically significant. Given the limited cases of patients and the exploratory nature of the study, multivariate analyses were not performed. Results Clinical features, results from auxiliary examinations, treatment, and clinical outcomes Five new patients (patient 1 to patient 5) with mGluR5 antibodies were enrolled. Demographic data, detailed clinical features, auxiliary examination findings, antibody results, treatment strategy, and clinical outcomes are shown in Table 1. The median age at onset was 35 years (range: 32-59 years). Two (40%) patients were female. All patients had prodromal symptoms, including headache, flu-like symptoms, fever, and diarrhea. All patients had behavioral/personality changes. Other encephalitic symptoms included cognitive deficits (four of five, 80%), sleep disturbances (three of five, 60%), decreased level of consciousness (two of five, 40%), movement disorders (one of five, 20%), and generalized seizures (one of five, 20%), manifesting as status epilepticus. Infrequent neurologic symptoms included aphasia, meningitis, prosopagnosia, and visual deficits. Hypoventilation occurred in two (40%) patients, and both needed intensive care at the acute phase. All patients underwent tumor screening, including serum markers and whole-body PET-CT. An associated tumor was found in one (20%) patient, patient 3 (also positive for NMDAR antibodies in serum and CSF), who had a mature ovarian teratoma. The comorbid autoimmune disorder was found in one (20%) patient, patient 1, who had autoimmune hepatitis. The median mRS score was 3 at peak of the disease. CSF abnormalities in the routine examination were observed in three (60%) patients. Three (60%) patients showed increased IgG index and two (40%) of them also showed pleocytosis. EEG revealed abnormalities in two (40%) patients. Epileptiform discharge was found in patient 3, who had status epilepticus. A diffuse slowing was observed in patient 5. Specific brain MRI abnormalities occurred in four (80%) patients, including three patients with T2/fluid- attenuated inversion recovery (FLAIR) hyperintensities involving limbic regions and extra-limbic regions, including the thalamus, brainstem, basal ganglia, and cerebellum, and one patient (patient 5) with meningeal enhancement on post-gadolinium T1-weighted imaging. To be mentioned, patient 1, who had a normal brain MRI at the onset and first relapse, found T2/FLAIR hyperintensities in bilateral hippocampi at the second relapse (28 months after the disease onset). Brain MRI of representative patients is shown in Figure 1. Brain fluorodeoxyglucose-PET was performed in patient 2, which revealed decreased metabolism of the occipital cortex. Paired samples were available from all patients. Among these samples, antibodies to mGluR5 were found both in serum and CSF in three (60%), one (20%) only in serum, and one (20%) only in CSF. IgG subclass information was available in two patients. IgG1 was found independently in patient 5 and accompanied by IgG2 in patient 1. Figure 2A shows the antibody test result of patient 1. Additional antibodies were found in two (40%) patients, including Recoverin in patient 2 and NMDAR+AMPAR in patient 3. 
All patients received first-line immunotherapy (intravenous methylprednisolone, intravenous immunoglobulin, oral prednisone), and one (20%) of them also received second-line immunotherapy (mycophenolate mofetil [MMF]) at relapse. The median time from onset to the initial immunotherapy was 14 days (range: 4-73 days). Patient 3 also had a surgical removal of the tumor. Figure 2B shows the mRS scores evaluated over the follow-up of the five patients. At the last follow-up (median 18 months), two (40%) patients showed complete recovery, two (40%) patients showed partial recovery, and one (20%) patient died in the intensive care unit after the withdrawal of the ventilator. The median mRS score at the last follow-up was 1. Repeated tumor screening per year or at relapse during the follow-up did not find newly developed tumors. Patient 1 had multiple relapses at three months and 28 months after the disease onset, respectively. Immunotherapy was still effective in all episodes in this patient.

Review of the literature

We reviewed the clinical information of 15 previously reported cases from eight studies (4, 7-13) and combined them with that of the five patients in the current study. Detailed information is summarized in Table 2. The median age at onset was 35.5 years (IQR 27.5-51.5 years); 45% of patients were female. The distribution of age at onset and sex is shown in Figure 3A. Prodromal symptoms were common, occurring in 16 of 20 (80%) patients. The most frequent neurologic symptoms were behavioral/personality changes (17 of 20, 85%), followed by cognitive deficits (15 of 20, 75%), sleep disturbances (10 of 20, 50%), seizures (10 of 20, 50%, including status epilepticus in two children and two adults), decreased level of consciousness (eight of 20, 40%), and movement disorders (six of 20, 30%). Hypoventilation occurred in three (15%) patients.

[Figure 1 caption: MRI of representative patients with anti-mGluR5 encephalitis in the current study. (A-C) Brain MRI of patient 2. Initial brain MRI at disease onset showed T2/fluid-attenuated inversion recovery hyperintensities of the right mesiotemporal lobe (A), cerebral peduncle (B), thalamus, and putamen (C). (D) Brain MRI of patient 1. Brain MRI at disease onset and at the first relapse was normal, but at the second relapse, brain MRI showed T2/fluid-attenuated inversion recovery hyperintensity of the bilateral hippocampi. (E, F) Brain MRI of patient 5. Initial brain MRI at disease onset showed diffuse T2/fluid-attenuated inversion recovery hyperintensity of the meninges and enhancement on post-gadolinium T1-weighted images.]

Six (30%) patients needed intensive care at the acute phase. The median mRS score was 4 at peak of the disease. Associated tumors were found in eight (40%) patients, including six with Hodgkin's disease, one with small cell lung cancer, and one with mature ovarian teratoma (who was also positive for NMDAR antibodies in serum and CSF). The tumor association had no significant difference between males and females (p = 0.288). We also noticed that there were seven of 12 (58%) paraneoplastic cases in Western patients vs. one of eight (13%) paraneoplastic cases in Chinese patients (six of eight Chinese patients underwent whole-body PET-CT for tumor screening), with the difference showing a tendency toward statistical significance (p = 0.07). CSF abnormalities were observed in 17 (85%) patients.
Fifteen (75%) patients showed pleocytosis [median 31, range 6-396 white blood cells/mm 3 ] and 11 of 14 (79%) patients had the oligoclonal band or increased IgG index. EEG revealed abnormalities in 10 of 19 (53%) patients, including five patients with epileptiform discharge (two children and three adults, all had seizures), four patients with diffuse slowing (three children and one adult), and one with myogenic artifact associated with a faciobrachial dystonic seizure. Specific brain MRI abnormalities at onset or relapse occurred in 12 (60%) patients. Brain fluorodeoxyglucose-PET in four patients (two with normal brain MRI) all demonstrated decreased metabolism involving the temporoparietal cortex, occipital cortex, or cerebellum. All patients had positive results to mGluR5 antibodies in CSF and/or serum. Among 13 patients with paired samples, antibodies to mGluR5 were found both in serum and CSF in nine (69%) patients. IgG subclass information was available in 11 patients, shown in Figure 3B. Additional antibodies were found in five (25%) patients, including SOX1, Recoverin, LGI1, NMDAR+AMPAR, and NMDAR+MOG, respectively. Seventeen (85%) patients received first-line immunotherapy and four of them also received second-line immunotherapy. All patients with associated tumors received chemotherapy, radiotherapy, or surgical removal as cancer treatment. At the last follow-up (median 20 months), 10 (50%) patients showed complete recovery, nine (45%) patients showed partial recovery, and one (5%) patient died. Neurologic relapse occurred in three (15%) patients. The median time from onset to the first relapse was 16 months (range 3-30 months). Immunotherapy (and cancer treatment) was still effective at all episodes among patients with relapses. The mRS scores information at the last follow-up (median 31 months) was available in 16 patients, including 12 (75%) with good outcomes and four (25%) with bad outcomes ( Table 3). The median mRS score at the last follow-up was 1, significantly decreasing from the median mRS score of 4 at peak of the disease ( Figure 3C, p<0.001). Compared to patients with good outcomes, patients with bad outcomes had higher frequency of hypoventilation (75% vs. 0; OR 58.3, 95% CI 1.92, 1771; p = 0.02) and higher severity at peak of the disease, reflected by higher mRS score (median 5 vs. 4; diff median 1, 95% CI 0, 2; p = 0.042). There were no significant differences in the demographic features, symptoms other than hypoventilation, comorbid tumors, CSF testing results, brain MRI abnormalities, and immunotherapy strategy between the two groups. Discussion This study described a case series of anti-mGluR5 encephalitis from China, which aim to confirm the previous findings, expand the clinical phenotype of anti-mGluR5 encephalitis, and identify the prognostic factors of clinical outcome. All except one patient in our case series were in their 30s, which agrees with the previous study (7). Unlike anti-NMDAR encephalitis, that occurs predominantly in females, we observed no gender difference in mGluR5 encephalitis, confirming the previous findings (7,14). All patients in our case series had prodromal symptoms preceding the neurologic symptoms and developed behavioral/personality changes, mood changes, or psychiatric symptoms, ranging from negative symptoms such as apathy and decreased verbal output to positive symptoms such as behavioral changes with irritability, mania, and hallucination. 
Besides, cognitive deficit is another prominent symptom in our case series, which occurred in 80% of patients. Most of them had one or multiple cognitive domain impairments, and memory loss was the most common. These results were similar to the previous findings (7). A recent study has revealed that mGluR5 antibodies can reduce the level of mGluR5 in the hippocampus, causing memory loss and anxiety in mice, which was in agreement with the clinical phenotype of anti-mGluR5 encephalitis (6). Other common neurologic symptoms include sleep disturbances, seizures, decreased level of consciousness, and movement disorders, which aligns with previous studies (4, 7-9). It is notable that a patient in our case series showed symptoms of both meningitis and encephalitis. Brain MRI revealed diffuse dura mater enhancement on contrast-enhanced T1-weighted imaging. Thorough evaluations for infectious, rheumatic, and malignant etiologies were performed to rule out other differential diagnoses, and no evidence of additional neuronal antibodies was found. Therefore, meningoencephalitis should be considered a new phenotype of anti-mGluR5 encephalitis, expanding the clinical spectrum. Interestingly, a recent case report described a patient with mGluR5 antibody-associated Guillain-Barré syndrome without other neuropsychiatric symptoms (not reviewed in this study) (17). Given the extreme rarity of the disease, further studies of future cases should be conducted to clarify whether the term "anti-mGluR5 encephalitis" should be replaced by "mGluR5 antibody-associated disease" to define the disease more appropriately. Although most patients in our case series showed pleocytosis or increased IgG index in CSF analysis, two patients had normal CSF findings in routine examination, as reported for some patients with anti-mGluR5 encephalitis in the previous study and for other types of autoimmune encephalitis (e.g., mGluR1, NMDAR) (12,14,18). Therefore, mGluR5 antibodies should be screened in suspected patients despite normal results in routine CSF examination. Brain MRI abnormalities at onset or relapse were found in 80% of patients in our case series, which is more frequent than that reported by Spatola et al. (7). As described in the previous study (7), although anti-mGluR5 encephalitis was always considered a form of limbic encephalitis, extra-limbic lesions could also be involved independently or combined with limbic lesions on brain MRI. Notably, patient 1 in our case series, who had a normal brain MRI at disease onset, showed bilateral hippocampi lesions on repeat brain MRI at the second relapse. Therefore, a repeat brain MRI during follow-up is important, especially in patients who are suspected of having a relapse of the disease. Besides, patients in the current study and the previous study who underwent a brain fluorodeoxyglucose-PET all showed decreased metabolism of the cortex or cerebellum, suggesting the potential diagnostic value of brain fluorodeoxyglucose-PET, especially in MRI-negative patients (7). We noticed that additional antibodies were found in the samples of two patients in our case series, as observed in many other types of autoimmune encephalitis, such as anti-AMPAR encephalitis and anti-GABABR encephalitis (19,20).
Taken together with the previous studies, five patients with anti-mGluR5 encephalitis had additional antibodies, and all showed atypical symptoms: the patient with SOX1 antibodies had progressive ophthalmoplegia (7), the patient with Recoverin antibodies had visual deficits and confirmed retinopathy, the patient with NMDAR+AMPAR antibodies had refractory status epilepticus, the patient with NMDAR+MOG antibodies had cerebral cortical encephalitis (11), and the patient with LGI1 antibodies had faciobrachial dystonic seizures (12). In some of these cases, additional antibodies were pathogenically relevant, resulting in complex clinical pictures (e.g., Recoverin antibodies and retinopathy (21), MOG antibodies and cerebral cortical encephalitis (22), and LGI1 antibodies and faciobrachial dystonic seizures (23)). Similar findings have been reported in other types of autoimmune encephalitis, such as anti-AMPAR encephalitis (20). It is worth mentioning that the clinical implication of coexisting Recoverin antibodies in patient 2, in whom no associated tumor was found, should be interpreted with caution, as there may be a risk of a false positive test result. Future studies are required to determine which antibody plays the major pathogenic role in patients with multiple antibodies. Over half of the Western patients with anti-mGluR5 encephalitis had associated tumors (mainly Hodgkin's disease); however, together with our data, only 13% of Chinese patients had associated tumors, and none of them had Hodgkin's disease at the last follow-up. The difference in the frequency of paraneoplastic cases between Western and Chinese patients closely approaches statistical significance (p = 0.07), most likely due to the limited sample size. One of the possible reasons is the difference in genetic background between races, as we reported in our previous studies of anti-NMDAR encephalitis and anti-CASPR2 encephalitis (24, 25). This finding should be confirmed in future cases with longer follow-up, since tumors can be occult and diagnosed years after recovery from autoimmune encephalitis in some patients (26). It is worth noting that the only Chinese patient with an associated tumor in our case series, who had a mature teratoma, had coexisting NMDAR antibodies. It is well known that NMDAR antibodies have been reported to be associated with teratoma in previous studies. Besides, the patient also had a clinical phenotype of anti-NMDAR encephalitis (27). Therefore, mGluR5 antibodies may not be the major pathogenic ones responsible for the clinical picture in this patient. In agreement with the previous studies, most patients with anti-mGluR5 encephalitis in our case series showed good responses to immunotherapy. Combined with the previous data, all except one patient with anti-mGluR5 encephalitis showed complete or partial recovery at the last follow-up. Three-quarters of patients (with available information) had good outcomes at the last follow-up. The mortality was 5% among all cases, close to the clinical outcome observed in anti-NMDAR encephalitis (14,24). The prognostic factors of bad clinical outcomes were hypoventilation during the acute episode and higher mRS scores at the peak of the disease. However, no significant difference was detected in the frequency of tumor association between the good outcome group and the bad outcome group, as also observed in anti-NMDAR encephalitis (14,24).
Together with the previous cases, relapses were observed in 15% of patients with anti-mGluR5 encephalitis, as reported in anti-NMDAR encephalitis (14,24). Besides, we first reported a patient with multiple relapses, who had clinical improvement after reinitiating immunotherapy in each relapse as at the first episode. Therefore, the importance of a long-term follow-up in patients with anti-mGluR5 encephalitis should be highlighted, as suggested by Spatola et al. (7) To be mentioned, combined with the previous data, only four patients received second-line immunotherapy, such as RTX at the acute phase or MMF as maintenance therapy. Previous studies in anti-NMDAR and anti-AMPAR encephalitis had revealed the association between lower relapse risk and aggressive treatment (14,20). This association is awaiting further validation in more extensive prospective cohort studies of anti-mGluR5 encephalitis. Our study has several limitations. First, given the rarity of this disease and the consequent small sample size, multivariate analyses were not performed to determine the independent prognostic factors of clinical outcome, and the effect of immunotherapy could not be assessed in more detail. Second, some meaningful information was not available from the previous studies, such as the time from onset to immunotherapy and the time from immunotherapy to clinical improvement. Therefore, some factors potentially associated with the clinical outcome could not be analyzed (28). Besides, some patients had relatively short followup, which may cause an underestimate of the number of patients with relapses or associated tumors. In addition, the retrospective analysis of the current and previous studies increased the risk of bias, especially when analyzing the response to immunotherapy and the strategy of immunotherapy. Future prospective multi-center studies in larger cohorts with longer follow-ups should be conducted to provide more information about this rare disease. Despite the limitations, we have described five newly identified patients of anti-mGluR5 encephalitis from China, which has provided more evidence to support the previous findings and expanded the clinical spectrum of anti-mGluR5 encephalitis. Besides, we found a lower frequency of paraneoplastic cases in Chinese patients than in Western patients. We also reviewed all reported cases with anti-mGluR5 encephalitis so far and determined the prognostic factors of clinical outcome, which may promote a better understanding of the prognosis of this rare disease. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement The studies involving human participants were reviewed and approved by the Ethics Committee of West China Hospital of Sichuan University (Approved No. of ethic committee: 292). The patients/participants provided their written informed consent to participate in this study. Author contributions KG conceptualized and designed the study, collected the data, carried out the statistical analysis, interpreted the data, and drafted the manuscript. XuL, XG, AL, YL, and XiL collected the data. DZ revised the manuscript. ZH conceptualized and designed the study and revised the manuscript. All authors contributed to the article and approved the submitted version. Funding This study was supported by the National Key R&D Program of China (2022YFC2503800) and the National Natural Science Foundation of China (81971213).
2023-03-23T15:20:07.920Z
2023-03-21T00:00:00.000
{ "year": 2023, "sha1": "6fae0cd2b8035e9de30bcdb25eef9792dc0438a5", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2023.1146536/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "f1349f70811012353f0a0ca5b7dcce0d3cf4b945", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [] }
2309317
pes2o/s2orc
v3-fos-license
Aphid Parasitoid Mothers Don't Always Know Best through the Whole Host Selection Process Parasitoid host selection behaviour has been extensively studied in experimentally simplified tritrophic systems formed by one single food chain (one plant, one herbivore and one parasitoid species). The "Mother knows best" hypothesis predicts that the preference for a plant-host complex should be positively correlated with plant quality for offspring performance. We studied the host selection behaviour of the generalist endoparasitoid Aphidius matricariae towards the black bean aphid Aphis fabae in the intercrop system including Vicia faba as a focal plant and its companion plant Camelina sativa. Dual-choice laboratory bioassays revealed that parasitoid females preferred to orientate towards (1) the plant-aphid complex over the non-infested plant whatever the complex (2) the C. sativa-A. fabae complex over the V. faba-A. fabae complex. In dual choice attack rate bioassays, parasitoid females showed more interest towards the aphids on C. sativa but paradoxically chose to oviposit more in aphids on V. faba. Ultimately, parasitoids that had developed on the V. faba-A. fabae complex exhibited better fitness parameters. By demonstrating that parasitoid females were able to discriminate the aphid host that offered the highest fitness to their offspring but selected beforehand the least suitable plant-aphid complex, we provide key insight into the disruption in their host selection behaviour potentially triggered by diverse habitats. This suggests that the "Mother knows best" hypothesis could be thwarted by increasing the complexity of the studied systems. Introduction The "Mother knows best" hypothesis, also known as the "preference-performance hypothesis", derives from the general optimality theory originally set for phytophagous insects [1][2][3][4], which states that female oviposition preferences should positively correlate with host suitability for offspring development (i.e. offspring survival and further adult fecundity). The "Mother knows best" hypothesis has been recently expanded to parasitoid insects [5], natural enemies of phytophagous insects. Because their larvae develop as obligatory parasites, the reproductive success of parasitoids is partly determined by the ability of females to select a suitable insect host for the development of their progeny. Physical and chemical cues associated with the insect host and/or its habitat have been shown to play important roles in the localization and selection of both the insect host and its host plant (for review, see [6][7][8][9][10][11]). During the first steps of host habitat and host location, parasitoid females may respond to volatile chemical blends produced by their host plant, to a combination of insect host and host plant odours and/or to volatiles produced by the plant in response to host feeding damage (i.e., herbivore induced-plant volatiles or HIPV). Once the parasitoid female has reached a potential insect host habitat, it begins to search for the insect host on or near the host plant and responds to chemical stimuli produced by the insect host itself or arising from its products rather than to host-plant derived products. The ability of koinobiont parasitoid (i.e. 
parasitoids whose hosts continue to feed and grow after parasitization) females to reliably predict future host quality has been demonstrated in choice studies where the preferred insect hosts were indeed the ones that allowed maximum adult parasitoid size and/or minimum development duration [12,13]. However, although it is also expected that the preferred host plant and/or plant-insect host complex will provide an advantage in terms of fitness for the parasitoid offspring, there is a lack of empirical evidence to link the orientation preferences of parasitoid females during the early steps of host selection with offspring performance [14]. Instead, most experiments have ignored the role played by the first trophic level (i.e. the plant) and focused on preference and performance by parasitoids in a bitrophic system (for review, see [8,15]). To fill this lack of knowledge, the "Mother knows best" hypothesis should be considered, for parasitoid insects, not only through the study of the entire host selection process (from the early stages of host habitat selection to the final steps of host acceptance/suitability), but also through a dual choice set up between at least two host habitats. Indeed, laboratory studies generally tend to simplify multitrophic interactions that occur in nature. Parasitoid host selection behaviour has been extensively studied using "simple" tritrophic systems formed by one single food chain that consisted of one plant, one phytophagous host and one parasitoid species [7,16,17]. Such studies often neglect the impact of complex odour bouquets whereas, in natural systems, parasitoids search for their host in habitats that are spatially and temporally diverse, and comprise various plants and herbivore communities [18]. Intercropping systems involve the culture of at least two crops in the same space and time [19] with one focal crop associated with one companion plant providing benefits such as pest/ weed control and/or increased yields. They provide an opportunity to test theoretical ecology concepts and study the effects of plant diversity on food web interactions. Thus, they represent interesting models to investigate the links between parasitoid foraging behaviour in diverse habitat and parasitoid performance. In the present work we used a laboratory approach to test the "Mother knows best" hypothesis through the study of all the steps of the host selection process by an aphid parasitoid facing two plant-host complexes. In other words, will the plant-host complex initially preferred by the parasitoid females be the one consistently preferred along all the steps of the host selection process, and will it ultimately allow the best progeny performance? To address this question, our experimental food web was formed of two crop plants, the broad bean Vicia faba (L.) (Fabaceae) as the focal plant and the false flax Camelina sativa (L.) Crtz. (Brassicaceae) as the companion plant, the black bean aphid Aphis fabae (Homoptera: Aphididae) as the host and its parasitoid wasp Aphidius matricariae Haliday (Hymenoptera: Braconidae: Aphidiinae). Three experiments were set, taking into account each of the hierarchical steps occurring along the entire host selection process: 1. The first steps of host selection (host habitat and host location) were studied by assessing the preferences of A. matricariae females for a host plant (V. faba or C. sativa), depending on its status (infested or not by A. fabae). 2. Host recognition and acceptance of A. fabae by A. 
matricariae females were evaluated on both plants through an attack rate bioassay (dual choice experiment). 3. Host suitability/regulation was assessed by comparing the fitness of the parasitoid offspring that developed on each of the two plant-host complexes. Study system Camelina (C. sativa) is a Brassicaceae which was an important cultivated oil crop in temperate Europe until the nineteenth century [20]. It has recently been re-introduced because its oil offers good opportunities not only as a biofuel but also as functional food due to its exceptionally high levels of omega-3 fatty acids [21]. Camelina is generally reported to be tolerant and resistant to various pathogens and insects [22,23], although it has recently been shown to be a potential host for some aphid pests [24]. The faba bean (V. faba) is widely grown under a range of climatic conditions from temperate to subtropical where it hosts a wide variety of insect pests [25]. Camelina can be used as a companion plant in intercropping systems with faba bean, for weed control [26]. The black bean aphid A. fabae is a polyphagous species with a wide host range including Fabaceae but also Brassicaceae plants [27], favouring host plant alternation. This aphid is one of the most damaging pests of faba bean plants. It causes direct damage by phloem feeding, which results in significant impairments of plant growth and yield [28], and it also acts as a vector for plant viruses [29]. The black bean aphid is a common host of the generalist and cosmopolitan hymenopteran parasitoid wasp A. matricariae that uses more than a hundred different aphid species as hosts [30] and is commercially available for biological control. The colony of Aphis fabae was initiated from a single apterous parthenogenetic female (provided in 2012 by Gembloux Agro-Bio-Tech, Belgium) and was maintained in ventilated plastic cages (360 x 240 x 110 mm) in growth chambers under controlled conditions (20 ± 1°C, 60 ± 5% relative humidity, and a 16L:8D photoperiod at 2 klux) to induce parthenogenesis. The two plant-aphid combinations used in the study were obtained by mass-rearing of A. fabae either on V. faba or C. sativa plants. Cohorts of synchronized A. fabae nymphs were reared on plantlets of each of the two host plants under the controlled conditions described above until they were used for the experiments. They were obtained by a procedure consisting in placing parthenogenetic adult females on plantlets for 24 hours before removing them. Three-day-old aphids (second instar larvae) were randomly selected as hosts for all bioassays. Aphidius matricariae (Haliday) parasitoids (Hymenoptera: Aphidiidae) were obtained from Viridaxis, Gosselie (Belgium) as mummies. Attention was paid to ensure the use of a commercial line of A. matricariae that would have been reared neither on the aphid A. fabae nor on any of the plant species used in the study. Upon reception, mummies were transferred to plastic tubes (75 x 13 mm) closed with a cotton plug. Once emerged, parasitoids were sexed and mating was allowed by grouping three to four males with six to seven females in the same tube. They were fed ad libitum with a 1:1 honey/water (v/v) solution until used for the experiments. Parasitoids were maintained in a climate room at 20 ± 1°C, 60 ± 5% relative humidity, and a 16L:8D photoperiod. Three-day-old standardized parasitoid females (mated, fed and without oviposition experience) were randomly selected for the laboratory experiments. 
Bioassay 1: Habitat and host plant localization The aim of bioassay 1 was to determine the preference of A. matricariae females in dual choice tests using different combinations of the two plants. The experimental setup used was modified from [31]. It consisted of four ventilated plastic chambers (360 x 240 x 110 mm) used simultaneously, inside which V. faba and C. sativa plants were placed on opposite sides of the chamber (Fig 1). In order to use potted plants of similar biomasses, one pot contained one plantlet of V. faba whereas the other pot contained three plantlets of C. sativa. This ensured that parasitoid females were exposed to similar amounts of volatile organic compounds (VOCs) potentially emitted by each host plant. To limit possible biases from the environment around the chambers, the relative position of the two plants in the ventilated chambers was inverted between replicates. Chambers were randomly positioned within the room where they received homogenous light from above. Four combinations were randomly tested: 1) non-infested C. sativa vs. non-infested V. faba; 2) A. fabae-infested C. sativa vs. non-infested V. faba; 3) non-infested C. sativa vs. A. fabae-infested V. faba; 4) A. fabae-infested C. sativa vs. A. fabae-infested V. faba. An infested plant was obtained by placing 20 A. fabae neonates on the plantlet for 72 hours prior to the test, to ensure induction of plant responses [32]. A single standardized A. matricariae female was placed with a small paintbrush on a take-off platform (i.e. Petri dish lid, 50 mm in diameter) in the centre of the experimental setup. Female parasitoids were continuously observed until they made a first choice (i.e. landing on either plant) or for a maximum of twenty minutes after introduction. This duration was chosen as preliminary tests showed that 50% of the females responded within 5 minutes. Only females that landed on one of the two host-plant species were considered as responding parasitoids. Times from introduction to first choice by responding females were recorded (latency time). The females that were not found on any plant, i.e. that were on inner walls or ground of the experimental chamber, were considered as non-responding parasitoids. Females that did not leave the take-off platform within 20 minutes were discarded. Thirty-eight to ninety-one replicates per treatment were performed. All experiments were conducted at 20 ± 1°C and 60 ± 5% relative humidity. Bioassay 2: Host recognition and acceptance The aim of bioassay 2 was to evaluate how A. matricariae females would locate and evaluate their host after habitat location had been achieved. In a choice test, the attack rate and the success of attack of A. matricariae females were measured on aphid hosts previously reared on each plant. Attack rate arenas were adapted from [33] and consisted of 90 mm diameter plastic Petri dishes (Gosselin, Hazebrouck, France), containing one leaf of V. faba and one leaf of C. sativa embedded in 1.5% agar (Prolabo, Louvain, Belgium) and separated from each other by a distance of 2 cm. Although they were not found significantly different at the 5% level, leaf surfaces slightly differed (C. sativa: 6.066 ± 0.250 cm²; V. faba: 6.814 ± 0.268 cm², Mann-Whitney rank-sum test, U = 76, P = 0.053). Ten three-day-old A. fabae were deposited onto each leaf 24 hours prior to the experiment. A. fabae reared on V. faba were deposited on V. faba leaves and A. fabae reared on C. sativa were deposited on C. sativa leaves. One standardized A.
matricariae female was carefully introduced inside the attack rate arena. Observations started immediately and lasted for 10 min. The time before the first recorded behavioural item (latency time), the first choice (aphid patch which was reached first by the parasitoid) and the second choice were noted. Then, the frequency and the sequence of the following behavioural items were recorded (AE: Antennal Examination, AB: Abdomen Bending and OI: Ovipositor Insertion) [9]. An ovipositor insertion was recorded whenever a parasitoid female made physical contact with an aphid using its ovipositor, while exhibiting an oviposition stance (Abdomen Bending). For the analysis, the behaviours were regarded as a series of events and only the final event was recorded (e.g. if a wasp had antennated an aphid and then bent its abdomen, this was noted as AB and not as AE). Each contact between a wasp and an aphid was classified in only one of the above categories. Even if the number of the different behavioural items might not be affected, transition frequencies between behaviours may change [34]. Therefore, all behavioural items and their sequential order were recorded and computed into an ethogram. To assess the proportion of Ovipositor Insertion (OI) resulting in true oviposition (OV), all stung aphids were dissected in a drop of NaCl solution (9 ‰) under a stereomicroscope to calculate the rate of oviposition (%_OV = Number of true oviposition (OV) / Number of ovipositor insertion (OI)). Thirty replicates in total were made and all experiments were conducted at 20 ± 1°C and 60 ± 5% relative humidity. Bioassay 3: Host suitability The aim of bioassay 3 was to determine the effects of the plant-aphid complexes, either C sativa-A. fabae or V. faba-A. fabae, on the fitness of the parasitoid progeny. Preliminary experiments had been conducted in order to evaluate the potential effect of aphid treatment (i.e. previously reared on C. sativa or V. faba) on the probability of laying an egg in each attacked host (probability of true oviposition) under experimental conditions of controlled oviposition. The controlled oviposition procedure consisted of placing a single standardized A. matricariae female (i.e., mated, fed, and without oviposition experience) with a single three-day-old A. fabae nymph in a small Eppendorf tube (0.5 ml), as described in [35]. Each parasitoid female was only used once. For each host plant, 24 A. fabae coming from each rearing plant were dissected under a stereomicroscope immediately after ovipositor insertion by A. matricariae to determine the presence or absence of a parasitoid egg. The frequency of true oviposition (%_OV) (87.50% for aphids reared on V. faba and 83.33% for those reared on C. sativa) was not significantly affected by the host-plant species on which aphids had been previously reared (Fisher's exact test, P > 0.80). Prior to the oviposition procedure, each three-day-old A. fabae nymph was measured under a stereomicroscope (LEICA M165C) from the tip of the head to the base of the cauda. After being stung by a parasitoid female, each aphid nymph was then individually placed back onto its host plant in a clip-cage under the controlled conditions described above. It was followed and observed daily until death or formation of a mummy. In the latter case, it was measured as described above and transferred to a plastic tube (75 x 13 mm) closed with a cotton plug. 
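The oviposition-rate calculation and the Fisher's exact comparison described above can be summarised in a few lines of code. The sketch below is illustrative only: the original analyses were run in R (see the Statistical analysis section below), the Python/scipy calls are a convenient stand-in, and the counts are back-calculated from the reported percentages (87.50% and 83.33% of n = 24).

```python
# A minimal sketch (not the authors' original R code) of the %_OV calculation and the
# Fisher's exact test used to compare oviposition frequencies between host plants.
# Counts are back-calculated from the reported percentages (87.50% and 83.33% of n = 24)
# and should be treated as illustrative.
from scipy.stats import fisher_exact

def oviposition_rate(n_true_ovipositions: int, n_ovipositor_insertions: int) -> float:
    """%_OV = number of true ovipositions (OV) / number of ovipositor insertions (OI)."""
    return 100.0 * n_true_ovipositions / n_ovipositor_insertions

n = 24
ov_faba = 21       # true ovipositions among 24 dissected aphids reared on V. faba
ov_camelina = 20   # true ovipositions among 24 dissected aphids reared on C. sativa

print(f"%_OV on V. faba:   {oviposition_rate(ov_faba, n):.2f}%")
print(f"%_OV on C. sativa: {oviposition_rate(ov_camelina, n):.2f}%")

# 2 x 2 contingency table: rows = host plant, columns = (oviposition, no oviposition)
table = [[ov_faba, n - ov_faba], [ov_camelina, n - ov_camelina]]
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, P = {p_value:.2f}")
```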
Emerged parasitoids were sexed and females were fed ad libitum with a 1:1 honey/water solution for three days to ensure they had reached their fecundity peak [36]. They were then stored at -80°C for further measurements. The tibia length of females, used as a proxy for parasitoid size, was measured as described above for aphids. Females were dissected into a drop of NaCl solution (9 ‰) to collect their ovaries and the total number of eggs present in the two ovaries was recorded. The following parasitoids' life-history parameters were computed: 1) Pre-nymphal developmental time (from oviposition to mummification) in days; 2) Nymphal developmental time (from mummification to adult emergence) in days; 3) Total developmental time (from oviposition to adult emergence) in days; 4) Mummy size (length in mm); 5) Tibia length (in mm) of parasitoid females; 6) Egg load of parasitoid females; 7) Mummification rate (no. of mummies / no. of stung aphids) x 100; 8) Emergence rate (no. of emerged parasitoids / no. of mummies) x 100. Statistical analysis Mean values are given with their standard error of the mean (SEM). Preferences of A. matricariae females for different combinations of tested plants in Bioassay 1, the first choice in Bioassay 2 and the rate of oviposition were compared using a Chi-square test. The Kruskal-Wallis test was performed to assess the effect of plant combination on the percentage of non-responding parasitoids in Bioassay 1. Parasitoid attack rate parameters (AE, AB, OI and latency time), developmental times (pre-nymphal, nymphal and total) and mummy and tibia lengths were compared using Student t-tests for independent samples. Aphid sizes were compared using Student t-tests, by a randomized selection of fifty A. fabae reared on each host plant. Mummification rate, emergence rate and sequential analysis were compared using Fisher's exact tests. All statistical analyses were carried out using the statistical program 'R' version 3.1.0 [37]. Latency times (seconds; mean ± SEM) were not significantly different: non-infested C. sativa Bioassay 2: Host recognition and acceptance The first choice was significantly in favour of A. fabae on C. sativa ( Table 1). The number of Antennal Examination, Abdomen Bending and Ovipositor Insertion was significantly greater on A. fabae on C. sativa (Table 1, S5 Table). Latency time was not significantly different between A. fabae reared on C. sativa and aphids reared on V. faba ( Table 1). Out of the 21 females that first chose the A. fabae on C. sativa, 10 left this patch and moved to A. fabae on V. faba (47.62%). Conversely, out of the nine females that first chose A. fabae on V. faba, three left this patch and moved to A. fabae on C. sativa (33.33%). No significant difference was found between these two percentages (Fisher's exact test, P = 0.73). When considering all the behavioural items on C. sativa (n = 214), only 4.67% of these realised items were followed by a shift on V. faba (S6 Table). Conversely 6.60% of the items on V. faba (n = 91) were followed by a shift on C. sativa. These percentages were not significantly different (χ 2 = 0.33, P = 0.57). The probability that Abdomen Bending (AB) was followed by Antennal Examination (AE) was four times greater on the C. sativa-A. fabae complex (Fisher's exact test, P = 0.022), whereas on the V. faba-A. fabae complex, it was more likely to be followed by Ovipositor Insertion (OI) (Fig 3). Bioassay 3: Host suitability Three-day old A. fabae nymphs had a significantly smaller size when reared on C. 
sativa compared to V. faba (mm; mean ± SEM) (0.80 ± 0.02 and 0.94 ± 0.02, respectively, Student t-test, t = -5.73; P < 0.001; S7 Table). Pre-nymphal and total developmental times of A. matricariae parasitoids were significantly longer (ca. one day) on C. sativa than on V. faba, whereas no difference was found for nymphal developmental time ( Table 2). On C. sativa, parasitoid mummification and emergence rates were significantly lower (ca. 50%) than on V. faba (Table 2). Mummy length and parasitoid size (hind tibia length) were significantly smaller on C. sativa than on V. faba. No significant difference was found between C. sativa and V. faba for A. matricariae females egg load. Discussion We showed that A. matricariae females exhibited an initial preference for the plant-aphid complex that would not allow the best progeny performance, consequently invalidating the "Mother knows best" hypothesis. Indeed, in our study, A. matricariae females preferred to orientate towards the C. sativa-A. fabae complex and showed a greater interest (Antennation Examination and Abdomen Bending) for aphids on camelina whereas aphids were more readily accepted and suitable for parasitoid development when reared on V. faba. Such paradoxical choice, opposing optimal foraging and optimal oviposition, can easily be explained in phytophagous insects where females may choose to feed and oviposit on hosts that enhance their own adult performance (realised fecundity) but not their offspring performance (survival and development time) [38]. This was empirically validated in phytophagous grass miner females [39]. It has also been reported in the generalist parasitoid Aphidius ervi whose females were preferentially attracted by third and fourth instars hosts while their offspring performance was maximized on second instars hosts [5]. However, these studies testing the optimal oviposition theory have explored a direct measurement of parasitoid preference that involved only two trophic levels (i.e. herbivore and parasitoid) and therefore excluded the first trophic level (i.e. the plant). Few studies have included the first trophic level in their evaluation of the impact of the host plant quality on the preference and performance of parasitoid wasps. Indeed, [40] and [41] reported that the specialist parasitoid Cotesia glomerata preferred to alight on Brassica nigra-Pieris brassicae complexes that were not co-infested with cabbage root flies, and that this behaviour was correlated with offspring performance [42]. In addition, plant preferences by specialist parasitoids Diadegma semiclausum and C. glomerata for their respective lepidopteran hosts, Plutella xylostella and P. brassicae, were positively correlated with plant quality for offspring performance, which led the authors to state that "Mother knows best" [43]. Indeed, they showed that parasitoid wasps could innately predict host quality on the basis of plant odours. Conversely to these studies on tri-trophic systems, our work is not in accordance with the "Mother knows best" hypothesis. Explanations for the mismatch between mother preference and host suitability can include threat of hyperparasitoids, host defence, and learning of planthost complex cues [5]. In our study we used naïve parasitoid females, with no oviposition nor olfactive experience. 
Parasitoids are known to exhibit associative learning of volatile compounds emitted by the plant during oviposition, subsequently allowing them to select more accurately a suitable host-plant complex according to plant odour [44]. The benefits of learning ability are correlated with the variability of host resources (polyphagous) and the lifetime of the female [45]. Within the framework of our study model and given the results of the finals phases of host selection, it is expected that the disruption observed in the host selection process would decrease with the age/experience of female parasitoid. Parasitoid reproductive success is closely correlated with the female's ability to find hosts [8]; therefore, parasitoids have evolved efficient foraging strategies to locate hosts in complex environments. Many aphid parasitoids respond weakly to plant or aphid odours alone [46,47], but use synomones to locate aphid-infested plants [48,49]. Although A. matricariae did not discriminate between its two host plants, the preference for one of the two plant-host complexes could be explained by differences in volatile compound blends emitted. This generally allows parasitoids to discriminate between species of plants [47,50] and/or species of herbivores [10,32]. A meta-analysis [51] showed a strong correlation between preference and performance in oligophagous species of herbivores, but not in polyphagous species. Our results and the literature previously cited suggest that this correlation could be transferred to the third trophic level (i.e. between generalist and specialist parasitoids). The ability of parasitoids to exploit plant-derived volatiles is higher when host ranges are narrow (i.e. specialist parasitoids) [14]. A number of the compounds released are common to most plants and are referred as green leaf volatiles (GLV). However, the composition of the entire blend and the concentrations of specific compounds differ based on plant and herbivore species. Those chemicals that promote the effectiveness of natural enemies involve volatile compounds produced in response to herbivore feeding damage, so-called herbivore induced-plant volatiles (HIPV), and are known to be attractive to parasitoids and predators of arthropod herbivores [10]. In our study, A. matricariae females seemed to be sensitive to HIPV because they showed a preference for the plant-aphid complex in comparison to a non-infested plant. Similar responses by A. matricariae to plant-aphid complex were demonstrated [52] and for A. ervi [32]. The quantity of emitted and perceived plant volatiles is important for parasitoid females when searching for herbivorous hosts [53]. Therefore, the preference for the C. sativa-A. fabae complex could possibly be explained by supposing that it had a different GLV emission profile than the V. faba-A. fabae one. Indeed, GLVs were found to have positive effects on host location by parasitoids [49,54] and to be important for mediating parasitoid attraction to herbivore-damaged Brassicaceae [55]. Once host location is achieved, upon host encounter by the parasitoid female, effective detection of the host occurs during 'antennal palpation' [34] and is based on physical and chemical cues acting at short range or by contact [48]. Some behavioural items linked to host recognition (AE and AB) were enhanced on A. fabae on C. sativa, but the numbers of OI, allowing the wasp to assess host quality before oviposition, were identical on aphids reared on both plants. Nevertheless, the ethogram of A. 
matricariae attack behaviour on the two plant-aphid patches emphasizes that the host recognition step was more effective on A. fabae on V. faba, with increased transitions between AB and OI, and consequently fewer returns from AB to AE, compared to A. fabae on C. sativa. These results suggest an alteration in the host selection process on the C. sativa-A. fabae complex, which is confirmed by the lower oviposition rate measured on this complex. Host acceptance for A. matricariae females seems to be a function of stimuli firstly perceived during AE, resulting in some rejection before OI, and finally during OI, when host quality is assessed before oviposition. Although V. faba leaf area was slightly greater, it is unlikely that this difference could have had a decisive influence on the behaviour of the parasitoids. Indeed, at this spatial scale and at this stage of the host selection process, parasitoid females predominantly use cues emanating from the hosts themselves even if plants may also play a role in the host selection behaviour. Changes in host acceptance that depend on the host plant have already been recorded in other aphid parasitoids. For example in laboratory experiment, L. testaceipes oviposited into more aphids on mungbean than on cotton [56]. Overall in our study, the two plants were suitable with oviposition occurring on both species, but host acceptance of A. matricariae was enhanced on V. faba compared to C. sativa. In order to see if such a final decision to preferentially oviposit in A. fabae on V. faba was linked to a better performance of the progeny on this complex, a controlled oviposition bioassay was performed. Our results indicate that parasitoid fitness was higher when they had developed on V. faba compared to C. sativa. This is in accordance with studies in other systems where the size of the emerging solitary parasitoid, used as a fitness proxy, is correlated to "host quality" (size, age, stage and diet) (for review, see [8,57]). Indeed, in our study, aphid hosts developing on V. faba were bigger than those developing on C. sativa and therefore offered better quality for parasitoid development. This size difference observed between aphids feeding on C. sativa and V. faba could be due to different amino acid composition [58]. Indeed, the growth and fecundity of phytophagous insects are generally limited by nitrogen, in terms of quantity and quality (i.e. composition). The latter occurs because aphids lack the ability to synthesize nine 'essential' amino acids; and if the concentration of one of those is in short supply, protein synthesis and animal growth are constrained [59]. Various studies with other koinobiont parasitoids have reported that parasitoid size (which is often correlated with parasitoid fecundity) is an increasing function of host size or stage at oviposition [15], with bigger hosts usually representing a greater resource [60]. However, this link was not found in our study, in which no significant difference of parasitoid egg load was observed. The lower performance of A. matricariae on the C. sativa-A. fabae complex could also be partly explained by the presence of camelina-specific secondary compounds that may be harmful to the developing parasitoid larvae. Few studies have investigated the effects of secondary plant chemistry mediated through the host on parasitoid performance [15,16]. Camelina tissues exhibit different glucosinolates [61], Brassicaceae secondary metabolites that may negatively affect the fitness of parasitoids [16,55]. 
Moreover, the presence of Camalexin was identified in camelina [62], a secondary compound reducing the performance of Myzus persicae, a generalist aphid species [63]. Performance of the host and that of its parasitoid are often positively correlated [64] but the adverse effects of food plant characteristics on insect performance are usually less pronounced in the parasitoid than in its herbivore host. Here, camelina plants seemed to have more drastic effects on A. matricariae than on A. fabae: the overall fitness of parasitoids was reduced whereas intrinsic rate of natural increase (r m ) was equivalent [24] and only sizes were affected in aphid hosts. Ultimately, the sharp decline in parasitoids fitness could also be due to the host plant range of A. fabae hosts. Indeed, parasitoids attacking generalist hosts have been shown to be more strongly affected by the herbivore's diet than parasitoids that attack specialist hosts [64]. Conclusion These findings may have important implications for agricultural production in sustainable systems. The presence of camelina induced a disruption of the initial foraging decisions of A. matricariae females towards the V. faba-A. fabae complex, potentially impairing the top-down regulation of the black bean aphid in such an intercropping association. From the aphid perspective, camelina seemed to be an 'enemy free-space', as described by [65] stating that plants can provide an ecological refuge for herbivores by allowing them to chemically or physically escape their natural enemies. Based on the first phases of host selection by parasitoids, the C. sativa-A. fabae complex may ultimately be considered as an ecological trap for A. matricariae (i.e. a low-quality habitat that organisms prefer over superior habitats) [66]. This concept has been set for animals that make errors in habitat assessment as a result of some mismatch between the environmental cues they use to select habitats and actual habitat quality (for review, see [67]). By demonstrating that A. matricariae females are able to discriminate the aphid host that offers the highest fitness to their offspring but select beforehand the least suitable plant-aphid complex, this study provides key insight into the disruption in their host selection behaviour potentially triggered by diverse habitats. Supporting Information S1 Table. Bioassay 1: Habitat and host-plant location-Non-infested C. sativa vs. noninfested V. faba. Responses made by Aphidius matricariae females when presented with a choice between non-infested C. sativa vs. non-infested V. faba. Females that landed on either plant within 20 min were considered as "responding" females (Response = 1) whereas they were considered as "non-responding" when they left the take-off plateform but did not choose any target (Response = 0). If they did not leave the take-off plateform within 20 min they were discarded (Response = D). Times from introduction to first choice by responding females were recorded (latency time). (DOCX) S2 Table. Bioassay 1: Habitat and host-plant location-A. fabae-infested C. sativa vs. noninfested V. faba. Responses made by Aphidius matricariae females when presented with a choice between A. fabae-infested C. sativa vs. non-infested V. faba. Females that landed on either plant within 20 min were considered as "responding" females (Response = 1) whereas they were considered as "non-responding" when they left the take-off plateform but did not choose any target (Response = 0). 
If they did not leave the take-off plateform within 20 min they were discarded (Response = D). Times from introduction to first choice by responding females were recorded (latency time). (DOCX) S3 Table. Bioassay 1: Habitat and host-plant location-Non-infested C. sativa vs. A. fabaeinfested V. faba. Responses made by Aphidius matricariae females when presented with a choice between non-infested C. sativa vs. A. fabae-infested V. faba. Females that landed on either plant within 20 min were considered as "responding" females (Response = 1) whereas they were considered as "non-responding" when they left the take-off plateform but did not choose any target (Response = 0). If they did not leave the take-off plateform within 20 min they were discarded (Response = D). Times from introduction to first choice by responding females were recorded (latency time). (DOCX) S4 Table. Bioassay 1: Habitat and host-plant location-A. fabae-infested C. sativa vs. A. fabae-infested V. faba. Responses made by Aphidius matricariae females when presented with a choice between A. fabae-infested C. sativa vs. A. fabae-infested V. faba. Females that landed on either plant within 20 min were considered as "responding" females (Response = 1) whereas they were considered as "non-responding" when they left the take-off plateform but did not choose any target (Response = 0). If they did not leave the take-off plateform within 20 min they were discarded (Response = D). Times from introduction to first choice by responding females were recorded (latency time). (DOCX) S5 Table. Bioassay 2: Host recognition and acceptance behaviour of Aphidius matricariae females on Aphis fabae reared on either C. sativa or V. faba. A. matricariae females were individually tested in an attack rate bioassay where they were presented with a choice between 10 A. fabae reared on V. faba deposited on V. faba leaves and 10 A. fabae reared on C. sativa deposited on C. sativa leaves. Observation of the female wasps lasted for 10 minutes after their introduction. Different behavioural items were recorded and their frequencies are reported in the table below (AE: number of Antennal Examination, AB: number of Abdomen Bending, OI: number of Ovipositor Insertion). The time before the first recorded behavioural item (latency time) and the first choice (aphid patch which was reached first by the parasitoid) are also presented in the table. Immediately after the attack rate bioassay, all stung aphids were dissected and the number of parasitoid eggs (Eggs) recorded. (DOCX) S6 Table. Numbers and percentages of behavioural items (AE, AB, OI or All) that were followed by a shift from C. sativa to V. faba or from V. faba to C. sativa. (AE: number of Antennal Examination, AB: number of Abdomen Bending, OI: number of Ovipositor Insertion). The total numbers of behavioural items performed are indicated in brackets. (DOCX) S7 Table. Bioassay 3: Host suitability: Effect of the plant-Aphis fabae complex on several life history traits of the parasitoid Aphidius matricariae. Parasitoids' life-history traits were measured on the females wasps that had developed on A. fabae aphids that were reared either on Camelina sativa or on Vicia faba. For each emerging female individual the following parameters were measured: Tibia length (in cm) and Egg load (No. 
eggs); Mummy size (length in cm); Pre-nymphal developmental time (from oviposition to mummification) in days; Nymphal developmental time (from mummification to adult emergence) in days; Total developmental time (from oviposition to adult emergence) in days. (DOCX)
Evidence of Dopaminergic Processing of Executive Inhibition Inhibition of unwanted response is an important function of the executive system. Since the inhibitory system is impaired in patients with dysregulated dopamine system, we examined dopamine neurotransmission in the human brain during processing of a task of executive inhibition. The experiment used a recently developed dynamic molecular imaging technique to detect and map dopamine released during performance of a modified Eriksen's flanker task. In this study, young healthy volunteers received an intravenous injection of a dopamine receptor ligand (11C-raclopride) after they were positioned in the PET camera. After the injection, volunteers performed the flanker task under Congruent and Incongruent conditions in a single scan session. They were required to inhibit competing options to select an appropriate response in the Incongruent but not in the Congruent condition. The PET data were dynamically acquired during the experiment and analyzed using two variants of the simplified reference region model. The analysis included estimation of a number of receptor kinetic parameters before and after initiation of the Incongruent condition. We found increase in the rate of ligand displacement (from receptor sites) and decrease in the ligand binding potential in the Incongruent condition, suggesting dopamine release during task performance. These changes were observed in small areas of the putamen and caudate bilaterally but were most significant on the dorsal aspect of the body of left caudate. The results provide evidence of dopaminergic processing of executive inhibition and demonstrate that neurochemical changes associated with cognitive processing can be detected and mapped in a single scan session using dynamic molecular imaging. Introduction Neurochemical control of executive inhibition remains uninvestigated because of the lack of a reliable technique to detect taskinduced changes in the brain chemistry. Indirect evidence acquired in cognitive studies suggests that dopamine may be involved in the processing. These studies have found that the patients with dysregulated dopamine neurotransmission show impaired performance in executive inhibition tasks. Thus, poor performance is reported in patients with attention deficit hyperactivity disorder (ADHD), Tourette's syndrome (TS), Parkinson's disease (PD), and schizophrenia [1]. In most of these studies modified Eriksen's flanker task [2] was used to elicit executive inhibition. Involvement of dopamine in the processing is suggested also by the data obtained in laboratory animals. For example, it was shown in monkeys that the number of inhibited neurons reduces significantly after depletion of dopamine [3]. The depletion therefore increases the number of nonspecifically activated neurons and reduces signal to noise ratio. As a result, the depleted monkeys find it extremely difficult to inhibit competing options and select an appropriate response. Additionally, neuroimaging experiments have consistently reported increased activation in the brain areas that are innervated by dopaminergic neurons. In an fMRI experiment [4] we observed increased BOLD activation in the caudate, anterior cingulate cortex (ACC), and superior and middle frontal gyri during performance of the flanker task. Since these structures are innervated by dopamine, the experiment provides indirect evidence of dopaminergic processing of the inhibition. 
A number of neurocognitive models of learning (based primarily on the data acquired in laboratory animals) also assume involvement of dopamine in the processing. For example, the actor-critic model of reinforcement learning [5] assumes that dopamine-mediated processes help animals learn the most rewarding action by inhibiting competing options. There is however no direct evidence of dopaminergic processing of the human executive inhibition. Because of the lack of direct evidence, its significance remains unclear. As a result, we have incomplete understanding of the neurocognitive deficits that are responsible for impaired inhibitory control in psychiatric and neuropsychiatric conditions. In this experiment we used a newly developed dynamic molecular imaging technique [6,7] to detect and map dopamine released during performance of a task of executive inhibition. The technique exploits the competition between dopamine and its ligand for receptor occupancy and detects dopamine released during task performance in a single scan session. We used this technique previously to study dopamine released during performance of a number of cognitive, emotional and behavioral tasks [6,7,8,9,10,11,12]. In the present experiment we detected and mapped dopamine released in the Congruent and Incongruent conditions of a modified Eriksen's flanker task [2]. The task elicited executive inhibition. Results In the modified Eriksen's flanker task performed in the PET camera, volunteers made accurate responses in most trials. In the Congruent condition they made 97.1 ± 2.0% correct responses while in the Incongruent condition 91.0 ± 10.9% of responses were accurate. Even though responses were less accurate in the Incongruent condition, the accuracy was not significantly different from that in the Congruent. Similarly, response time was longer in the Incongruent (695 ± 183 msec) as compared to the Congruent (602 ± 151 msec) condition but the difference was not significant statistically. The trend of lower accuracy and greater response time in the Incongruent condition indicated cognitive cost of processing the inhibition. For making a response in this condition volunteers had to inhibit prepotent responses indicated by the direction of flanker arrowheads. This inhibition was not needed in the Congruent condition in which target and flanker arrowheads pointed to the same direction. As described in Materials and Methods section, analysis of the PET data involved measurement of a number of receptor kinetic parameters using two models: linear extension of simplified reference region model or LE-SRRM [12,13]; and extended simplified reference tissue model or E-SRTM [14]. Using the LE-SRRM we dynamically measured changes in the rate of ligand displacement (c) in the Incongruent condition. This measurement allowed detection and mapping of dopamine released in each voxel at each time point. By comparing the rate of change measured in the Congruent and Incongruent conditions, we located voxels where it increased significantly during task performance (Incongruent condition). To ensure that this measurement reflected endogenously released dopamine and it was not a chance finding, we measured additional receptor kinetic parameters using the E-SRTM [14]. These parameters included the binding potentials (BPs) and dissociation coefficients (k2a) of the ligand in the Congruent and Incongruent conditions.
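The logic of the competition approach can be illustrated with a toy equilibrium calculation. The sketch below is not taken from the study: it assumes standard competitive-binding behaviour at tracer doses, and the affinity and dopamine concentrations are arbitrary placeholders chosen only to show why a task-induced rise in synaptic dopamine lowers the apparent binding potential of 11C-raclopride.

```python
# Toy illustration (not from the paper) of the competition principle behind dynamic
# molecular imaging: endogenous dopamine competes with 11C-raclopride for D2/D3
# receptors, so a task-induced rise in synaptic dopamine lowers the tracer's apparent
# binding potential. Standard competitive-binding equilibrium; the dopamine affinity
# (Ki), baseline level and task level used here are hypothetical placeholders.

def apparent_binding_potential(bp_no_competitor: float,
                               dopamine_nM: float,
                               ki_dopamine_nM: float) -> float:
    """Apparent BP in the presence of a competitor at concentration C:
    BP = BP0 / (1 + C / Ki), assuming the radioligand is given at tracer dose."""
    return bp_no_competitor / (1.0 + dopamine_nM / ki_dopamine_nM)

BP0 = 3.0           # hypothetical raclopride BP with no dopamine present
KI_DA = 100.0       # hypothetical dopamine affinity (nM) for the D2/D3 site

baseline_da = 50.0  # hypothetical resting synaptic dopamine (nM)
task_da = 100.0     # hypothetical level during the Incongruent condition (nM)

bp_rest = apparent_binding_potential(BP0, baseline_da, KI_DA)
bp_task = apparent_binding_potential(BP0, task_da, KI_DA)
print(f"BP at rest: {bp_rest:.2f}, BP during task: {bp_task:.2f}, "
      f"change: {100 * (bp_task - bp_rest) / bp_rest:.1f}%")
```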
Voxel-wise comparison of the parameter values allowed us to locate voxels where the values (ΔBP and Δk2a) changed significantly after task initiation (Incongruent condition). The LE-SRRM analysis revealed that the values of c changed significantly after task initiation in 4 striatal areas located one each in the caudate and putamen of the two hemispheres (Figure 1). It was most significant (Table 1) on the dorsal aspect of anterior part of the body of left caudate (t = 2.56). Stereotactic (MNI) coordinates (x,y,z) of this location were −10, 14, and 8 mm. In this area we observed maximum change in the rate of ligand displacement (c = 0.1). It was 384% higher than the mean striatal value (0.026). The changes were less significant (t < 2.1) in the other three striatal areas: the left dorsal putamen (−22, 4, −6); right dorsal body of the caudate (16, 16, 14); and right dorsal putamen (24, 4, 2). In these areas values of c were relatively low (< 0.08) but significantly higher than the mean striatal value (Table 1). To ensure validity of this finding we estimated the ligand BP and k2a (Table 2) in the Congruent and Incongruent conditions using E-SRTM [14]. As compared to the control (Congruent condition), the BP decreased in 3 striatal areas (Figure 2) during task performance (Incongruent condition). It was most significant (t > 2.5) in the left caudate (−8%) and left putamen (−6%). Additionally, relatively small (−3%) but significant (t = 2.21) decrease was observed in the right putamen. There was no significant change in any other area. The ligand dissociation coefficient (k2a) also increased in all of these 3 areas but it was statistically significant only in the left caudate (t = 2.07). Thus, both models found most significant change in the left dorsal caudate. Additionally, both models suggested significant changes in the left and right putamen also. Interestingly, the voxels where maximum change in the rate of ligand displacement (c) and maximum reduction in the ligand BP occurred, were located within 6 mm of each other in the left caudate and putamen even though these measurements were made using two different receptor kinetic models. In the right putamen these locations were > 10 mm apart. It appears that the two models picked up activations from the same neuronal clusters of the left caudate (−10, 14, 8 and −12, 8, 10) and left putamen (−22, 4, −6 and −26, 4, −6). In the right putamen (24, 4, 2 and 24, 8, −8) activations identified by the two models probably came from different clusters. Changes in the left caudate were therefore most significant, consistent and reliable. All parameter values measured in this area were consistent with increased dopamine release in the Incongruent condition in comparison with the Congruent condition.
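The text does not spell out how the per-voxel t-values for the condition difference were obtained; one plausible reading is a paired comparison of the per-subject parameter estimates across the ten volunteers. The sketch below illustrates that reading with placeholder arrays and should not be taken as the authors' exact procedure.

```python
# A minimal sketch, assuming per-subject voxel-wise BP maps for each condition are
# available as NumPy arrays. The paired t-test is one plausible way to obtain per-voxel
# t-values for the Congruent vs. Incongruent difference; the authors' exact statistical
# procedure is not described in the text, and the data below are synthetic placeholders.
import numpy as np
from scipy.stats import ttest_rel

n_subjects, n_voxels = 10, 50000            # 10 volunteers; hypothetical voxel count
rng = np.random.default_rng(0)

# Placeholder data: BP estimated separately in each condition for every subject/voxel.
bp_congruent = rng.normal(2.0, 0.3, size=(n_subjects, n_voxels))
bp_incongruent = bp_congruent - rng.normal(0.05, 0.1, size=(n_subjects, n_voxels))

t_map, p_map = ttest_rel(bp_congruent, bp_incongruent, axis=0)
delta_bp = (bp_incongruent - bp_congruent).mean(axis=0)

# Voxels with a significant BP decrease after task initiation (uncorrected threshold).
significant = (p_map < 0.05) & (delta_bp < 0)
print(f"{significant.sum()} voxels show a significant BP reduction")
```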
Further, the maxima of BOLD response observed in the fMRI experiment and change in the rate of ligand displacement observed in the present experiment were located only 4 mm apart (MNI coordinates: −14, 16, 10 and −10, 14, 8). Additionally, maxima of the fMRI activation were located within 8 mm of the location where maximum decrease in the ligand BP (−12, 8, 10) was observed during task performance. Finding of activation in the same location in experiments that used different techniques validates the observation and underscores significance of the left caudate in processing of executive inhibition. This observation of dopamine release in the left caudate is consistent with the observation of a number of fMRI studies [15,16,17,18]. These studies however, have implicated other striatal areas also in the processing of executive inhibition tasks, and it was suggested that different striatal structures process different aspects of the task. Thus, caudate and putamen of the right hemisphere are associated with the preparatory phase of response execution [19,20] and those of the left side with inhibition and interference resolution [18]. In a recent study [18] the caudate and putamen of both hemispheres were activated in a flanker task that involved response selection and interference suppression. When the task was modified to require only response selection without interference (in a stimulus-response compatibility task) only the caudate was activated. Further, requirement of inhibition without selection (in a go-no-go task) activated the right putamen. This finding is supported by another recent experiment in which a negative correlation was observed between the volume of left putamen and the degree of interference. This study also found a positive correlation between the right putamen volume and the accuracy of response [21]. Thus, it appears that different striatal areas process different aspects of the task. The location of striatal activity in an experiment therefore depends on the degree to which these aspects/components are expressed. Thus, in the present experiment interference suppression was the most prominent component and the most significant activation was observed in the left caudate. It therefore suggests that the dopamine system of left caudate is involved in the processing associated with the inhibition of unwanted response. This suggestion is consistent with the observation of hyperactivity, agitation and inattention (due to loss of inhibitory control) following lesion, destruction or shrinkage of the caudate head [22]. In a recent study impaired executive function in patients with temporal lobe epilepsy has been attributed to the atrophy of left caudate in vicinity of the area where we found dopamine release [23]. It appears that the caudate is able to exert inhibitory control due to its functional connection with the dorsolateral prefrontal cortex (DLPFC) and ACC [24]. In animals dorsal caudate receives cortico-caudate projections from the dorsolateral frontal area and the cingulate [25]. Functional connection between these areas in the human brain has been recently demonstrated in an fMRI experiment [26]. In this experiment simultaneous activation of these areas (the left dorsal caudate, DLPFC and ACC) was observed when attention was focused on a target.
[Figure 1 caption (fragment): The data on the left of the vertical lines were acquired in the Congruent condition and those on the right were obtained in the Incongruent condition. Significant reduction in the ligand concentration in the Incongruent condition suggests that the rate of ligand displacement increased during task performance. The increase was due to competitive displacement induced by endogenously released dopamine. There was no significant change in the rate of ligand displacement in the reference region (cerebellum). The time-activity curves were drawn using the mean data acquired from the voxels where maximum changes were observed in each area. This analysis used the linear extension of reference region tissue model (LE-SRRM). doi:10.1371/journal.pone.0028075.g001]
Since focused attention is required to resolve interference, the DLPFC and ACC are most consistently activated during resolution of interference caused by multiple response options [27]. In addition to interference suppression, the caudate and its functional connection to the DLPFC are needed to inhibit irrelevant options. It appears that the same frontal areas, located in different hemispheres are activated when the emphasis of task is shifted from response selection to inhibition. These activations are lateralized on the left hemisphere when a task requires response selection and on the right, when the emphasis shifts to inhibition [28]. Further, clinical evidence suggests that the activations associated with inhibition are dependent on dopamine neurotransmission. That is why unmedicated PD patients have difficulty ignoring non-essential stimuli [29,30,31]. The neural mechanism that allows dopamine to control inhibition in the human brain is not known but animal studies suggest a possible cellular mechanism. For example, after dopamine neurons are depleted in monkeys, the number of inhibited neurons reduces and the number of nonspecifically activated cells increases significantly [3]. As a result these monkeys find it difficult to select an appropriate response and focus attention on a stimulus. This dopaminergic effect on focused attention is validated in hyper-dopaminergic psychiatric conditions like schizophrenia. These patients are generally hyper-attentive [32,33] and have difficulty in shifting attention away from irrelevant stimuli. Thus, it appears the inhibitory system works most efficiently when dopaminergic activity is optimal. Both high and low levels disrupt inhibition. This effect is similar to the dopaminergic effect on cognitive functions, which are impaired at both high and low levels of dopaminergic activity [34]. The other striatal areas where changes in dopamine release were relatively small, process aspects of the flanker task that were inadequately expressed in the current experiment. These aspects include selection of an appropriate response. The response selection is an important aspect of not only flanker task but also those of learning and reward systems. The dopamine system is believed to facilitate learning of the outcome of a response and therefore, help us select the most rewarding response [35]. Therefore, dopaminergic agents alter outcome-based selection in PD patients and change their bias for learning from negative outcomes in favor of positive outcome [36]. This observation is consistent with the actor-critic model of reward and reinforcement. The model assumes that the dopamine system learns to select the action that is most rewarding [5]. Striatal Dopamine and Executive Function These results provide additional data to help us understand dopaminergic processing of executive function.
Previously, dopamine release in the left and right caudate and the right putamen was observed during set-shifting in Montreal card sorting task [37]. We observed dopamine release in the same areas in the current experiment. The similarity is not surprising because set-shifts also involve inhibition - inhibition of the current strategy. In addition to inhibition, set-shift involves selection of a new strategy, which (as discussed earlier) is not strongly expressed in the flanker task. Probably because of this difference, there was a stronger activation of the right caudate during set-shifting. Interestingly, dopamine release in the right caudate is reported also in the spatial working memory [38] and explicit motor memory tasks [10]. Since both of these tasks required volunteers to select a response based on spatial location of a stimulus, it appears that the dopamine system of right caudate is involved in the selection process. As discussed earlier, dopamine of the left caudate is associated with inhibition. Additionally, the evidence suggests that the dopamine of the right putamen is also involved in inhibition. Therefore, increased dopamine release in this area is observed in the flanker task (current experiment), in the set-shifting experiment and during processing of explicit motor memory task. All of these tasks involve inhibition of unwanted response. These observations are consistent with the BOLD activations observed in the right putamen in a go/no-go task that involved inhibition without response selection [18]. However, we did not find dopamine release in this area in an implicit motor memory task, which also required volunteers to inhibit unwanted response but the inhibition in this experiment was nonconscious [11]. The dopamine system of right putamen therefore is involved in the processing of only voluntary inhibition. It will therefore be interesting to see if dopamine is released in the right putamen during processing of a task involving non-conscious inhibition (e.g. negative priming). The dopamine system of the left putamen is also involved in the processing of executive function. It is activated in working memory but not in set-shifting task. This system was activated in the current experiment also.
[Table 2 caption (fragment): The values were estimated using extended simplified reference tissue model (E-SRTM). MNI = Montreal Neurological Institute stereotactic coordinates; BP0 = ligand binding potential in the Congruent condition; BP1 = ligand binding potential in the Incongruent condition; ΔBP = change in BP after task initiation; t-value ΔBP = t values of the difference in BP before and after task initiation; k2a = dissociation coefficient of the specific binding (k 2 /k 2 -1); Cong = Congruent condition; Incong = Incongruent condition. doi:10.1371/journal.pone.0028075.t002]
In an earlier molecular imaging experiment we found dopamine release in the posterior part of the left putamen during planning and execution of motor responses [10,11,12]. Since both, working memory and flanker task (used in the current experiment) involve planning and execution, this activation is consistent with our earlier observations. The anterior left putamen however may have a different function. A significant increase in dopamine release in this area was recently observed following rTMS (repetitive transcranial magnetic stimulation) induced suppression of DLPFC during performance of Montreal card sorting task [39]. This is an intriguing finding because it was observed only when DLPFC activity is suppressed.
It indicates that the DLPFC controls dopamine release in the anterior left putamen and that this area takes over some of the functions of DLPFC. It also indicates that the dopamine systems of the structures located inside and outside the striatum interact during processing of the executive function. It is therefore important to study the role of extrastriatal dopamine in executive processing. Unfortunately, in the current experiment we were able to study only striatal dopamine because the ligand 11 C-raclopride does not bind in detectable amount in the low receptor density areas outside the striatum [13]. Dopamine released in these areas can however be detected using a high affinity dopamine receptor ligands such as 18 F-fallypride. We recently used this ligand to detect and map dopamine released outside the striatum during emotional processing [9]. Since a number of extrastriatal brain areas are involved in the processing of executive function [4,18,27], our understanding of the neurochemical control of human executive function will remain incomplete until dopamine released in extrastriatal areas is characterized. Thus, the current experiment demonstrates that dopamine is released in a number of striatal areas during processing of a flanker task, which involves inhibition of irrelevant response options. The most significant increase in dopamine release was observed on the dorsal aspect of the body of left caudate. By providing evidence of dopaminergic processing of an important executive function, the results of this experiment will help us define dopaminergic control of the human executive function. Additionally, the study demonstrates that the neurochemical change associated with cognitive processing can be detected and mapped using a singlescan dynamic molecular imaging technique. Ethics Statement This study was approved by Partner's Human Research Committee, Boston, MA 02116. The IRB approved procedure for obtaining written informed consent from each participant was used in the study. The study was conducted on right-handed healthy young volunteers (n = 10) of either sex (mean age 33.1 years; male 4). None of the volunteers or their first-degree relatives had current or past history of a psychiatric or neurological disorder. Additionally, volunteers had no history of chemical dependency, or use of a dopamine-modifying drug in past 12 months. Pregnant women were not included because of uncertain adverse effect of ionizing radiation on developing fetus. After obtaining IRB approved written informed consent volunteers were positioned on the bed of a positron emission tomography (PET) camera and administered intravenous bolus of a dopamine receptor ligand 11 C-raclopride (mean dose 13.6 mCi) at a high specific activity (mean specific activity 1159 mCi/micromole). Immediately after the injection volunteers performed a modified version of Eriksen's flanker task [2]. The PET data acquisition also started at the same time. The data were acquired at 30 sec frames during the first 5 min and at 60 sec frames thereafter for the next 40 min, using an ECAT EXACT HR+ PET camera operating in 3D mode. In the flanker task volunteers were shown a series of 7 arrowheads and asked to press a key using the right index and middle fingers to indicate the direction the arrowhead located in the center (target) was pointing. They were asked to respond as quickly and as accurately as possible. The task had a Congruent and an Incongruent condition. 
The Congruent condition was started immediately after the ligand injection, and in this condition all arrowheads pointed in the same direction (e.g., <<<<<<< or >>>>>>>). After 25 min, unbeknownst to volunteers, this condition was terminated and the Incongruent condition started. In this condition the direction of the target arrowhead was changed so that the flanker and target arrowheads pointed in opposite directions (e.g., <<<><<< or >>><>>>). The Incongruent condition was administered for 20 min, and in each trial the stimulus was presented for 800 msec. It was followed by a cross mark for 1900 msec (Figure 3). There was a 15 sec break after every 4 min. The response time and accuracy of responses were recorded in each trial, and the ligand concentration was dynamically measured during the entire scan session. The PET data were analyzed using methods used in our earlier experiments [8,9,10,11,12]. The analysis involved measurement of receptor kinetic parameters using modified versions of the simplified reference tissue model or SRTM [40]. Based on these parameter values, the rate of ligand displacement (from receptor sites) was estimated dynamically throughout the experiment to locate striatal areas where dopamine was released during task performance. We also estimated the ligand binding potential (BP) in the Congruent and Incongruent conditions. These estimates (along with the other receptor kinetic parameters) were used to detect and map dopamine released during task performance.
Molecular Imaging and PET Data Analysis
The dynamic molecular imaging technique exploits the competition between an injected radioligand and endogenously released neurotransmitter for occupancy of receptor binding sites [7]. Because of this competition, dopamine released during task performance displaces the ligand from receptor sites and reduces its BP. Using receptor kinetic models, the displacement rate, BP and other receptor kinetic parameters are dynamically measured in this technique to detect and map dopamine released during task performance. In previous experiments we used this method to study dopamine release during performance of a number of cognitive, emotional and behavioral tasks [8,9,10,11,12]. To measure receptor kinetic parameters, the PET data were analyzed using the following steps: First, images were reconstructed as 128 × 128 × 63 element volumes using a standard three-dimensional filtered back-projection algorithm with corrections for photon attenuation, random coincidences, scatter, and dead time. To minimize residual effects of head movement, images were registered to align each frame to a common orientation. This was accomplished by realigning all frames to a reference frame (the frame acquired at 25 min). Thereafter, a mean image of the first 25 minutes' acquisition was created and used as the source image for spatial normalization, employing a raclopride template (which matched the MNI template) developed in our laboratory. All frames were then smoothed using a 5 mm FWHM Gaussian filter. The routines of statistical parametric mapping software (SPM8; Wellcome Department of Imaging Neuroscience, London) were used for realignment, spatial normalization, and smoothing. Thereafter, voxel-wise analyses were carried out on the realigned, normalized and smoothed images to estimate receptor kinetic parameters in each subject. The analysis used receptor kinetic models designed to detect transient changes in kinetic parameters.
These models are described in earlier publications [12,13,14] and explained briefly in the following paragraphs. Parameter values were computed in each voxel at each time point to locate the areas where values changed significantly after task initiation (i.e. in the Incongruent condition). Additionally, time-activity curves were drawn for the voxels showing maximum ligand displacement. These computations used the cerebellum as a reference region and assumed negligible density of dopamine receptors in this region. A time-activity curve for the cerebellum was also drawn to estimate the clearance rate of the free and nonspecifically bound ligand. Thereafter, the kinetic parameters (including the ligand BP) were measured in each condition separately in each volunteer. Individual values were then pooled to acquire the cohort mean of each parameter value in each condition. By comparing values measured in the Congruent and Incongruent conditions, we located voxels where the values (and ligand BP) changed significantly after task initiation (Incongruent condition). Thus multiple receptor kinetic parameters were used to detect and map dopamine released during task performance.
Kinetic Models
We used the linear extension of the simplified reference region model (LE-SRRM) [12,13] and the extended simplified reference tissue model (E-SRTM) [14] to measure receptor kinetic parameters. Both models are modified forms of the SRTM [40] and were developed to measure time-dependent changes in receptor kinetic parameters. There was a need to modify the SRTM because it assumes a steady physiological state throughout the experiment. This assumption is not consistent with the design of the single-scan method used in this study. Since the task condition was changed from Congruent to Incongruent in the current experiment, the steady state was not maintained. The assumption of steady state was eliminated in the LE-SRRM and the E-SRTM using different approaches. The LE-SRRM allows the dissociation rate of the ligand to change in response to an altered synaptic level of neurotransmitter by introducing a term γ·exp(−τ(t−T))·u(t−T) in the dissociation parameter of the SRTM. In this term, γ represents the rate of change in ligand displacement, τ allows gradual recovery of kinetic parameters after the initial rapid release of dopamine, t denotes the measurement time, T is the time of change in neurotransmitter level, and u is the unit step function. The analysis using the LE-SRRM involved measurement of the values of the receptor kinetic parameters and γ on a voxel-by-voxel basis using least-squares fitting procedures. The null hypothesis assumed that the task did not elicit dopamine release and that there was no change in the rate of ligand displacement after task initiation. This hypothesis was tested in each subject, and values of the displacement parameter γ were pooled across subjects to acquire a cohort mean and variance. Additionally, parameters that describe ligand transport and binding, and the time-dependent effects elicited by the task, were also estimated.
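To make the displacement term concrete, the following minimal sketch (written in Python; the study's software was implemented in Matlab) evaluates the time-varying dissociation parameter k2a(t) = k2a + γ·exp(−τ(t−T))·u(t−T). All numerical values are invented for illustration and are not estimates from the experiment.

```python
# Hypothetical illustration of the LE-SRRM time-varying dissociation parameter:
# k2a(t) = k2a + gamma * exp(-tau * (t - T)) * u(t - T), where u is the unit step.
# Parameter values are arbitrary and for demonstration only.
import numpy as np

def k2a_timecourse(t, k2a=0.15, gamma=0.03, tau=0.4, T=25.0):
    """Dissociation parameter before and after task initiation at time T (minutes)."""
    step = (t >= T).astype(float)            # unit step function u(t - T)
    return k2a + gamma * np.exp(-tau * (t - T)) * step

t = np.arange(0.0, 46.0, 1.0)                # scan time in minutes
k2a_t = k2a_timecourse(t)
print("baseline k2a:", k2a_t[0])             # constant before task initiation
print("k2a just after task initiation:", k2a_t[t == 25.0][0])
```

With γ > 0 the parameter jumps at the task-initiation time T and then decays back toward its baseline at a rate set by τ, which is the transient behaviour the LE-SRRM is designed to capture.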
The differential equation of the LE-SRRM can be written as
dPET(t)/dt = R1·dC_R(t)/dt + k2·C_R(t) − [k2a + γ·exp(−τ(t−T))·u(t−T)]·PET(t)
where C_R is the concentration of radioligand in a region devoid of specific binding (reference), PET is the concentration of radioligand in a voxel with specific binding, R1 is the ratio of transport rates in the tissue and reference regions, k2 describes clearance of nonspecifically bound tracer from the voxel, k2a includes information on dissociation from the receptor, t denotes measurement time, τ allows gradual recovery of kinetic parameters, T is the task initiation time and u(t−T) is the unit step function. The E-SRTM [14] uses a different approach to account for a change in physiological state induced by switching task conditions. It assumes that the two conditions (i.e., Congruent and Incongruent) are two separate datasets: one acquired before (Congruent condition) and the other after (Incongruent condition) an intervention. Since steady state was maintained within each condition, the SRTM could be applied to each of these datasets. This assumption allows measurement of receptor kinetic parameters in each condition. By comparing parameter values measured in the two conditions in each voxel, dopamine released during task performance was detected and mapped. We used the differential equations and solutions of the E-SRTM [14] to measure the ligand BP and other kinetic parameters in the Congruent and Incongruent conditions. These values were measured at the voxel level to allow accurate mapping of endogenously released dopamine. For these computations we modified the original E-SRTM and included a bounded nonlinear optimizer routine [41] instead of a non-bounded routine, the Marquardt algorithm [42]. This modification allowed us to limit non-physiological solutions. For instance, we bounded the solution values of BP between 0 and 6 to prevent the possibility of finding a solution that is outside the physiological range. To ensure reliability of this modification, we ran a computer simulation in which tissue and reference region time-activity curves were drawn using the bounded and non-bounded routines. We found essentially identical values for all 4 kinetic parameters: R1, k2, BP0 (BP in the Congruent condition) and BP1 (BP in the Incongruent condition). The values of R1, k2, BP0 and BP1 were 0.95, 0.25, 2.31 and 2.19, respectively, using the non-bounded routine and 0.95, 0.26, 2.31 and 2.20, respectively, with the bounded routine. The LE-SRRM and E-SRTM differ not only in the methods used to eliminate the assumption of steady state, but also in their approach to detecting dopamine released during task performance. While the LE-SRRM assumes that a change in the dissociation coefficient (k2a) of the ligand is a sensitive indicator of endogenously released dopamine, the E-SRTM assumes that dopamine release can be detected more accurately by measuring changes in the ligand BP. Furthermore, whereas the LE-SRRM assumes that the receptor kinetic parameters return to the original state in about 10 minutes if the task remains unchanged, the E-SRTM makes no such assumption. Since the two models use different approaches to detect dopamine, we used both models to enhance the reliability of the data analysis. To reconcile the findings of the two models, we identified blobs (>5 contiguous voxels) that were 'activated' after task initiation in each model analysis.
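As an illustration of the bounded-fitting approach described in this section, here is a minimal Python sketch (the study itself used Matlab) that fits the operational SRTM equation to a simulated time-activity curve while constraining BP to the 0 to 6 range. The reference-region curve, noise level and parameter values are invented for demonstration and are not data from the experiment.

```python
# Illustrative sketch only: bounded fit of the operational SRTM equation
#   C_T(t) = R1*C_R(t) + (k2 - R1*k2a) * [C_R conv exp(-k2a*t)],  k2a = k2/(1+BP),
# with BP constrained to a physiological range, as described for the modified E-SRTM.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 45.0, 46)        # minutes; one sample per minute (simplified)
dt = t[1] - t[0]
c_ref = 50.0 * t * np.exp(-t / 8.0)   # toy reference-region time-activity curve

def srtm(params, t, c_ref):
    r1, k2, bp = params
    k2a = k2 / (1.0 + bp)
    conv = np.convolve(c_ref, np.exp(-k2a * t))[: len(t)] * dt   # discrete convolution
    return r1 * c_ref + (k2 - r1 * k2a) * conv

true_params = (0.95, 0.25, 2.3)       # invented "ground truth" for the simulation
rng = np.random.default_rng(0)
c_target = srtm(true_params, t, c_ref) + rng.normal(0.0, 2.0, t.size)

fit = least_squares(lambda p: srtm(p, t, c_ref) - c_target,
                    x0=[1.0, 0.2, 1.0],
                    bounds=([0.0, 0.0, 0.0], [5.0, 2.0, 6.0]))   # BP bounded to 0-6
print("estimated R1, k2, BP:", np.round(fit.x, 2))
```

In the study the comparable fits were performed voxel by voxel on measured curves; the sketch only shows why bounding BP keeps the optimizer inside a plausible range.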
A blob was considered 'activated' if a) there was a significant change (p < 0.05) in the values of γ (estimated using LE-SRRM) after task initiation; b) the ligand BP (measured using E-SRTM) was significantly lower (p < 0.05) in the Incongruent condition; c) there was at least a 15% increase in the dissociation coefficient (k2) measured using E-SRTM in the Incongruent condition; and d) the maxima of the blobs were located within 6 mm (in all three directions) of each other (to account for Gaussian smoothing in the processing). Thus, we used multiple kinetic parameters and approaches to ensure the validity of the results. Software to implement these models was developed using Matlab (MathWorks, Natick, MA) utilizing the constrained minimization routine of its optimization toolbox. The LE-SRRM was used as the primary kinetic model because the validity of this model has been extensively studied [11,13]. Further, we used simulations to examine the effect of a task-induced increase in regional cerebral blood flow (rCBF) on the estimated values of the receptor kinetic parameters. These simulations indicated that changes in rCBF do not significantly affect the parameter values that were used to estimate dopamine release unless rCBF rises to more than 120% of its original value [13]. Since rCBF changes during cognitive task performance are much smaller [43], these changes are not likely to have a significant effect on the reported results. During performance of a flanker task, we observed less than a 0.3% change in MR signal intensity in a previous experiment [4].
Author Contributions
Conceived and designed the experiments: RDB. Performed the experiments: RDB. Analyzed the data: RDB DSW. Contributed reagents/materials/analysis tools: RDB DSW. Wrote the paper: RDB.
Integrating top-down / bottom-up sustainability strategies : an ethical challenge Sustainable use of the planet will require multiple sustainability strategies, which will range from the entire system, the entire Earth, to the local or regional. Strategies starting at the highest system level are referred to as ‘top-down,’ and strategies designed for components, local or regional, are referred to as ‘bottom-up.’ Doubtless, several intermediate levels will eventually be required, although the number is far from clear at this time. It is abundantly clear that both top-down and bottom-up strategies must be integrated effectively or neither will work well. Furthermore, there will be significant uncertainties at both levels of organization, which will be reduced as evidence accumulates. However, sustainability is too complex and dynamic to reduce scientific uncertainty to a level desired by most decision makers. A greater emphasis on sustain-ethics and value judgments will improve communications between those working at different organizational levels since humankind’s wish to leave a habitable planet for its descendants and those of other life forms is clearly a value judgment. INTRODUCTION The Options Spring 2002 issue is devoted entirely to Achieving Sustainable Development: The 21st Century Imperative.Jernelöv's (2002) editorial notes that human security issues relating to the supply of water, food, and energy, and the protection of Earth's life support systems have been high on the International Institute for Applied Systems Analysis' priority list since its creation approximately three decades ago.The same Options issue notes that human dimensions must be placed at the core of sustainable development to meet the needs of present generations without sacrificing the livelihoods of future generations.Not featured is eco-ethics-the ethics of humankind's relationship with the planet's biospheric life support system, despite the fact that the natural capital and services it provides is the sine qua non of sustainability. Sustainable use of the planet is the most complex problem in human history.Its goal is to change humankind's behavior and practices so that the human species can inhabit its planet indefinitely.To accomplish this goal, it is essential that a mutualistic relationship develop between human society and natural systems.This relationship, in turn, will require a combination of econ-ethics and eco-ethics (e.g.Kinne 2002).Both economics and ecology are derived from the Greek word oikos, which means household.The term household was originally used (and still is) in a much more restricted way; however, humankind is now beginning to perceive Earth as the ultimate household.Econ-ethics requires an ethical economic system that will benefit humankind.The economy must be structured in a way that will not damage the planet's ecological life support system.Econ-ethics and eco-ethics used concomitantly to enhance sustainable use of the planet would be sustain-ethics (this term initiated in this paper).Neither natural systems nor future generations can demonstrate appreciation or gratitude for sustain-ethics while present generations still live, but ethical behavior gives a peace of mind that is its own reward.However, sustain-ethics, in addition to compassion for natural systems and future generations, also includes compassion for disadvantaged members of the human species who could express appreciation to humans now alive for improving the human condition. 
There must be a global strategy for sustainability ('top-down strategy') but also a strategy that considers the unique issues and ecosystems of each bioregion ('bottom-up strategy').Holistically practicing topdown and bottom-up sustainability strategies, including several intermediate 'connecting' levels, is a formidable, daunting task.The most promising way to connect these interdependent activities is an ethical 'cement'-sustain-ethics.It is unclear whether the topdown strategies should work directly with the bottomup strategies or whether there should be one or more intermediate steps.Ideally, the shorter the communication chain the more rapid and effective communication will be, but there are many obstacles to this simple two-strategy model. The lofty goals of sustainability are fairly easily stated and seem to strike a responsive chord in anyone wishing future generations of humankind to have a habitable planet.How this will be implemented in various ecoregions with different problems is not particularly clear; even less clear is the way in which different ecoregions will interact with each other and how the humans who occupy them will be persuaded to follow a global sustainability strategy. It is abundantly clear that both top-down and bottom-up strategies are being developed, although the rate of development of the latter varies dramatically from one country to another and from one bioregion to another.In addition, global acceptance of whatever top-down strategy eventually emerges will doubtless be markedly influenced by local conditions. Both global and regional strategies must be integrated effectively or neither will work well.Humankind has only one finite planet, and damage to one part of it almost guarantees damage to other parts that are distant geographically, spatially, and even temporally. It remains to be seen whether humans can grasp a problem of such complexity for an infinite period of time.After all, sustainable use means being able to continue these practices indefinitely.However, if it is not possible to address effectively a problem of such complexity, humankind will suffer enormously.Therefore, the attempt must be made despite many inherent difficulties. 
OBSTACLES TO TOP-DOWN/BOTTOM-UP SUSTAINABILITY STRATEGIES The obstacles to developing a sustainability initiative are essentially the same as those hampering development of a global sense of community.Even taken individually, they both have inherently formidable obstacles.In the aggregate, one wonders how they will ever be transcended.Illustrative examples of obstacles to bottom-up strategy development include: (1) language barriers, (2) ethnic and religious conflicts, (3) disparities in per capita wealth, (4) disparities in educational opportunities, particularly in developing scientific and environmental literacy, (5) differences in the balance between individualism and a sense of community, (6) differences in age distribution within the local population (e.g.predominantly young or predominantly elderly), (7) level of biophilia (humankind's innate affinity for the natural world), (8) degree of compassion for individuals of the human species distant in time (future generations) or space (in far geographic localities), (9) level of equity and fairness in resource allocation among members of the human species and those of other life forms, and (10) a lack of willingness to do more than the law requires in achieving sustainable use of the planet.Illustrations of obstacle 10 include focus on individual 'rights' (Cairns 2002a,b) rather than individual responsibilities, lack of accountability for one's own actions, greedy desire to acquire material wealth at others' expense, disregard for the appearance of one's environment (e.g.littering in public parks and freeways), corporate efforts to obtain today's profits without regard for the long-term effects of today's practices, elected officials' actions geared toward satisfying constituents' immediate demands (to keep the vote) instead of doing what is best for the survival of the local ecosystem, and the waste of resources that is a part of affluent societies' consumerism. The obstacles to development of a top-down strategy are equally formidable.Illustrative examples include: (1) the enormous difficulty in visualizing solutions to goals over large temporal and spatial spans, (2) integrating huge temporal and spatial spans in a mutualistic fashion, (3) ensuring that no component of the topdown strategy negates or compromises an important component of the bottom-up strategy, (4) developing continuous feed-back loops between top-down and bottom-up strategies, (5) developing a harmonious working relationship between top-down and bottomup strategists, (6) ensuring that minutia do not distract from the holistic scope of the top-down strategy, (7) detecting changes in either natural systems or human society that require mid-course corrections, (8) coping with 'rogue' nations and uncooperative nations in an ethical way while maintaining sustainable practices, (9) acquiring and maintaining the financial base necessary to operate the Global Sustainability Organization (GSO), ( 10) determining when the precautionary principle should be applied, and (11) determining how to orchestrate the GSO in a democratic fashion while promptly eliminating unsustainable practices. Clearly, the same obstacles exist for both top-down and bottom-up-they will, however, not be resolved in an identical fashion.These illustrative examples do indicate how important ethics will be for both strategies.As humankind moves from small group, tribal units toward a global community, shared ethical values become ever more important to the survival of the human species. 
UNCERTAINTY AND ETHICS There are four major classes of scientific uncertainty, particularly in resolving environmental issues such as sustainability: (1) framing uncertainty, (2) modeling uncertainty, (3) statistical uncertainty, and (4) decision-theoretic uncertainty (Shrader-Frechette 1996).Durham (1992) remarks that scientists provisionally accept a hypothesis that has survived rigorous attempts to falsify it, even one with obvious deficiencies, if there is no better (i.e. more probable) hypothesis available.Physics has often been regarded as one of the 'hardest' sciences with a superb record of validating hypotheses.Yet, Carnap (1966) states that since hypotheses have an infinite number of observational consequences that can never be conclusively validated, scientists sometimes opt, in an uncertain situation, for provisional acceptance of the best available non-falsified hypothesis.Thus, uncertainty is the norm, even within disciplines noted for their precision. Uncertainty is likely to be orders of magnitude greater in the quest for sustainable use of the planet, which requires input from all disciplines.Further, experimentation is difficult, not only because there is only one planet but also because of a natural reluctance to experiment with human subjects.Cairns & Smith (1996) analyze some of the ways in which these uncertainties associated with both top-down and bottom-up strategies can be reduced and make recom-mendations on how the validation process might belatedly be integrated into the ecotoxicological field.These approaches should be useful, properly modified, when integrating top-down and bottom-up approaches in general.As Cairns & Smith (1996) note, it is difficult (and in some cases impossible) to measure directly how a stressor affects an ecosystem.For example, society cannot wait 20 or more years to determine the specifics of the ecological effects of radioactive wastes.While this uncertainty is being reduced, ethical principles will be a useful component of societal decisions.Ethical principles are extremely important since scientific uncertainty may never be satisfactorily reduced in time periods of interest to human society. 
Sustainable use of the planet is an aspiration involving levels of complexity transcending most scientific endeavors.Arguably, one of the most important facets of this complex problem is that most environmental laws and regulations place the burden of proof for demonstrating human health or environmental damage on governmental regulatory agencies or nongovernmental organizations wishing to demonstrate harm from development or technological activities.The universal standard, which is generally used to meet burden of proof requirements, is often the normal standard of scientific proof, such as a 95% confidence level or an equivalent criterion.The scientific community, in order to minimize Type I errors and, therefore reduce speculation in scientific data interpretation, adopted this standard.But when such a standard is utilized as a basis for developing sustainability strategies, the scientific uncertainty that inevitably pervades such situations means that the burden of proof usually is not met, despite the fact that some information might demonstrate impairment to the quest for sustainability.Finally, as a consequence of the absence of a robust understanding of scientific uncertainty and its implications for sustainability, decisions mean that policymakers/managers will not have adequate scientific information to guide them in terms of whether or to what extent decisions should reflect a precautionary approach.Since humankind has only one finite planet on which to achieve sustainable use, it is abundantly clear that the usual requirements to reduce scientific uncertainty, such as use of controls, multiple testing under variable conditions, and the like, cannot be met.Ethics are especially important in such circumstances. Humankind is now moving from the age of reductionist science to an age of synthesis or integrative science.This transition does not mean that reductionist science is no longer appropriate, but rather that as levels of complexity in any system increase, new properties emerge that were not apparent at lower levels.Consequently, one means of reducing uncertainty in this age of synthesis is how congruent a particular hypothesis or body of evidence is with other related bodies of evidence within the particular system being studied.Both top-down and bottom-up sustainability strategies will require synthesis and also a means of coping with scientific uncertainty.Again, ethics should be a major factor in the decision making process. 
POLICYMAKERS/MANAGERS AND SOCIAL AND NATURAL SCIENTISTS It is well to remember that all sustainability strategies involve both macro-and micro-coevolution of human societies and natural systems.If successful sustainability strategies are developed, they will also require coevolution in understanding between and among policymakers/managers and natural and social scientists.These coevolutionary interactions will greatly influence sustainability issues, as well as issues of how humans behave toward the environment, those in other cultures and financial circumstances.Most importantly, it will require abandoning the many unsustainable practices found in almost every culture on the planet and substituting sustainable practices despite the attractiveness of the unsustainable ones to which humans have become accustomed.These are groups unaccustomed to working together on a longterm, meaningful basis on such a complex issue as sustainability strategies.At worst, some groups have no regard for or even trust in some of the other groups, and, at best, there is often a poor understanding of the ways in which other groups function.Ethics is an obvious 'bridge' between groups so that misunderstandings can be reduced or eliminated. One would expect the primary initiative in developing integrative programs covering the broad spectrum of groups just described to come from the world's universities and colleges, particularly those in which the responsibility for generating new knowledge and communicating it to students is a major responsibility.Regrettably, this does not occur at either the rate or scale necessary for achieving sustainability for a variety of reasons.Among these are increased teaching loads and budget cuts at state-and federallysupported institutions, which do not permit the extensive time necessary for faculty in one discipline to develop a deep understanding and productive relationships with those in other disciplines.Arguably most importantly, students do not perceive the need for developing such a broad perspective because they cannot, at present, see the relationship to the job market and their future professional growth.Consequently, it is highly unlikely that there will be adequate, experienced personnel skilled in integration and synthesis at the temporal and spatial scales required for sustainability initiatives as well as the diversity of components needed for successful implementation.Such personnel cannot be produced overnight, nor is it likely that many persons with a disciplinary bias can be persuaded to take a holistic view of sustainability initiatives.As a consequence, it will be of greatest importance to utilize the relatively few available personnel as effectively as possible in the short term and to prepare a much larger group that is sufficiently holistic to implement sustainability initiatives.The most important aspect of educational institutions' budget cuts is fewer personnel and less scientific information for a considerable period of time.Uncertainty will not be significantly reduced and may even increase.Thus, ethics is now of major importance. HOW MANY LEVELS OF SUSTAINABILITY STRATEGIES? 
Going from global to local or regional directly follows Dubos' famous injunction 'think globally, act locally.'The problem is that insightful global thinking will require an information mass well beyond the capability of most (and possibly all) individuals to assimilate and understand.Even if an individual, or even a small number of individuals, did have such a capability, there would be a problem of trust because they would undoubtedly not represent all religions, all cultures, all language groups, and so on.Furthermore, since effective sustainability strategy implementation will involve numerous professional and non-professional groups and a mixture of science and value judgments, a diversity of viewpoints would strengthen the policy decisions, if the diversity did not impede reaching consensus.In the first two decades of the 21st century, it is extremely unlikely that there would be adequate numbers of competent personnel to function as integrators of concepts and information and synthesizers of both concepts and value judgments.If there is a global public will to increase the number of competent professionals, undoubtedly this could be done over a period of several decades or more.The initial problem would be the lack of suitable faculty and other professionals to educate and inform the large numbers of additional personnel needed.Furthermore, much onthe-job training would be required, and a large number of qualified personnel would be required to spend most, if not all, of their time on synthesis and information integration rather than on increasing the literacy of additional personnel. At the bottom-up level, there are numerous areas where adequate or nearly adequate numbers of competent professionals are available.There are also numerous areas where the idea of sustainability is not even being discussed in the most general way.So, there is a major educational problem at the bottom-up level, although in most respects it differs significantly from the top-down approach.A major problem in increasing literacy at the bottom-up level concerns trying to increase all citizens' literacy in the requirements for sustainable use of the planet and deciding what organization(s) should be responsible for quality control, planning, financing, and the like.There is also the crucial question of how to transfer increased sustainability literacy from areas where the literacy is high to areas where it is low or nonexistent.This problem will almost certainly be exacerbated by cultural, religious, and language difficulties, to name just a few.Ethics should help reduce these problems because it may furnish a common ground in which diversity can be appreciated but not divisive.Kung (1998) defines a comprehensive ethic-founded on the bedrock of mutual respect and humane treatment of all beings-that would encompass the ecological, legal, technological, and social patterns that are reshaping civilization.If humans are going to have a global economy, a global media, a global technology, Kung (1998) argues that there must also be global ethics to which all nations and peoples of the most varied backgrounds and beliefs can commit themselves.Earth can and should be held together by ethics.As Common (1995) notes, there is no purely scientific basis on which to decide the alternative positions between economists and ecologists.Differences primarily reflect dissimilar value systems (e.g.Myers & Simon 1994).This is one reason why sustainability issue decisions are so difficult and contentious.It is abundantly clear 
that the debate on sustainability issues would be more productive if participants explicitly stated their ethical values that, together with scientific evidence, support the positions they are taking. At the outset, there seems to be no choice but to begin with only the top-down approach and the bottom-up approach with no intermediate stages. This design will undoubtedly cause difficulties, but these will doubtless be less if there are competent personnel in both categories rather than a large number of unqualified people at intermediate organizational stages or, worse yet, at all organizational stages. This immediately calls to mind the problem of quality control, which has been successfully resolved by the disciplines representing reductionist science but has yet to be resolved for integrative science or synthesis. As the number of qualified personnel increases and the general literacy about sustainability increases, more levels of organization between top-down and bottom-up will not only be possible but most likely essential.
CONCLUSIONS
For initial stages, the primary focus of sustainability initiatives should be restricted to top-down and bottom-up strategies with no intermediate levels, for reasons already stated. As the number of trained personnel and the information base, including case histories, expands, so also can the number of intermediate stages between the two extremes. Exacerbating the complex problems already discussed in a preliminary fashion will be the certainty that there is no precise indication of how much time is left to put these top-down/bottom-up strategies in place. Many professionals think environmental problems are already severe and that a number of crucial environmental thresholds and breakpoints have already been crossed. Resilient systems usually permit an overshoot if it is not too severe and not sustained for an exceptional time period. However, many unsustainable practices are increasing exponentially while social adjustments lag far behind. Possibly, it will require a major collapse of one of the planet's life support systems to change the mood from complacency to serious concern. This also would mean that the time to cope with the problem and to increase sustainability literacy will be substantially decreased. Consequently, one hopes that reason guided by evidence will result in some precautionary measures being taken, such as increasing training programs at universities and colleges and, as soon as possible, in general school systems, which will enable all citizens to become literate in this area. Clearly, a greater emphasis on ethics and value judgments with regard to sustainable use of the planet is long overdue. Science can show what probably is done; technology can show what might be done; but ethics can help humankind decide what should be done.
Acknowledgements. I am deeply indebted to Eva M. Call for transcribing the dictation of the first draft of this manuscript and to Darla Donald for editorial work in preparing it for publication. The Cairns Foundation paid for processing costs.
Cairns J Jr (2002a) Reexamining the 'inherent worth and dignity of every person' paradigm in an interdependent web of life context. J Lib Rel 3(1), available at www.meadville.edu/cairns_3_1.html
Cairns J Jr (2002b) Revisiting respect for the interdependent web of life and the worth and dignity of each individual: a major issue in sustainable use of the planet. Common Ground 1.2 (2002):29-34
Cairns J Jr, Smith EP (1996) Uncertainties associated with extrapolating from toxicological responses in laboratory
Quantitative Variations of Intracellular Microcystin-LR, -RR and -YR in Samples Collected from Four Locations in Hartbeespoort Dam in North West Province (South Africa) During the 2010/2011 Summer Season The Hartbeespoort (HBP) Dam is a reservoir used for agricultural, domestic supply of raw potable water and recreational activities in South Africa’s North-West Province. Eutrophication and cyanobacterial blooms have long been a cause of water-quality problems in this reservoir. The most prevalent bloom-forming species is Microcystis aeruginosa, often producing the toxin microcystin, a hepatotoxin which can negatively impact aquatic animal and human health, and poses a problem for potable water supply. Algal samples were collected monthly from four pre-determined sites in the dam during the summer months (December 2010–March 2011). Intracellular microcystins (MCs) were extracted using SPE C18 cartridges, followed by separation, identification and quantification using LC-ESI-MS techniques. Quantitative variation studies of MCs were conducted with respect to MC congener isolated, sampling site and month. Three main MC congeners (MC-RR, -LR and-YR) were isolated, identified and quantified. In addition, three minor MCs (MC-WR, MC-(H4)YR and (D-Asp3, Dha7)MC-RR were also identified, but were not quantified. The MC dominance followed the order MC-RR>MC-LR>MC-YR across all sites and time. The maximum and minimum concentrations were 268 µg/g and 0.14 µg/g DW for MC-RR and MC-YR, respectively, of the total MCs quantified from this study. One-way ANOVA showed that there were no significant differences between average MC concentrations recorded across months (P = 0.62), there was, however, a marginally-significant difference in concentrations among MC congeners (P = 0.06). ANCOVA revealed a highly significant interaction between sites and MC congeners on MC concentration (P < 0.001). Introduction Hartbeespoort Dam is situated in the North-West Province of South Africa. The reservoir is fed by the waters of the Crocodile and Magalies Rivers ( Figure 1) and has a mean depth of 9.6 m, maximum depth of 45.1 m and surface area of 20 km 2 . Hartbeespoort Dam is renowned for its poor water quality and is arguably one of the World's worst examples of eutrophication, due to the high nutrient loads which enter the system and have overburdened the reservoir basin for decades. Microcystins (MCs), a group of cyclic heptapeptide hepatotoxins are produced by a number of cyanobacterial genera [1,2]. Freshwater microcystin-producing cyanobacterial species, including Microcystis spp., Anabaena spp., Planktothrix (Oscillatoria) spp., Aphanizomenon spp. and Nostoc spp. [1,2], are on the increase worldwide due to increased environmental and water pollution leading to eutrophication of aquatic environments [1][2][3][4][5]. Microcystins are characterised by the presence of a unique non-proteinogenic β-amino acid called ADDA [6][7][8][9]. The toxicity of MCs is highly dependent on the ADDA group, the MeDha group (MeDha = N-methyldehydroalanine) and the structural cyclic nature [10,11]. Over 80 known microcystin structures (variants) have been reported worldwide from freshwater systems with their variations being mainly due to amino-acid replacements either at position 2 or 4 of the MC backbone [12], for details on the general structure refer to Sivonen et al. [8]. Other structural variations have been attributed to methylation/or demethylation processes of the methyl groups on amino acids at positions 3 and/or 7 [7,8]. 
The ADDA group [6] is highly conserved in microcystins (and nodularins) and stable against physiological replacement by other amino acids. Due to this stability, the ADDA group has been utilised in various methods for the identification of microcystins (and nodularins), as described in Msagati et al. [13]; these include ELISA [14], HPLC-UV/PDA [15][16][17][18][19], LC-ESI-MS [18,19], etc. In positive-mode LC-ESI-MS the ADDA group gives a characteristic fragment ion at m/z 135, corresponding to the [phenyl-CH2CH(OCH3)]+ ion [13]. This ion is therefore often used as a diagnostic feature for the identification of MCs in algal samples [13,18] when using the LC-ESI-MS technique. The LC-MS technique gives more reliable data with regard to detection, identification, quantification, differentiation and discovery of new MC congeners than the other methods stated above [20]. For instance, it has been observed that ELISA is prone to cross-reactivity with MCs other than the MC-LR variant, and thus its use is limited whenever identification and quantification of specific MC variants in samples is necessary [21][22][23][24][25]. However, LC-MS offers complementary results for accurate detection and quantification of the same [7,18]. Cyanobacterial cell deaths caused by ageing, stress, mechanical breakdown or chemical activities (e.g., grazing, water purification processes, pump suctions, boat propellers, etc.) result in releases of intracellular MCs and increased extracellular MCs into the surrounding water [26][27][28][29][30]. Extracellular MCs are both highly soluble and relatively stable in water [31], posing a worldwide health concern. Microcystin toxicities resulting in human illnesses, as well as wildlife and fish kills, have been reported [2,3,10,17,[32][33][34][35][36][37], including many cases in South Africa [38]. The characteristic solubility of MCs in water therefore makes these toxins bioavailable for a couple of days in a water column [23,31] before they can be completely biodegraded. The accumulation of MCs in marine organisms [39], aquatic plants and other life forms found in water environments [23,[40][41][42] has been reported. Microcystin loading and spatial distribution in a given aquatic ecosystem are a function of the intracellular MC content in algal cells and are closely related to algal species dynamics and dominance [43]. Papadimitriou et al. [40] showed that extracellular MC concentrations extracted from water samples were lower than intracellular MC concentrations extracted from algal cells collected at the same location and time, indicating dangers of possible underestimation or overestimation of MC concentrations should quantification be based solely on extracellular MCs. Lower estimates of extracellular MC levels compared to total intracellular MC levels are due to a number of factors, including algal grazers, assimilation of bioavailable MCs by aquatic organisms, wash-away by water drifts, adsorption on suspended solids/organic matter/sediments, MC degradation [44], etc. Thus, depending on the method used, measuring intracellular MC concentrations gives more complete information on the quantity and type of MCs, species distribution and their dynamics than using extracellular MC concentrations [45]. Seasonal and temporal variations in the cyanobacterial population, particularly with M. aeruginosa dominance in Hartbeespoort Dam, have been studied [24,38,[46][47][48] and shown to be related to MC production as described elsewhere [29,43,49].
However, to the best of our knowledge and based on the literature surveyed, little information exists in terms of quantitative assessment of intracellular MCs with respect to congener concentration variations, dominance and distribution in the Hartbeespoort Dam. In terms of quantitative studies on MCs commissioned and funded by the Water Research Commission (WRC), South Africa (www.wrc.org.za), including some recent works, the results presented were mainly based on total microcystin concentrations expressed as MC-LR equivalence using immunoassay or biochemical methods [25,50]. However, Conti et al. [51] demonstrated that, besides immunoassay/biochemical methods being very sensitive tools for MC detection and quantification, their use results in over-estimations of toxins, probably due to cross-reactivity, false positives or false negatives [22,40,[52][53][54][55]. Therefore, the objectives of the present study were to:
• Extract and identify intracellular microcystin congeners from algal blooms collected from four sites on the dam accessible by the public either directly or indirectly (by boat).
• Quantify intracellular microcystin congeners using the LC-ESI-MS technique and determine their spatial distribution with respect to locations (sites), time and MC type (congener).
• Establish an MC congener dominance profile for water-quality assessment with respect to the use of Hartbeespoort Dam water resources.
Sample Collection
Samples were collected monthly from December 2010 to March 2011 from four pre-determined sites in the Hartbeespoort Dam (Figure 1), based on four factors:
• Areas that can easily be reached either by boat or by walking along the banks of the dam (S1-S4).
• Relative distance to fishing/conservation area hotspots (S2-S4).
• Proximity to the Magalies River inflow and Crocodile River outflow to/from the dam (S3 and S1, respectively).
• Proximity to the animal conservation area (zoo) and water purification station (S1).
Algal cells were collected in 500 mL glass bottles from each site during the summer season. Samples were transported to the laboratory in cooler boxes filled with ice blocks and processed the same day by filtration. For algal identifications, samples were treated according to [56,57] and analysed within 24 h of collection.
Sample Pre-Treatment for Microcystin Extraction
Algal cells were filtered on pre-weighed GF/C glass-fibre filters (47 mm, 0.45 µm). Filters were washed further by flushing with distilled water to remove any superficial microcystins remaining on the cells. Filters were stored at −20 °C before freeze-drying. After freeze-drying, weighed algal cells (1.0 g) were extracted using 70% MeOH (aq), and the extracts were then subjected to CHCl3/MeOH/H2O liquid partitioning (7/6/3, v/v/v) to remove pigments, co-eluting compounds and other cartridge-blocking material [58] prior to SPE extraction using Waters TM HLB cartridges [59].
Physicochemical Parameters and Species Identification
On-site physicochemical parameters (conductivity, temperature, pH and dissolved oxygen) were measured immediately at each site using a multifunction meter (YSI TM 6-Series Sonde, Marion, Germany). The determination of the concentrations of chlorophyll-a, nitrates and phosphates, as well as species identification, were carried out in a parallel project that dealt with the assessment of nutrient loading in the Hartbeespoort Dam (Zoology Department, University of Johannesburg).
LC-ESI-MS Instrumentation and Conditions The SPE extracts were analysed using a modified method reported in the literature [12]. Briefly, a Waters TM LC-MS instrument (with a Waters TM 3100 Mass Detector) coupled with an electrospray ionisation source (ESI) was conditioned for 10 min with the mobile phase shown below. MCs separation was performed on a conditioned C 18 column (Waters Symmetry300 TM , 4.6 mm × 75 mm, 3.5 μm) using the Alliance Waters TM e2695 separation module. The mobile phase consisted of a mixture of 0.5% (FA) in Milli-Q water (A) and 100% acetonitrile (B). The column was operated at room temperature, and the microcystins were eluted within a gradient window of 20 min using a reverse phase system consisting of 25% B (10 min), 70% B (10 min), 25% B (11.1 min), 25% B (20 min). Flow rate and sample injection volumes were set at 0.5 mL/min and 0.5 μL, respectively. The ion source was operated on both positive and negative ESI modes for all experiments. Structural identification of microcystins in samples was based on retention time and fragment ion products relative to respective elution window/fragmentation pattern of authentic analytical standards, results were complemented with literature values. Microcystin Analysis and Quantification Reference standard curves were established from linear regression values of respective authentic microcystin standards using Sigmaplot TM Software (Version 8). Linear regression equations were established giving acceptable R 2 values for each authentic standard solution. Major intracellular microcystins (MC-RR, -YR, and -LR) isolated from M. aeruginosa cells were quantified based on linear regression equations using peak areas [60][61][62]. All experiments on field samples were performed in triplicate and quantities were reported in mean values (n = 3). Statistical analyses (one-way ANOVA and ANCOVA) were performed on an R2.14.1 statistical package [63] using mean values of the calculated MC concentrations (P < 0.05 denoted a significant difference). Site Selection The Hartbeespoort Dam (GPS co-ordinates: 25°43'44.56"S, 27°51'30.35"E) is one of the major water impoundments situated in North-West Province, South Africa, with its water being used for irrigation, domestic and recreation purposes [64]. Thus monitoring of the quality of its water is a primary priority for public health. However, on a daily basis, this reservoir receives millions of litres of treated wastewater that is rich in phosphates and nitrogenous species, this has led to excessive nutrient loading resulting in Hartbeespoort Dams being one of the most heavily eutrophied dams in South Africa [65,66]. Eutrophication of this dam dates back to beyond the 1970s and the water is quite often characterised by foul smell and heavy green pigmentation especially during summer seasons [67]. However, substantial lake restoration control measures are currently underway to reduce nutrient loading (phosphorus/nitrogen), and a shoreline restoration project is being undertaken [66]. Physical removal of algae and water hyacinth after trapping them with floating boom barriers is also an ongoing dam-remediation project managed by DWAF [68]. Sample-collection sites were pre-selected in areas where operational dam restoration and sampling activities would not have any effect/impact on each other. 
However, sample-collection sites were meant to give a large representative sample area of the dam with different features of interest relevant to public health including recreational and fishing activities and unrestricted/ease of public access (Sites S1-S4) (Figure 1). A pre-survey (and during GPS positioning exercise) showed that Site S1 experienced visible algal cell accumulation during the summer, probably due to the wind direction and effects of currents caused by the outflow of water near the water gates. In addition, this site (S1) was situated within a few metres from the water-purification facility for domestic use as well as to the animal conservation area (known as the Snake Park). Site S2 was selected due to low current and turbulence effects that led to massive algal scum throughout sampling. Algal blooms characteristically occur in calm, nutrient-rich water bodies [69]. Likewise, Site S3 was a relatively calm site, and protected from strong winds by some bushes forming part of a conservation scheme around the dam. The water mass around Site S3 is mainly from Magalies River inflow ( Figure 1). Persistent small quantities of algal cells are very common around Site S3, even during the winter season (personal communication from a local resident). Microcystis spp. Identification Preserved cells were identified microscopically according to the Utermöhl method as described [70]. Microcystis aeruginosa species was the dominant species throughout the sampling period across all sites with an average dominance rating of over 80% (i.e., 80% to 100% of total cells identified belonged to M. aeruginosa), data not shown. These findings corroborate those reported in other previously published reports which state that over the past two decades M. aeruginosa has been dominant in Hartbeespoort Dam, particularly during the summer season [25,48,50,71,72]. Other algal species identified included M. wesenbergii (<10%), Spirulina (<5%), Planktothrix spp. (<5%), Melosira spp. (<1%) and Nitzschia spp. (<1%). However, although their presence in this dam has been previously reported [25,50,71], none of these minor species have been associated with MC production in Hartbeespoort Dam. Hence it is widely accepted that M. aeruginosa is the major producer of MCs observed in this dam. On-Site Environmental Conditions Some important physicochemical parameters that prevailed in the dam during the summer are shown in Table 1. The proliferation and persistence of M. aeruginosa as well as MC production are dependent on a number of environmental conditions prevailing in any given water body [2,46,[73][74][75]. While an increased N:P nutrient ratio is known to be crucial in this regard [76], other important parameters have previously been demonstrated to play an important role in MC production, including as demonstrated recently, optimal pH and temperature [75], as well as light intensity [25,75]. During the entire period of our study, the average pH levels in the dam ranged between pH 8.8 ± pH 0.21 to pH 7.4 ± pH 0.07. This pH range is suitable for desolvation of phosphates in a water system as demonstrated by Greenwald [77]. The availability of soluble phosphates in water increases as the pH of the solution increases towards alkaline conditions up to a pH < 8.6 [77]. The optimal pH range observed in Hartbeespoort Dam is a characteristic feature of most eutrophic waters [71,78] as it is suitable for metabolic activity and growth of toxic algae [5,71,75,79] at the expense of less favoured algae. 
Similar to previously reported pH values [25,38,71], floating algal cells were conspicuous on the water surface throughout the dam, including all sampling sites where M. aeruginosa cells were collected (Figure 1). The presence of larger masses of algal cells in some areas (e.g., Site S1 and Site S2), which were later shown to be dominated by M. aeruginosa, indicated that the dam conditions were favourable for M. aeruginosa growth, distribution and dominance as shown earlier [46,50]. Algal growths were observable throughout sampling dates, during which the average temperature in December 2010 through to March 2011 ranged between 24.2 °C and 26.2 °C ( Table 1). The observed temperature range is typical for summer seasons as reported elsewhere favouring the growth of Microcystis spp. [4,28,29,33,73,74,80] as is the case in Hartbeespoort Dam [25,50]. There was an increase in the quantities (>5.94 µg/g DW) of all MCs extracted towards mid-summer, particularly by February 2011 during which period the water surface in the dam was completely green, evidenced by higher quantities of chlorophyll-a measured (data not shown). Algal cell multiplications result in an increased algal biomass paralleled by an increased quantity of chlorophyll-a being produced, as well as elevated MC production [76,81]. Higher quantities of MCs were recorded at Site S2 (Figures 1-3) where chlorophyll-a concentration was the highest, being higher than S1, S3 and S4 (data not shown). A correlation between higher total MC concentrations expressed as MC-LR eq/L and higher chlorophyll-a concentrations have been reported for algal cells collected from this dam [25,47,50]. Areas around Site S1 and Site S2 have been identified as some of the historical algae concentration zones due to their locations and conditions that favour massive algal cell aggregations and growth [82]. A decrease in DO is commonly associated with microbial activity taking place [75]. Towards the end of the summer season there was a tremendous decrease in DO across all sites, an indication that there was higher microbial activity, probably due to bacteria feeding on dying algal cells. A decrease in intracellular MCs extracted from all sites (S1 to S4) in March 2011 supports this observation (compare Figure 2 and Table 2). A decrease in intracellular MC towards the end of the summer is of particular importance in terms of public health since higher releases of extracellular MCs are expected due to cell deaths. Decreasing numbers of visible algal cells in a water column and clear water can be a deceiving water-safety indicator to users, should there be no post-bloom monitoring strategies to determine MCs in recreational water as well as in domestic water, particularly after the summer. Isolation, Separation and Identification of MCs using LC-ESI-MS Pooled MC-rich hydroalcoholic extracts from a partitioning step were dried under vacuum at 40 °C, then re-dissolved in 1 mL MeOH and subjected to the SPE protocol [16]. After LC-MS instrument and column conditioning (10 min), retention times (RTs) for pure and mixed standards were established and optimised. Similarly, SPE products of algal samples were injected into the column using the autosampler and eluted for 20 min as had been done for the standards. [62,86], but there is limited information on their production from the same species found in the Hartbeespoort Dam, thus more investigation is underway to further elucidate their occurrences and to quantify them. 
Quantification and Quantitative Variations of Intracellular MC-LR, -RR and -YR

Microcystis aeruginosa cells were identified in all samples collected from December 2010 to March 2011. Freeze-dried algal samples were treated identically during MC extraction. To minimise instrumental and operational errors, quantifications were performed in triplicate to generate mean values for each MC congener (Table 2). Complete removal of pigments and/or organic matter was achieved through liquid-partitioning procedures using CHCl3 as a non-polar solvent mixed with aqueous methanol prior to SPE [58]. When not properly removed from algal samples, green pigments and organic matter lead to SPE cartridge blockages, prolonged extraction time, and poor LC-MS spectra, resulting in problematic quantification [88] and low yields of MCs due to pigment-influenced degradation of MCs [89] and signal interferences [59]. Higher recoveries of MC-RR, -LR and -YR have been demonstrated following a chloroform-methanol partitioning step prior to the SPE protocol [59], resulting in clean and quantifiable LC-MS spectra. The insolubility of MCs in chloroform is therefore of particular advantage in the isolation and LC-MS quantification of MCs following a partitioning step using a chloroform-methanol solvent system prior to the SPE protocol. In this study, clean LC-MS spectra were obtained following a liquid-partitioning-SPE procedure (data not shown). Microcystin-RR, -LR and -YR were quantified by substituting peak areas into linear regression calibration equations (Y = 5,342x − …) (Table 2). Aquatic environments are dynamically non-uniform in nature, leading to the existence of micro-environments within any given water system. The existence of micro-environments has been shown to influence and govern factors involved in the development of algal blooms and distributions, MC production and quantitative variations, both in time and space [79]. Micro-environmental dynamics regulate important growth factors, particularly N/P nutrient ratios, thereby affecting growth rates of MC-producing algal blooms and leading to variations in MC content [90,91]. The information given in Table 1 and Figure 2 shows visible variations in the amounts of intracellular MCs among sites as well as on a monthly basis, indicating that MC production was governed by differences in micro-environments within the dam [79]. Overall, MC-RR accounted for the highest amounts of all MCs quantified at all sites and over the four months of the sampling period (Table 1, Figure 2). Statistically, however, a one-way ANOVA showed no significant difference in MC concentrations among months, whereas a marginally significant difference among MC congeners was observed (P = 0.62 and P = 0.06, respectively) (Figure 3A and C). On the other hand, a one-way ANOVA showed that there was a significant difference (P < 0.001) in MC concentrations among sites. The highest total MC concentration, about 468 µg/g DW, was recorded at Site S2 in February 2011 (Table 2). These results support the reports of historical massive concentrations of algae [82], particularly M. aeruginosa cells, around this area. A multivariate analysis to determine the effects of interactions between site and MC congener on MC concentration was performed using ANCOVA (Table 3). The results showed a highly significant influence of the sampling site (P < 0.001), resulting in quantitative inter-site (spatial) MC variations (Tables 2 and 3).
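For readers who wish to reproduce this type of comparison, a minimal Python sketch of the one-way ANOVA across months and a factorial site-by-congener model is given below. The long-format data frame, the concentration values and the exact model specification are illustrative assumptions rather than the data or software used in the study, which reported a dedicated ANCOVA.

```python
# Minimal sketch of the statistical comparisons described above: a one-way ANOVA of
# MC concentration across months and a factorial model testing the site x congener
# interaction. The data frame below is a hypothetical long-format table; the exact
# model specification used in the study is not reported, so a two-way model with
# interaction is shown purely for illustration.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: concentration in µg/g DW
df = pd.DataFrame({
    "site":     ["S1", "S1", "S2", "S2", "S3", "S3", "S4", "S4"] * 2,
    "month":    ["Dec", "Feb"] * 8,
    "congener": ["MC-RR"] * 8 + ["MC-LR"] * 8,
    "conc":     [40, 90, 120, 268, 30, 70, 25, 60,
                 10, 25, 35, 80, 8, 20, 6, 15],
})

# One-way ANOVA: MC concentration across months
anova_months = sm.stats.anova_lm(ols("conc ~ C(month)", data=df).fit(), typ=2)

# Factorial model: site, congener and their interaction
anova_site_congener = sm.stats.anova_lm(
    ols("conc ~ C(site) * C(congener)", data=df).fit(), typ=2)

print(anova_months)
print(anova_site_congener)
```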
However, we did not find any significant interaction between period of sampling and site (P = 0.5764), a probable indication that the same species producing MCs was dominant throughout study. Microcystin content variations, including short-term (temporal), inter-site (spatial), perennial as well as seasonal variations, are a common trend in many eutrophic systems due to changing environmental factors and species succession/or dynamics [79]. Publications on intracellular and/or extracellular MC content variations, including temporal/short-term/or inter-site (spatial) variations, have appeared in which either laboratory or actual environmental samples were investigated, including the most recent findings published [44,76,[92][93][94][95], etc. Detailed referenced work on this subject was shown in Kardinaal et al. [79] and Briand et al. [91]. Thus, the observed inter-site variations in intracellular MC concentrations in the Hartbeespoort Dam were not uncommon considering the size of the dam. Moreover, this study was arguably done during the summer season immediately before the onset of the colder season when M. aeruginosa undergoes overwintering [96,97], thus it is our assumption that we were able to capture typical trends in intracellular MC profiles from samples collected representing the dam situation during the summer season. Temporal distribution of Microcystis spp. responsible for MC production in the Hartbeespoort Dam has been demonstrated in the paper published by Van Ginkel et al. [47], however, quantitative evaluations of MC dominances were not shown. In her report Van Ginkel [24] reported that cyanobacterial cells increased in a pool with time from the beginning of the summer season and decreased at the onset of the colder season from March towards the winter months, a probable indication that Microcystis cells were overwintering. From the observations made during our study, the MC profiles are evenly distributed across the dam as they were shown to occur in all samples studied, implying that they are produced from one dominant and welldistributed species, M. aeruginosa. Our findings therefore corroborate those of previous studies which showed that M. aeruginosa is a dominant MC-producing species in the Hartbeespoort Dam [25,38,46,48,50,86,98]. Health Implications of MC-RR Dominance and the Occurrence of (D-Asp 3 , Dha 7 )MC-RR Congener From this study, the observed MC-RR concentrations in the dam were found to be between 5.94 µg/g and 268 µg/g DW (Table 2) among sites. This range of concentration is within and above the range detrimental to detoxifying organs found in fish, as described in the literature [99]. Based on mouse assay the toxicity level of MC-RR (LD 50 = 200 µg/g to 800 µg/kg bw) is less than that of MC-LR (LD 50 = 25 µg/g to 50 µg/kg bw) [100], however, Ito et al. [101] showed that both MC-RR and MC-LR equally inhibit the activities of PP1 and 2A enzymes. The inhibition of PP1 and PP2A activities results in metabolic dysfunction in animals and eventual death in case of higher dosages [31]. In addition, it has also been demonstrated that at elevated concentrations, MC-RR congener had inhibitory effects against antioxidant and detoxification activities of GST on fish gills [99,102] at a dose reaching 10 µg/L. 
MC-RR is the dominant congener in the Hartbeespoort Dam, toxicologically, however, the fact that its dominance would therefore pose the major health hazard may be ignored, since it is the uptake rate [90,103], synergistic effects of other compounds/or MCs [37,103] and the amount [104] of MCs ingested that are physiologically more important in terms of animal poisoning than toxicity values of any particular pure MC congener. For instance, bioaccumulation of MC-RR in fish and plant tissues resulting from its elevated extracellular concentration in water [104] is of importance, as toxicological effects pose health risks for the public consuming seafood products from this water as well as other uses of the water for domestic, recreational and agricultural activities. Although there is no guideline for MC-RR concentrations in recreational waters due to its low toxicity [100], the WHO has set a limit of 25 µg MC-LR eq/L for total MC concentration in recreational waters [105]. Therefore, from Table 2, it can be deduced that upon cell lysis in Hartbeespoort Dam by the end of the summer season, MC-RR would have contributed the highest proportion to the total MC concentration in recreational water, above the WHO threshold level for recreational waters. However, as described above, upon ingestion or dermatological contact, both MC-RR and MC-LR equally inhibit the activities of PP1 and 2A enzymes, thus elevated concentrations of MC-RR alone could potentially pose a health risk to the general consumers of products or services from the dam. In addition, Blom and co-workers [104], showed that the toxicity activity of [D-Asp 3 , (E)-Dha 7 ]MC-RR was higher than that of the MC-RR congener to algae grazers. Thus, although further investigation is still needed to fully characterise the structure of (D-Asp 3 , Dha 7 )MC-RR isolated from the algal extracts we studied, its occurrence in Hartbeespoort Dam is of particular interest to us with regard to what could be its synergistic effect (if any) on the toxicity of the dominant congener, MC-RR and the like. Conclusions The quantitative profiles, spatial distribution and the dominance of the MC-RR congener over MC-LR and MC-YR in Hartbeespoort Dam across all sites as well as the occurrence of MC-WR, MC-(H 4 )YR and (D-Asp 3 , Dha 7 )MC-RR in the summer season were reported and related to M. aeruginosa dominance. The information presented here about the identification of sites with higher MC concentrations, MC distribution and dominance in the dam is of particular importance to the Hartbeespoort Dam authorities, researchers and the general public for risk assessments and water safety purposes. The formulation of strategic interventions to minimise potential health risks to the public using such water is highly dependent on information of this kind, especially wherever high-risk sites are identified [69]. Thus, it is the authors' view that information contained in this document has highlighted existing knowledge and has integrated the new knowledge with the existing knowledge to gain new insights into the state of safety of Hartbeespoort Dam water resources with respect to domestic, agricultural and recreational activities around and downstream of the dam.
The role of left insula in executive set-switching: Lesion evidence from an acute stroke cohort Impairments in executive functions are common in stroke survivors, both in the acute and in the chronic phase. However, little is known about the underlying lesion neuroanatomy of these deficits. This study aimed to elucidate the pattern of brain damage underlying executive dysfunction in a large and acute stroke cohort. Executive set-switching deficits were evaluated by a shape-based analogue of the Trail Making Test (from the Oxford Cognitive Screen) in a consecutive sample of 144 stroke patients (age: 70 ± 15 years, examination: 5 ± 4 days post-stroke; brain imaging: 1.7 ± 2.9 days post-stroke). A voxelwise lesion-symptom mapping analysis was performed by combining executive set-switching accuracy scores with manually delineated lesions on computerized tomography or magnetic resonance imaging scans. The analysis showed that lesions within the left insular cortex and adjacent white matter predicted poorer executive set-switching. Further analyses confirmed that the lesion effect in the left insula survived correction for the low-level visuospatial and motor component processes of executive set-switching. In conclusion, the study provides lesion-based evidence for the role of the left insular cortex in flexible switching of attention. The findings are consistent with emergent models of insular function postulating the role of this region in regulatory aspects of goal-directed behaviour. Introduction Flexible switching of attention between tasks, operations and stimulus sets reflects a core aspect of executive control (Miyake et al., 2000). Deficits in executive set-switching have been documented in both acute (Tamez et al., 2011) and chronic stroke cohorts (Chan et al., 2015;Yochim, Baldo, Nelson, & Delis, 2007). Mounting evidence highlights the importance of studying executive deficits that accompany stroke. For instance, a modulatory relationship between domain-general executive control mechanisms and domain-specific cognitive tasks has been observed within language (Brownsett et al., 2014) and visuo-spatial attention domains (e.g., Robertson et al., 1997;Singh-Curry & Husain, 2009). Further, the presence of an early executive impairment may predispose stroke survivors for experiencing reduced quality of life in the chronic phase (Nys et al., 2006). A commonly used instrument for mapping executive deficits in neurological populations is the Trail Making Test (TMT), a visuo-motor search task that induces switching between competing stimulus sets (e.g., S anchez-Cubillo et al., 2009). Due to its purported executive demands (e.g., Arbuthnott & Frank, 2000;Kortte, Horner, & Windham, 2002), early lesion research speculated that the TMT may be used as a tool for detecting executive impairment stemming from frontal lesions (Stuss et al., 2001). However, several recent studies did not support the specific role of the frontal areas in mediating set-switching in the TMT, both in (sub)acute (<3 months; Tamez et al., 2011;Muir et al, 2015) and chronic phases post-stroke (>3 months, Chan et al., 2015;Muir et al., 2015). Specifically, these studies failed to demonstrate that patient categorisation, either into frontal versus non-frontal groups (Chan et al., 2015;Tamez et al., 2011), or based on the stroke involvement with the nodes of a predefined "executive network" (Muir et al., 2015), discriminated between the TMTderived indices of set-switching. 
By contrast, studies utilising a more sensitive approach of categorising patients based on the presence of a lesion in a voxel-wise fashion (see Bates et al., 2003;Rorden, Karnath, & Bonilha, 2007), suggest the involvement of diverse frontal regions in TMT performance. For instance, a large sample study of chronic brain-injured patients (N ¼ 236) found that lesions within the rostral anterior cingulate cortex predicted less efficient set-switching performance (Gl€ ascher et al., 2012). Another large-sample study of individuals with penetrating head injuries (N ¼ 182) reported an association between regionally non-specific lesions within the left prefrontal cortex, anterior cingulate, insula, parietal and temporal areas and lower executive functioning, as evaluated by the Delis-Kaplan Executive Function System (D-KEFS) tests that included the TMT (Barbey et al., 2012). Furthermore, damage to the left dorsomedial prefrontal cortex was associated with slower set-switching in a study of 27 frontal chronic braininjured patients with heterogeneous aetiologies (Miskin et al., 2016). Finally, lesions within the right dorsolateral prefrontal cortex predicted higher incidence of set-switching errors in a sample of 30 acute, right-hemispheric and predominantly frontal stroke patients (Kopp et al., 2015). Although these voxel-lesion-symptom mapping (VLSM) studies collectively suggest that frontal areas are important for executive set-switching, the regionally specific frontal contributions remain inconclusive. In addition, the use of chronic brain-injured samples or small sample sizes in these studies makes it difficult to tease apart the contributions of localised brain damage from the long-term spontaneous plasticity effects (see Gillebert & Mantini, 2013;Guerra-Carrillo, Mackey, & Bunge, 2014;Pascual-Leone, Amedi, Fregni, & Merabet, 2005). Neuroimaging studies of the TMT performed in healthy volunteers suggest that both frontal and non-frontal areas are important for executive set-switching. Specifically, functional magnetic resonance imaging (fMRI) adaptations of the TMT contrasted the set-switching to a control condition and revealed brain activations in the prefrontal cortex, including the left lateral prefrontal cortex (Moll, de Oliveira-Souza, Moll, Bramati, & Andreiuolo, 2002), left superior frontal gyri (Zakzanis, Mraz, & Graham, 2005) and right-lateralised ventrolateral prefrontal cortex (Jacobson, Blanchard, Connolly, Cannon, & Garavan, 2011). Further, less consistent nonfrontal contributions to the set-switching component of the TMT were also observed across these studies, including insular, parietal and temporal activations. In line with this, fMRI studies that utilised established switching paradigms, including task-switching (Dreher, Koechlin, Ali, & Grafman, 2002), stimulus-response reversal (Dove, Pollmann, Schubert, Wiggins, & von Cramon, 2000), and the Wisconsin Card Sorting Task (Monchi, Petrides, Petre, Worsley & Dagher, 2004), implicated a widely distributed circuitry of both frontal and non-frontal regions in executive set-switching. Indeed, two meta-analyses of set-switching reported converging evidence that in healthy individuals, set-switching operations engage a widespread neural circuitry comprising superior parietal, premotor and anterior insular regions in addition to consistently reported prefrontal activation effects (Derrfuss, Brass, Neumann, & Cramon, 2005;Wager, Jonides, & Reading, 2004). 
Overall, although neuroimaging findings suggest that executive set-switching performance is mediated by a circuitry that extends beyond the prefrontal cortex, the high regional variability of the reported effects makes it difficult to synthesise the findings across studies. In addition, while VLSM studies highlight the involvement of the prefrontal cortex in executive set-switching in chronic brain-injured cohorts (Barbey et al., 2012; Gläscher et al., 2012; Miskin et al., 2016), lesion-mapping studies of executive deficits in acute stroke patients are lacking. The only identified VLSM study of executive set-switching in an acute stroke cohort used restrictive sampling criteria based on anatomical lesion location and included 30 patients (Kopp et al., 2015). The current study aimed to elucidate the neuroanatomical underpinnings of executive set-switching deficits in a large (N = 144), consecutive, acute stroke sample. Executive set-switching performance was quantified by a shape-based TMT analogue included in the Oxford Cognitive Screen (OCS) (Demeyere, Riddoch, Slavkova, Bickerton, & Humphreys, 2015), a stroke-specific screening tool covering the domains of attention and executive function, language, memory, praxis and number processing. In contrast to the standard TMT, which requires number-letter switching, the shape-based TMT analogue requires alternation between task-relevant shapes in order of size. The shape-based TMT analogue is intended to provide a more sensitive screen of executive deficits in stroke, as it permits assessing patients who are impaired in language and/or numerical sequencing (Demeyere et al., 2015). We used a voxel-wise approach (Bates et al., 2003) to map acute lesion data (mean stroke-to-scan interval = 2 days) onto acutely evaluated executive set-switching deficits (mean stroke-to-test interval = 5 days). The use of an acute stroke sample allowed the interrogation of lesion-deficit coupling in the absence of confounding long-term spontaneous plasticity effects (e.g., Gillebert & Mantini, 2013; Guerra-Carrillo et al., 2014; Pascual-Leone et al., 2005). Indeed, there is evidence suggesting a more reliable infarct-deficit mapping for acute, compared to chronic, behavioural datasets (Ochfeld et al., 2010).

2. Material and methods

Participants

The study cohort included 144 acute stroke patients (62 females; mean age = 71 ± 15 years) recruited consecutively from the acute stroke unit at the John Radcliffe Hospital, Oxford. This study involved the collection of acute behavioural data (mean stroke-to-test interval = 4.9 ± 3.8 days, with 73% of cases tested within the first week following hospital admission) and acute, clinically obtained brain imaging scans (mean stroke-to-scan interval = 1.7 ± 2.9 days; Table 1). Neuropsychological examination was performed on the acute ward by means of the OCS. Inclusion criteria for the study were as follows: (i) patients were within 3 weeks of a confirmed diagnosis of an ischaemic or haemorrhagic stroke; (ii) a record of both behavioural and brain imaging data acquired within 3 weeks post-stroke was available; (iii) brain imaging scans were of high enough quality to allow accurate manual tracing of stroke lesions; specifically, there were no imaging artefacts on the computerized tomography (CT) or magnetic resonance imaging (MRI) scans and no accompanying major structural brain abnormalities (e.g., severe cortical atrophy, small vessel disease, etc.)
were detected; (iv) patients presented no symptoms of egocentric hemi-spatial visual neglect, as assessed by the Hearts Cancellation task included in the OCS (Demeyere et al., 2015); and (v) patients had no current or previous diagnosis of psychiatric illness, degenerative disease, epilepsy, or alcohol and/or drug abuse. Demographic data and stroke information were obtained from the medical notes. Table 1 summarises demographic and stroke variables for the cohort. Written or witnessed informed consent was obtained from all participants. The study was approved by the UK National Research Ethics Service (Reference: 11/WM/0299).

2.2. Shape-based Trail Making Test Analogue from the Oxford Cognitive Screen

2.2.1. Task and procedure

Fig. 1 displays the stimulus sets for the shape-based TMT analogue included in the OCS. The use of shape-based stimuli, instead of letters and numbers, was intended to increase the task sensitivity for the detection of executive set-switching deficits in stroke by minimising language and numerical processing demands (Demeyere et al., 2015). The shape-based TMT analogue is thus an inclusive tool for the assessment of executive dysfunction independently of verbal and numerical disturbances. Accordingly, the hemispheric distribution of lesions in the current sample was not biased towards the inclusion of patients who were less likely to show verbal impairment, i.e., patients with right-hemispheric lesions. Specifically, of the entire sample of 144 patients, 47 patients had right-hemispheric lesions, 60 patients had left-hemispheric lesions and 37 patients had bilateral lesions (Table 1). All participants were instructed to draw a line joining shapes in decreasing order of size on an A4 worksheet. In the shape-based TMT analogue, baseline performance was assessed across two tests. The first baseline test required participants to connect large to small circles in the presence of square distractors (Fig. 1A), whereas the second baseline test required connecting large to small squares in the presence of circle distractors (Fig. 1B). In the third, set-switching test, participants were asked to connect both shape types in descending and alternating order (Fig. 1C). A short practice was administered before the start of each test to ensure that participants had an accurate comprehension of the task. Two versions of the shape-based TMT analogue (OCS-version A and OCS-version B), which differed in the shape type deployed (i.e., the use of triangles instead of squares, though no changes were made to the locations of the shapes), were administered between subjects. In contrast to the standard TMT, participants could take as much time as they needed to complete the baseline and set-switching tests, and performance speed was not emphasised by the experimental instruction. The self-paced nature of the task ensured that reduced performance accuracy, if present, was due to connecting fewer shapes in the correct order as a result of executive function deficits, and not due to a fixed time constraint or the requirement to implement an additional task set (i.e., connect shapes as fast as possible).

Behavioural analysis

Executive set-switching performance was expressed as the number of accurately connected shapes in the set-switching test of the shape-based TMT analogue (range of scores: 0–13). These raw accuracy scores were used as a dependent variable in the primary lesion-mapping analysis.
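To make the scoring rule concrete, a minimal Python sketch of how a set-switching accuracy score could be derived from a participant's recorded connection sequence is shown below. The shape encoding, the target sequence, and the stop-at-first-error rule are illustrative assumptions, not the published OCS scoring procedure.

```python
# Minimal sketch of how the accuracy score for the set-switching test could be
# derived from a participant's recorded connection sequence. The target sequence
# (alternating circles and squares in descending size) and the rule of counting
# connections up to the first error are assumptions made for illustration only.
def set_switching_accuracy(connected, target):
    """Count shapes connected in the correct (alternating, descending-size) order."""
    score = 0
    for drawn, expected in zip(connected, target):
        if drawn == expected:
            score += 1
        else:
            break  # scoring stops at the first incorrect connection (assumption)
    return score

# Shapes encoded as (type, size rank); size rank 1 = largest
target_sequence = [("circle", 1), ("square", 1), ("circle", 2), ("square", 2),
                   ("circle", 3), ("square", 3), ("circle", 4), ("square", 4),
                   ("circle", 5), ("square", 5), ("circle", 6), ("square", 6),
                   ("circle", 7)]  # 13 shapes, matching the 0-13 score range

participant = target_sequence[:9] + [("circle", 6)]  # errs after nine correct links
print(set_switching_accuracy(participant, target_sequence))  # -> 9
```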
Across two baseline tests, performance was expressed as the number of accurately connected shapes along a single stimulus dimension (range of scores: 0–6). The raw accuracy scores from these baseline tests were averaged for each participant and used as a covariate of no interest in the control lesion-mapping analysis. Due to the self-paced nature of the task, the completion time was highly variable across participants and was not correlated with the set-switching accuracy. Prior to the lesion-mapping analysis, a series of tests were conducted to inspect whether set-switching accuracy scores were matched for each demographic and stroke variable. Specifically, the statistical tests included: Pearson's correlations (age, years of education, lesion size, stroke-to-test and stroke-to-scan interval), two-sample t-tests (gender, OCS version, imaging modality and stroke aetiology) and one-way ANOVAs (handedness and lesion hemisphere). We used a statistical threshold of p < .05, Bonferroni-corrected for multiple comparisons (uncorrected p < .004).

Lesion mapping

CT/MRI scans were acquired for all patients as part of their routine clinical assessment following stroke at the John Radcliffe Hospital, Oxford. A total of 120 axial scans in the CT modality (29–62 slices, slice thickness: 3–5 mm) and 24 scans in the MRI modality (24–192 slices, slice thickness: 1–6 mm) were acquired containing full-brain coverage. Of the 24 MRI scans, 2 axial scans were acquired using a T1-weighted sequence, 20 axial scans using a T2-weighted sequence and 2 coronal scans using a fluid-attenuated inversion recovery (FLAIR) sequence.

Image processing

Prior to delineating lesions, images were reoriented to the anterior commissure using Statistical Parametric Mapping 8 (SPM8) (Wellcome Trust Centre for Neuroimaging, London, United Kingdom). The lesion boundaries were manually delineated directly on the CT or MRI image, slice-by-slice, in the plane of the highest resolution, using MRIcron (McCausland Center for Brain Imaging, Columbia, SC, USA). Manual delineation of lesions was performed by two independent experienced raters blind to the behavioural results. All lesion masks were smoothed at 5 mm full width at half maximum in the z-direction and binarised using a 0.5 threshold. Both the patients' clinical scans and the delineated lesion masks were warped into 2 × 2 × 2 mm stereotaxic space using the Clinical Toolbox, based on SPM8 (Rorden, Bonilha, Fridriksson, Bender, & Karnath, 2012) (https://www.nitrc.org/projects/clinicaltbx/). The normalization algorithm in this toolbox involves affine (i.e., linear) and non-linear transformations, and uses cost-function masking (Brett, Leff, Rorden, & Ashburner, 2001). This procedure minimises the bias induced by the presence of abnormal areas by removing the lesion from the normalisation transforms (Brett et al., 2001). The Clinical Toolbox has the further advantage that it uses spatially matched CT and MRI normalisation templates representative of a healthy elderly population. The quality of normalisation was evaluated through visual inspection and was deemed satisfactory in all patients (see Figure S1 for an example of spatial normalization).

Lesion-mapping analysis

We performed a VLSM analysis (Bates et al., 2003) to identify the voxels with a significant difference in the behavioural scores depending on the voxel lesion status (lesioned/intact).
Specifically, we used a parametric VLSM tool that by default calculates and uses the lesion size of each patient as a covariate of no interest, and that allows one to add multiple covariates of no interest (https://langneurosci.mc.vanderbilt.edu/resources/). Correction for multiple comparisons was estimated using permutation-based thresholding (5000 iterations). Unless mentioned otherwise, a minimum of 10 patients with a lesion at any given voxel was necessary for the statistical test to be performed (e.g., Shahid et al., 2017), and the reported results survived a voxelwise threshold of p < .001, corrected for multiple comparisons based on cluster size and the permutation method (Kimberg, Coslett, & Schwartz, 2007). The primary VLSM analysis was conducted on the raw accuracy scores from the set-switching test. The analysis was performed in the full sample of patients (N = 144), and repeated in the subset of patients with unilateral lesions (N = 107) and in the subset of right-handed patients (N = 99). In a first set of control analyses, we corrected for the low-level visuo-spatial components of the set-switching test by using the average accuracy across the two baseline tests as a covariate of no interest. Since performance on the baseline tests was close to ceiling (Table 2), we repeated the VLSM analysis on the "executive score", a composite score for executive functioning. This metric was computed by subtracting the set-switching accuracy (maximum score = 13) from the summed accuracy scores on the two baseline tests (maximum score = 12) (see Demeyere et al., 2015). The executive score thus reflects the change in executive set-switching accuracy relative to baseline, with higher scores indicating lower executive set-switching ability. A second control analysis was performed to demonstrate the task specificity of the observed findings. To this end, we categorised the stroke patients depending on their lesion overlap with the identified clusters according to a predefined threshold (10%). We then evaluated the performance of both patient groups on the imitation task from the OCS (praxis domain). This mirror task requires participants to imitate meaningless hand and finger gestures performed by the examiner (Demeyere et al., 2015). The task was chosen for its similarity to the set-switching test in terms of difficulty level in healthy individuals (median score on the imitation task: 11; median score on the set-switching test: 12), the range of scores (imitation task: 0–12; set-switching test: 0–13), and the cut-off value for impairment (imitation task: 8; set-switching test: 7) (Demeyere et al., 2015). To evaluate the cross-task specificity of the findings, the scores on the set-switching test and the imitation task were re-scaled onto a 0–10 scale and analysed using a mixed ANOVA with lesion location (overlap/no overlap) as between-subject factor and task (set-switching test, imitation task) as within-subject factor.

3.2. Lesion neuroanatomy

Fig. 2 shows an overlay of 144 lesion masks mapped into stereotaxic space. In line with empirical reports documenting predominantly subcortical lesion neuroanatomy in clinical stroke samples (e.g., Corbetta et al., 2015), the highest proportion of lesions involved damage to subcortical and insular areas.
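A schematic Python sketch of the core VLSM logic described above (a voxelwise group comparison with a minimum lesion-overlap criterion and permutation-based thresholding) is given below. It is a simplified illustration with synthetic data; it does not reproduce the covariate handling (lesion volume) or cluster-size correction of the dedicated parametric VLSM tool used in the study.

```python
# Schematic sketch of voxel-based lesion-symptom mapping (VLSM): at every voxel
# lesioned in at least 10 patients, behavioural scores are compared between lesioned
# and intact groups, and a significance threshold is derived from permutations of the
# scores (maximum-statistic approach). Synthetic data only; lesion-volume covariates
# and cluster-size correction used in the study are not reproduced here.
import numpy as np
from scipy import stats

def vlsm_t_map(lesions, scores, min_overlap=10):
    """lesions: (n_patients, n_voxels) binary matrix; scores: (n_patients,) accuracies."""
    n_voxels = lesions.shape[1]
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = lesions[:, v] == 1
        if lesioned.sum() < min_overlap:
            continue  # voxel lesioned in too few patients: no test performed
        # negative t indicates lower accuracy in the lesioned group
        t_map[v] = stats.ttest_ind(scores[lesioned], scores[~lesioned]).statistic
    return t_map

def permutation_threshold(lesions, scores, n_perm=100, alpha=0.05, seed=0):
    """Critical value for the most extreme negative t across voxels under permutation."""
    rng = np.random.default_rng(seed)
    null_minima = [np.nanmin(vlsm_t_map(lesions, rng.permutation(scores)))
                   for _ in range(n_perm)]
    return np.quantile(null_minima, alpha)

# Toy example: 144 patients, 200 voxels, accuracy scores in the 0-13 range
rng = np.random.default_rng(1)
lesions = (rng.random((144, 200)) < 0.15).astype(int)
scores = rng.integers(0, 14, size=144).astype(float)

t_map = vlsm_t_map(lesions, scores)
critical_t = permutation_threshold(lesions, scores)
significant_voxels = np.where(t_map <= critical_t)[0]
print(f"{significant_voxels.size} voxels below the permutation threshold")
```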
Primary VLSM analysis

To identify which brain areas, when lesioned, predict poor executive set-switching, accuracy scores from the set-switching test were submitted to a VLSM analysis with lesion size as a covariate of no interest. This analysis identified a cluster in the left insular cortex extending into the precentral gyrus (1382 voxels, 2 × 2 × 2 mm³; centre-of-mass Montreal Neurological Institute (MNI) coordinates: x = −42, y = 0, z = 19) (Fig. 3A). The VLSM analysis of set-switching accuracy scores in patients with unilateral lesions (N = 107) identified a similar left-lateralised cluster in the insular cortex extending into the precentral gyrus (Figure S2A). Further, the lesion effect in the left insular cortex was robust to handedness, as indicated by the results of the VLSM analysis conducted in the right-handed patients only (N = 99) (Figure S2B).

Analyses controlling for low-level visuospatial and motor demands

A secondary VLSM analysis was performed to correct the performance on the set-switching test for the low-level visuospatial and motor demands as measured by the two baseline tests. A VLSM analysis with accuracy on the baseline tests as covariate of no interest identified an overlapping cluster in the left insula extending into the precentral gyrus (179 voxels, 2 × 2 × 2 mm³; centre-of-mass MNI coordinates: x = −41, y = 1, z = 17) (Fig. 3B). Similarly, a VLSM analysis on the executive score (an index of the set-switching accuracy relative to baseline) (Demeyere et al., 2015) showed an association between the left insula and poor executive set-switching (Figure S3).

Analyses assessing cross-task specificity

A control analysis was performed to demonstrate the cross-task specificity of the left insular lesion effect. According to a predefined lesion overlap threshold (10%) (see Methods, section 2.3.2.), the lesions of 13 out of the 144 patients overlapped with the left insular cluster (as depicted in Fig. 3B). We compared performance of the two patient groups (overlap/no overlap with the left insular cluster) on the set-switching test and an unrelated imitation task. Specifically, a mixed ANOVA with lesion location (overlap/no overlap with the left insula cluster) and task (set-switching/imitation) as factors showed a main effect of lesion location (F(1,142) = 6.84, p = .01), a main effect of task (F(1,142) = 5.46, p = .02), and a significant interaction between lesion location and task (F(1,142) = 6.68, p = .01). Post-hoc independent-samples t-tests confirmed that there was a significant difference between the patient groups (overlap/no overlap) in the set-switching test (t(142) = 3.55, p = .001), but not in the imitation task (t(142) = .88, p = .38). In summary, a lesion in the left insula discriminated between high and low accuracy scores on the set-shifting test (Fig. 4A), while it was not discriminatory of high and low accuracy scores on the imitation task (Fig. 4B).

Discussion

This study used voxel-based lesion-symptom mapping (VLSM) in a large sample of acute stroke patients to identify regions critical for mediating flexible switching of attention. We used a shape-based analogue (Demeyere et al., 2015) of the well-established (Sánchez-Cubillo et al., 2009) and validated (e.g., Arbuthnott & Frank, 2000) metric of executive set-switching, i.e., the Trail Making Test (TMT).
The data showed that patients with localised damage to the left insular cortex performed worse in the set-switching test of the shape-based TMT analogue compared to patients without such damage. The finding was corroborated by further control analyses which demonstrated that the negative impact of left insular damage on executive set-switching (i) survived correction for the non-executive components of the task; (ii) was replicated in the VLSM analysis of the baseline-corrected executive scores (Demeyere et al., 2015); (iii) was replicated in the analyses restricted to patients with unilateral lesions; and (iv) was replicated in the analysis restricted to right-handed patients. Categorisation of patients based on the extent of their left insular lesion overlap further illustrated the specificity of the lesion effect to the executive demands of the shape-based TMT analogue. Specifically, the left insular damage discriminated between high and low accuracy scores in the set-switching test, but not in the baseline tests or in the imitation task, which matched the set-switching test in terms of difficulty level, though measuring a separate cognitive domain (praxis). This finding reinforces the notion that the lesion effect in the left insula was not driven by patients' inability to comprehend the task instructions, overall poor task performance, or an inability to retain task-relevant information in short-term memory. Overall, this study provides lesion-based evidence supporting a role for the left insular cortex in executive set-switching above and beyond the low-level visuospatial and motor components of the test. The current study used a shape-based TMT analogue that differs from the standard TMT in a number of ways. Firstly, the shape-based TMT variant utilises shape-based stimulus sets, assessing executive deficits independently of number and letter sequencing impairments (Demeyere et al., 2015). Secondly, the task involves the sequential administration of two baseline tests, in which participants are required to connect task-relevant shape stimuli that are embedded in an array of task-irrelevant distractor shapes. Thirdly, in contrast to the standard TMT protocol (e.g., Bowie & Harvey, 2006), speed is not emphasised by the test administration protocol. Finally, while performance accuracy is not the main outcome variable in the standard TMT (Bowie & Harvey, 2006), it is a commonly used metric to supplement the completion time measurement (e.g., Kopp et al., 2015; Stuss et al., 2001). Accuracy measurement may be particularly informative for describing aberrant executive functioning in patient populations that tend to exhibit highly variable completion time profiles. Given the differences between the standard and shape-based TMT variants, the extent of generalizability of the findings obtained with the latter should be evaluated in future studies. Previous VLSM findings of executive deficits in brain-injured samples have implicated diverse frontal regions in TMT set-switching, including the anterior cingulate cortex (Gläscher et al., 2012), right dorsolateral prefrontal cortex (Kopp et al., 2015), left dorsomedial prefrontal cortex (Miskin et al., 2016), as well as regionally non-specific left-lateralised prefrontal cortex (Barbey et al., 2012). A number of methodological aspects might have contributed to the variability of the reported frontal regions in these studies, as well as inconsistencies with the current study.
In particular, across the above-described VLSM studies, lesion effects were identified in relation to various TMT-derived measurements that differed both with respect to the metric used (e.g., completion time versus accuracy) and with respect to the type of baseline correction. For instance, a VLSM study of chronic braininjured patients used standardised, covariate-corrected and baseline-corrected completion time residuals as the outcome variable (Gl€ ascher et al., 2012). Another VLSM study of war veterans with penetrating, cortical lesions used baselinecorrected completion time scores recorded in the numberletter switching condition from the D-KEFS variant of the TMT (Barbey et al., 2012). Finally, a study of acute stroke patients with right-lateralised lesions used both completion time and accuracy metrics, but identified significant lesion effects only in relation to the latter (Kopp et al., 2015). It should also be noted that the above VLSM findings of executive setswitching were based on aetiology-diverse chronic samples (Gl€ ascher et al., 2012;Miskin et al., 2016), some of which included a stroke subset (Gl€ ascher et al., 2012). In this study, we analysed behavioural and imaging data collected in a large stroke cohort within 3 weeks (average 5 days) post-injury. Importantly, the use of acute stroke sample minimised the contribution of remote, non-damaged brain areas that may support executive functioning during the chronic phase of stroke as a result of functional brain reorganisation (e.g., Carter, Shulman, & Corbetta, 2012;Pascual-Leone et al., 2005). The lesion distribution of the current stroke sample fits with the empirical observation that the majority of the lesions induced by stroke affect subcortical structures (Corbetta et al., 2015). More specifically, empirical reports suggest that fewer than 20% of total stroke cases include lesions to cortical locations (e.g., Corbetta et al., 2015). By extension, the current sample exhibited a relatively low cortical lesion overlap in frontal areas and this, combined with a stringent minimum voxel overlap criterion (N ¼ 10), may have limited the sensitivity of the VLSM analysis to detect significant effects in these locations. The negative result concerning the frontal regions therefore should be interpreted with caution. Another important methodological consideration of this study is that the patients with egocentric visual neglect were screened out. The right-lateralised lesion topography commonly identified in relation to the visuo-spatial neglect syndrome (e.g., Verdon, Schwartz, Lovblad, Hauert, & Vuilleumier, 2010), implies that the sensitivity of the VLSM analysis to detect effects in the right hemisphere might have been reduced. Whereas the purpose of such restrictive sampling was to minimise the contributions of specific visuospatial deficits to the executive set-switching measurement, accumulating evidence suggests a modulatory relationship between visual and executive aspects of attention in stroke (e.g., Singh-Curry & Husain, 2009). More research is therefore warranted to characterise the co-occurrence of executive impairments and hemi-spatial visual neglect in acute and chronic stroke cohorts, and to further elucidate to what extent damage to regions previously identified in association with specific spatial neglect components (Verdon et al., 2010) may underlie such a cross-domain relationship. A characterisation of the left insular region identified in this VLSM study is warranted. 
While it is well-known that the insular cortex involves anterior and posterior subdivisions (e.g., Naidich et al., 2004), a probabilistic atlas reflecting a finegrained gyrus organisation of this region has only been recently published (Faillenot, Heckemann, Frot, & Hammers, 2017). A qualitative comparison between the left insular region isolated in this study and the insular micro-anatomical subdivisions (Faillenot et al., 2017), suggested that the microanatomy of the left insular cluster included both anterior and posterior components. It should be noted that the probabilistic map of insula is representative of a healthy, young population, while the identified left insular cluster here was based on an elderly cohort that suffered an acute brain injury. In addition, while there is consensus in the fMRI literature in support of the functional dissociation between anterior and posterior insula (e.g., Menon & Uddin, 2010), stroke lesions reflect the distribution of the underlying vasculature and this frequently obscures the boundaries of the functional areas. By extension, the lesion effect in the left insula reflects the resolution of the vascular territory affected in the current sample and therefore does not permit making fine-grained anatomical or functional distinctions of this region. A potential drawback of the current study is that the lesions were delineated on the clinical scans acquired in CT and MR modalities. Although we used age-specific templates that are appropriate for studies with mixed CT/MR modalities , we cannot exclude that the heterogeneity in the spatial resolution of the scans may have influenced the accuracy of the lesion delineation. Although executive deficits have not been commonly reported following insular lesions (Ibañez, Gleichgerrcht, & Manes, 2010), a recent single-case study found that brain damage caused by the left insular stroke led to an isolated executive impairment (Markostamou, Rudolf, Tsiptsios, & Kosmidis, 2015). In particular, the stroke patient from this study performed below a cut-off on all indices of executive functioning, including a metric of mental flexibility, but had no language, perceptual or memory impairments. Another combined, multi-centre study that mapped TMT setswitching performance on separate indices of grey matter and white matter damage, yielded complementary data for the left insular involvement in executive functioning (Muir et al., 2015). Specifically, the study found that ischemic stroke within the left superior longitudinal fasciculus, a white matter tract traversing peri-insular region, predicted TMT setswitching deficits in both acute and chronic stroke cohorts. Conversely, stroke involvement with the grey matter regions within an "executive network" (including lateral and medial prefrontal cortex, lateral parietal cortex and thalamus) was not associated with TMT set-switching deficits. Based on these data and consistent with the insular lesion effect identified in the current study, which included both grey matter and white matter compartments, it could be argued that white matter pathways traversing the insular region might be as important as insular grey matter for accurate set-switching. More broadly, the current data fits with the recent network models of insular function that implicate this region in the task-dependent control of goal-directed behaviour (Menon & Uddin, 2010), in addition to its well-established role in visceral and sensory processing (Craig, 2009). 
For instance, a functional connectivity study anchored the anterior insula within a 'cognitive control network' due to its involvement during a cognitive-control task combining working memory and target-switching demands (Cole & Schneider, 2007). In addition, the anterior insula has been designated a core node within a "saliency network" due to its role in tagging salient environmental stimuli for additional, controlled processing (Menon & Uddin, 2010). These models provide a good framework for the task-based fMRI data indicating that executive set-switching operations rely on a large-scale distributed network of brain regions, extending beyond the prefrontal cortex and including the anterior insula. Indeed, two meta-analyses of neuroimaging studies that used switching paradigms identified the anterior insula as consistently activated across the switching studies (Derrfuss, Brass, Neumann, & von Cramon, 2005; Wager et al., 2004). Specifically, activation in the anterior insula was detected in task contexts of uninstructed set-switching initiated based on trial-and-error learning (e.g., Monchi et al., 2004), as well as in tasks using explicit cues to induce switching (e.g., Dove et al., 2000; Dreher et al., 2002). More closely related to the current study, however, is the finding that bilateral insular activation discriminated between set-switching and control conditions in the context of the fMRI-adapted TMT (Zakzanis et al., 2005).

Conclusions

To our knowledge, this is the first large-sample study that investigated the mapping between executive deficits, assessed through a stroke-population-optimised TMT tool, and brain lesions delineated from acute stroke scans. The study yields lesion evidence for the involvement of the left insular cortex and adjacent white matter in flexible switching of attention. Importantly, we observed that this effect was independent of the low-level visuospatial and motor demands of the shape-based TMT analogue, and could not be explained by overall poor task performance. Our findings are in accordance with recent network models of insular function, postulating a role for this region in regulatory aspects of goal-directed behaviour. Based on the network-level accounts of insular function, there are most likely additional brain regions that are critical for accurate set-shifting performance but went undetected in the current study due to sample limitations. Furthermore, although the focus of the current study was on the regional structural damage arising from stroke, accumulating evidence suggests that remote dysfunction can occur in structurally intact regions connected to the area of lesion (e.g., Carter et al., 2012). Therefore, a possible avenue for future research may involve the study of the neural mechanisms through which focal brain damage in the insular cortex leads to alterations in functionally connected brain regions.
Recruitment of HIV-1 envelope occurs subsequent to lipid mixing: a fluorescence microscopic evidence Entry of the human immunodeficiency virus (HIV) into the target cell is initiated by fusion with the cell membrane, mediated through the envelope glycoproteins gp120 and gp41, following engagement to CD4 and the co-receptor. Previous fusion kinetics studies on the HXB2 envelope protein (Env) revealed that Env recruitment occurred at about 13 min concurrent with the lipid mixing. To resolve the temporal sequence of lipid mixing and recruitment, we employed an inhibitory assay monitored by fluorescence microscopy using a gp41 ectodomain (gp41e) fragment, which blocked Env recruitment in stark contrast to the lack of gp41e effect on the lipid mixing. In addition, to demonstrate the mode of action for the inhibition of gp41e, our results strongly suggested that lipid mixing precedes the Env recruitment because lipid mixing can proceed with Env recruitment inhibited by exogeneous gp41e molecules. Importantly, it was found that the random clustering of Env molecules on the membrane surface occurred at ~1 minute whereas the Env recruitment was observed at 13 minutes after the attachment of Env-expressing cell to the target cell. This > 10-fold temporal discrepancy highlights that the productive assembly of Env molecules leading to fusion requires spatio-temporal coordination of several adjacent Env trimers aggregated via directed movement. Background As the first step in the replication cycle of a virus, membrane fusion is mediated by the viral fusion protein. For HIV-1, the fusion protein consists of the non-covalently associated surface subunit gp120 and transmembrane subunit gp41. Attachment of gp120 to the CD4 receptor triggers a cascade of conformational changes in gp120 to expose the co-receptor binding site, which in turn induces structural rearrangement of gp41 and insertion of the fusion peptide into the host cell, forming the pre-hairpin structure. Subsequent refolding of the heptad repeat regions pulls the fusing membranes into close proximity to facilitate the membrane merger. It has been shown that hemifusion (lipid membrane mixing) is an important intermediate step in the transition to complete membrane fusion for the fusing membranes [1]. Additionally, the assembly of multiple Env proteins on the membrane surface is a critical process in the fusion reaction. Recently, we have examined the function of the co-receptor in HIV-1 HXB2-mediated fusion to dissect kinetically the varying stages of the fusion event [2]. An important finding was that the co-receptor is essential for gp120 shedding from gp41 following CD4 engagement and that the process is gradual, spanning over 10 minutes. It was also found that coreceptor binding accelerates six helix bundles (SHB) formation, promotes hemifusion and complete fusion, and induces the recruitment of adjoining Env subunits to create and sustain the fusion pore. The temporal order of hemifusion and recruitment was, however, not resolved in that study. Together with SHB formation, these three steps constitute the later stages of the fusion event [3]. Because the major portion of energy required for fusion is expected to be expended in these processes (as they involve repulsive hydration force, lipid reconfiguration, pore formation and enlargement, and membrane merger), the rate-limiting step of the fusion likely lies in these steps. 
Understanding the kinetics of these processes is therefore critical for the mechanistic study of virus-mediated fusion and the pursuit of antivirus drug and vaccine strategy. Clustering of the viral fusion proteins on the membrane surface has been previously studied for their possible role in membrane fusion [4,5]. Thus, the distribution of influenza hemagglutinin molecules on the cell surface has been documented at 40 nm resolution, displaying clusters of various size and apparent fluidity [6]. The kinetics of lateral movement and assembly of protein or proteineous complex on the membrane surface has not been elucidated [7]. To resolve the dynamics of lipid mixing and recruitment, we took the competitive inhibition approach in which a recombinant gp41 ectodomain devoid of the fusion peptide segment, termed gp41e, is added to the Env-expressing effector cell in the presence of the target cell at different time points, to observe the effect on the processes being tested. We found that hemifusion was not blocked by the recombinant gp41e protein treatment, whereas Env recruitment was inhibited when the inhibitor was added within 13 minutes of mixing the effector and target cells. The micro-imaging results indicate that hemifusion precedes the Env recruitment and support the view that blocking of Env clustering on the membrane surface is a mechanism of gp41e inhibitory action. In addition, fluorescence recovery after photobleaching (FRAP) was used to demonstrate that the observed functional recruitment dynamics was not a random diffusion, but a highly temporally and spatially orchestrated process. The biological implication of the finding is discussed. Characterization of gp41 ectodomain (gp41e) A recombinant protein derived from the HIV-1 HXB2 gp41 ectodomain was designed in order to mimic the fusogenic state of viral gp41 fusion protein. Gp41e, comprising 139 residues (gp41 24-154 with 8 His tagged to the C-terminus), includes the 59-residue N-domain, the 28-residue loop region, and the 44-residue C-domain. An 8 His tag was added at the C-terminal end of the gp41e construct to allow for purification by immobilized metal ion affinity chromatography (IMAC). In our study, the complete gp41e, rather than the deletion constructs used in other previous studies [8][9][10][11], should better mimic the behavior of HIV-1 gp41. We have shown that gp41e spontaneously folds into an oligomeric, predominantly helical state, and dissolves in distilled water in the native conformation [12]. Gp41e inhibits content mixing of cell-cell fusion Gp41e proteins were tested for the ability to inhibit the fusion of HIV-1 Env-expressing (effector) HeLa cells and CD4-X4-expressing (target) NIH3T3 cells. The target cells were loaded with the cytoplasmic dye 5-and 6-[(4-chloromethyl)benzoyl]amino tetramethylrhodamine (CMTMR, red) for 1 hour at 37°C. CMTMR-labeled target cells were co-cultured with calcein-labeled effector cells at 37°C, and the dye redistribution was monitored microscopically. The aqueous dye mixing was previously determined to occur at about 20 minutes post-incubation [2]. In the present experiment, we added 1 μM of gp41e prior to the co-incubation of effector and target cells. Content mixing was clearly inhibited ( Figure 1B), in contrast with the control which was free of gp41e ( Figure 1A). The result implies that gp41e prevents the fusion pore formation in the cell fusion cascade. Next, we examined the effect of gp41e on Env recruitment and lipid mixing. 
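As an illustration of how such dye-redistribution (content-mixing) data could be scored computationally, a minimal Python sketch is given below. The per-cell intensity values and the thresholds are hypothetical; the study scored dye redistribution by inspection of fluorescence micrographs rather than with this kind of automated pipeline.

```python
# Minimal sketch of how content mixing could be scored from two-channel images:
# a target cell is counted as fused when it becomes positive for both the calcein
# (green, effector-derived) and CMTMR (red, target) signals. Intensity thresholds
# and the per-cell measurements below are hypothetical placeholders.
import numpy as np

def fusion_fraction(green_means, red_means, green_thresh=50.0, red_thresh=50.0):
    """Fraction of red-labelled (target) cells that also acquired the green dye."""
    green = np.asarray(green_means, dtype=float)
    red = np.asarray(red_means, dtype=float)
    targets = red > red_thresh
    fused = targets & (green > green_thresh)
    return fused.sum() / max(targets.sum(), 1)

# Hypothetical per-cell mean intensities from segmented cells in one field of view
green_means = [5, 120, 80, 10, 95, 8]
red_means = [90, 110, 85, 95, 100, 12]
print(f"fused fraction: {fusion_fraction(green_means, red_means):.2f}")
```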
Addition of gp41e had no effect on the lipid mixing step Complete membrane fusion includes the content mixing and the exchange of lipids between Env-expressing effector cells and target cells [13]. Recently, we have shown that the final step, content mixing, was preceded by Env recruitment and lipid mixing [2]; the temporal order of which was not distinguished, however. Here, we attempted to investigate the influence of gp41e. For this purpose, HXB2 Env-expressing HeLa cells and CD4-X4expressing NIH3T3 target cells were labeled with two different hydrophobic fluorescent probes, DiO (green) and DiI (red), respectively. DiI-labeled target cells ( Figure 2) were co-cultured with DiO-labeled effector cells at 37°C, and lipid dye mixing was recorded in real-time by fluorescence microscopy coupled to a CCD camera. The fluorescent dye mixing results from the exchange of membrane lipids. Target cells co-cultured with effector cells displayed lipid dye redistribution at about 13 minutes, either in the presence or absence of gp41e ( Figure 2). HIV-1 HXB2 Env recruitment is impeded upon addition of gp41e proteins The higher order structure of multiple Env-receptor complexes is required to form a fusion pore [14,15]. HIV-1 HXB2 Env recruitment could be monitored after binding to target cells expressing CD4-X4 molecules [2]. In addition, HIV-1 Env recruitment was examined by conjugating the EGFP fusion protein (green fluorescence) to Env as a probe for the migration of Env protein on the HeLa effector cells. The recruitment of Env-EGFP on the effector cell surface was recorded by a CCD digital camera upon the addition of target cells. To resolve the dynamics of Env recruitment, we took the competitive inhibition approach in which gp41e is added to the Env-expressing effector cell in the presence of the target cell, at different time points of 0, 10, 13, 20 minutes ( Figure 3A-D, respectively) to observe the effect on the processes. Figure 3 shows only 0, 13, 15, 20 minute time points of treatment with gp41e. Initially, the Env-EGFP was distributed on the effector cells without extensive fluorescence clusters found at the contact area with target cells. For gp41e addition at 13 minutes of co-incubation and thereafter (13 and 20 min, Figure 3C-D), onset of the HIV-1 Env-EGFP fluorescence can be observed which clustered at the initial contact site with target cells, and the intensity increased up to 20 minutes post-incubation, at which time the complete membrane fusion (content mixing) occurred. Thus Env-EGFP recruitment was clearly impeded by the gp41e protein added prior to 13 minutes of contact between effector and target cells ( Figure 3A-B). Env recruitment in the competitive inhibition approach with gp41e was quantified by the fluorescent tag intensity in Figure 4. The data clearly indicates that, when gp41e was added at 13 minutes after co-incubation of the effector and target cells, the Env recruitment was significantly hindered as compared to the gp41e treatment at the 20 minute time point. Recovery time measured by FRAP indicates that Env recruitment is not a lateral diffusion process One intriguing question regarding HIV-1 Env recruitment leading to fusion is whether the process results from the lateral diffusion on the membrane surface. To address the issue, FRAP was used to analyze the mobility of Env in live cells. In this experiment, the laser of a confocal microscope was directed to a specific region of interest in a cell expressing the fluorescent protein to be analyzed. 
The region of interest (ROI) is exposed to the maximum laser intensity, resulting in prolonged photobleaching of the fluorescent signal (EGFP tagged to Env). The mobility of the tagged protein is then evaluated from the recovery/return time of the fluorescent signal in the bleached ROI. ROI 2 (Figure 5A) is the control region without bleaching, whose fluorescence intensity is therefore not affected. Figure 5B shows the fluorescence image of Env-EGFP-expressing effector cells, in which the bleached ROI 1 and the control ROI 2 are indicated. The fluorescence intensity of ROI 1 decreased to 15% immediately after the pulse and recovered to 65% of its original intensity 1 minute later (Figure 5A). Strikingly, the recovery time is an order of magnitude shorter than the time for Env recruitment (~13 min). The disparate kinetics indicate that the Env recruitment leading to the productive fusion event is not the random diffusion process probed by the FRAP measurement. The main distinction between the movement of protein molecules on the membrane surface during the fusogenic process and that in the FRAP experiment is that, in the latter, the fusion proteins are not appropriately assembled and their movement is not concerted, whereas in the former the movement is highly coordinated spatially and temporally. These points are further addressed in the Discussion.

Mechanism of gp41e action on membrane fusion
The inhibitor approach has been employed to dissect the dynamics of Env-mediated membrane fusion [2,16,17]. Thus, T-20 was used to demonstrate the induction of the pre-hairpin structure by CD4 attachment [16]; anti-HR1 and anti-HR1/HR2 antibodies have been used to differentiate the timing of pre-hairpin exposure and SHB formation at 31.5°C [17]; and the NC-1 antibody has been used to resolve the temporal order of pre-hairpin, NC-1-sensitive and SHB formation [2,18]. In the present study, we used gp41e to dissect the kinetics of hemifusion, gp41 subunit recruitment, and complete fusion. The fluorescence micro-imaging approach adopted here effectively circumvented the limitation on temporal detection in early studies of the dynamics of virus-induced fusion [19]. Gp41e is the gp41 fragment devoid of the FP (fusion peptide), TMD (transmembrane domain), and CT (cytoplasmic tail) regions, and it is therefore not anchored to the bilayer core. This property was exploited as a means to resolve the temporal sequence of lipid mixing and recruitment, because it has been shown that lipid-anchored hemagglutinin mediates hemifusion, but not complete fusion [1], and the recruitment of fusion proteins to promote complete fusion involves spatial and orientational orchestration of several Env trimers. Thus, we reasoned that gp41e, lacking the membrane anchor, should inhibit the process subsequent to recruitment. Similar trimeric recombinant gp41 proteins with entry-inhibitory activity have been documented [9]. As expected, gp41e exhibited potent inhibitory activity on the aqueous content mixing between Env-expressing effector cells and CD4-X4-expressing target cells, in agreement with previous studies [9,20]. To pinpoint the step with which gp41e interferes, we investigated lipid mixing and the clustering of Env in the late phase of the fusion event.
Figure 2 unambiguously illustrates that the lipid dye redistribution occurs at about 13 minutes after co-culture of the effector and target cells, with or without gp41e. Indeed, the point of gp41e intervention was found at the stage of fusion protein recruitment, as illustrated in Figure 3. The dynamics of gp41e action was further quantified by monitoring the variation of the intensity, representing the extent of Env clustering, with the time of gp41e addition to the cell mixtures (Figure 4). Remarkably, Env recruitment with gp41e added at 13 minutes (i.e. the starting time of Env recruitment) was somewhat less extensive than with the inhibitor added at 20 minutes. The result is consistent with the concept that gp41e can block cell-cell fusion by interfering with Env clustering, as depicted in Figure 6. The dramatic onset of its inhibition at 13 minutes, and the more extensive recruitment observed when gp41e was added at 20 minutes, led to the conclusion that effective Env recruitment does not occur until substantial lipid mixing has taken place. Based on mutations of residues in the NHR-CHR domains of gp41, Markosyan et al. [21] proposed that the inhibitory mechanism of the recombinant core protein rests on the exposure of the CHR region, which binds to the NHR region of the adjacent gp41 molecule, thereby interrupting SHB formation. This latter mechanism can be reconciled with our result that lipid mixing was not blocked by gp41e by considering that lipid mixing may occur during the refolding of the NHR and CHR domains prior to completion of SHB formation. In the previous kinetic study of gp41-mediated fusion [2], pre-hairpin formation was found to occur within a few minutes post-coincubation of effector and target cells. Therefore, gp41e can block SHB formation at this stage in an open form and later impede fusion protein recruitment at 13 minutes post-coincubation in a trimeric form. Another mode of inhibitory action of gp41 fragments, involving the calcium binding site (aa 630-647) [22], is also compatible with the results presented here.

Figure 4. The intensity representing recruitment of HIV-1 Env-EGFP (green) on the effector HeLa cells, from quantification of the data in Figure 3.

Figure 5. Fluorescence recovery after photobleaching (FRAP) assay of HIV-1 Env-EGFP. FRAP was used to analyze the mobility of Env-EGFP, measured from the recovery/return of fluorescent signal into the bleached ROI (region of interest). The fluorescence intensity of ROI 1 (the experimental region) decreased to 15% after the pulse and recovered to 65% of its original intensity 1 minute later (Panel A); ROI 2 (A) is the control region without bleaching. Panel B displays the fluorescence image of Env-EGFP-expressing effector cells, with the bleached ROI 1 and the control ROI 2 indicated.

Figure 6. The mode of action of gp41e inhibition of fusion protein clustering. The exogenous gp41e trimer (green) interferes with Env recruitment. The resulting Env-gp41e complex blocks the formation of a functional pore and the progression to complete fusion because gp41e lacks the TMD and cytoplasmic tail needed for disruption of the inner leaflets of the fusing membranes. This could be a mechanism of fusion inhibition by gp41e.
Although our inhibition data are most readily explained by interference with gp41 trimer cluster formation by gp41e adopting an SHB structure, they nevertheless cannot identify or distinguish among the proposed modes of fusion inhibition by the gp41 ectodomain protein. To sum up, gp41e significantly inhibited cell-cell content mixing and Env recruitment, but did not inhibit the exchange of lipids.

Kinetics and functional significance of envelope recruitment for viral fusion
Oligomerization and clustering of the transmembrane proteins of HIV and influenza virus [4,23] have been implicated in the fusion reaction. Thus, previous studies have shown that HIV-1 virions can be recruited to sites of cell contact in the effector dendritic cells [24] and have proposed that contact between the effector and target cells facilitates transmission of HIV-1 by locally concentrating the neighboring virions, mediated by Gag and Env proteins [25]. As an extension, we were interested in examining the role of aggregation of Env molecules in cell-cell fusion. In our previous study [2], visualization of co-receptor-induced clustering of Env was compatible with the idea that a higher-order assembly of multiple Env-co-receptor complexes is necessary for fusion pore formation [15]. Interestingly, the Env recruitment previously observed at about 13 minutes was concurrent with lipid mixing and continued for a few minutes afterwards [2]. In the present work, we found that gp41e inhibited Env recruitment, but not lipid mixing, implying that lipid mixing is initiated prior to Env recruitment. It is noteworthy that, because both lipid mixing and Env recruitment are progressive (i.e. they last for several minutes), some recruitment occurs before the completion of lipid mixing. In other words, the initial stage of Env clustering takes place while lipid mixing is in progress, or the inception of recruitment at some sites is underway while lipid mixing proceeds near completion at other sites. This is possible since the inhibition of clustering is negative-dominant (i.e. the aggregation process can be impeded at any time before its completion). The failure of gp41e to block lipid mixing also implies that this step does not require a large number of Env trimeric subunits. In this respect, our finding is not completely at odds with the report that a single functional Env glycoprotein trimer could be adequate to support HIV-1 entry [26,27]. In addition, a gp41 ectodomain construct alone failed to induce gp120 shedding from the Env-expressing cell [9]. This result is in line with the proposal that gp41e exerts its action at the recruitment step, which is subsequent to gp120 shedding according to our previous study [2]. The kinetics of lipid mixing, fusion pore and SHB formation were extensively investigated by Melikyan and coworkers using varied temperature and lipid inhibitors [3,19]. The lag time of ~15 minutes for complete fusion was largely eliminated by subjecting the fusing cells to the temperature-arrested stage (TAS; co-incubation of effector and target cells at 23°C for 3 hours [3]), suggesting that the rate-limiting step is traversed during the stage when the gp41 HR domains are exposed. It was also observed that TAS was concurrent or overlapping with the lipid mixing stage, and that SHB did not form until the membrane merger occurred. In the present and previous kinetic studies [2], the rate-limiting step was deduced to lie at the lipid mixing and recruitment stages, which may lead to pore opening, as explained below.
Thus, our results corroborate those deduced from Melikyan's work. It was noted that the directional clustering and orientational assembly of several fusion proteins necessary for pore opening and enlargement are associated with a large entropy loss and hence an unfavorable free-energy change; thus, recruitment as a rate-limiting step is energetically reasonable, as exemplified by the flickering of pore opening observed in electrical conductance experiments. Collectively, we hypothesize that the merger of the outer leaflets of apposing membranes is initiated by one or a few functional Env trimers at the contact site, and that its progress is facilitated by the continuous recruitment of adjacent Env subunits. The spatial and orientational coordination of the clustered Env proteins, along with the FP-TMD interaction that disrupts the inner leaflets around the hemifusion diaphragm, helps drive the fusion reaction to pore formation [26]. The dynamics of protein complexes at the cell surface are determined by their organization, oligomerization state, and interaction with the membrane lipids and other constituents of the cell membrane. Their movement is also directed by interaction with the cytoskeleton [28], and hence is not totally random. In the FRAP experiment, the diameter of the photobleached ROI is ~5 μm. Using the recovery time of 50 s obtained from Figure 5A, the diffusion coefficient was estimated to be 0.12 μm² s⁻¹, close to the value of 0.09 μm² s⁻¹ documented for influenza hemagglutinin at 22°C on the cell surface [6,29]. If one assumes that the distance of migration for clustering is on the order of < 1 μm, then with a diffusion coefficient of 0.12 μm² s⁻¹ the time taken for an Env subunit to reach the fusion site would be < 1 s, at least two orders of magnitude less than the ~10 min observed for Env recruitment here (Figure 3). The discrepancy led to the recognition that recruitment as measured in the present work is a directed organization of several trimeric subunits, whereas the diffusion experiments (Figure 5) measured the random movement of a subunit; conceivably, the proper orientation and juxtaposition of several Env trimers would take longer than the time needed to migrate to the fusion site. Another potentially critical factor is that the FP is inserted into the target cell in our assay, but is free in the FRAP study; lateral diffusion could be greatly retarded by the double-membrane anchoring of gp41, an effect which deserves future investigation. The diffusion rate of oligomers on the membrane surface has been observed to be an order of magnitude smaller than that of a monomer [30]. It was proposed that higher-order oligomers are trapped in the cytoskeleton mesh and hence have a low rate of hopping between compartments on the membrane, in contrast to monomers or dimers, which exhibit free diffusion trajectories and larger diffusion coefficients. Influenza HA molecules have also been found to distribute as elongated clusters, suggestive of arrangement along the cytoskeleton [6]. Hence, the compartmentalization imposed by the cytoskeleton may contribute to the differential dynamics between the free lateral diffusion of a membrane protein and the recruitment that fulfills the fusion function observed here. The recruitment and assembly of homo- and hetero-oligomers of integral membrane proteins are also essential to signal transduction; for instance, activation of G-protein-coupled receptors by external ligands triggers the recruitment and hetero-trimeric assembly of G-proteins [31].
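The order-of-magnitude comparison above can be made explicit. The sketch below uses the standard two-dimensional relations D ≈ w²/(4t) and t ≈ x²/(4D); the choice of the full ~5 μm spot size as the length scale, and the function names, are assumptions on our part, since the exact convention behind the reported 0.12 μm² s⁻¹ is not stated in the text.

```python
# Hedged sketch of the diffusion estimate discussed above. The D ~ w^2/(4t) and
# t ~ x^2/(4D) relations, and the use of the full ~5 um spot size as the length
# scale, are assumptions; the paper itself simply reports D = 0.12 um^2/s.

def frap_diffusion_coefficient(spot_size_um: float, recovery_time_s: float) -> float:
    """Crude 2-D diffusion coefficient from a FRAP recovery time."""
    return spot_size_um ** 2 / (4.0 * recovery_time_s)

def migration_time_s(distance_um: float, d_um2_per_s: float) -> float:
    """Mean time for a random walker with coefficient D to cover a given distance."""
    return distance_um ** 2 / (4.0 * d_um2_per_s)

d_est = frap_diffusion_coefficient(5.0, 50.0)   # ~0.125 um^2/s, close to the reported 0.12
t_walk = migration_time_s(1.0, 0.12)            # ~2 s for a ~1 um migration

print(f"estimated D: {d_est:.3f} um^2/s")
print(f"time to diffuse ~1 um: {t_walk:.1f} s, versus ~600 s observed for Env recruitment")
```

Whatever prefactor convention is adopted, the diffusion-limited time remains on the order of seconds, two or more orders of magnitude below the ~10 minute recruitment time, which is the point the comparison is meant to illustrate.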
Thus, the distinction between the directed assembly and the diffusion of membrane proteins addressed in the present work (cf. Figures 3 and 5) may afford some insight into cellular signaling processes.

A proposed refined model of the temporal sequence of CD4/co-receptor-induced conformational changes of the HIV-1 Env protein
We have elucidated the functional role of X4 Env in the different stages of the fusion event, with emphasis on the kinetic aspect, in order to dissect their temporal order [2]. Attachment of the receptor and co-receptor initiates a series of conformational alterations in Env, including extension of the FP, insertion of the FP into the target membrane, dissociation of oligomeric gp120, gp120 shedding from gp41, refolding of HR1 and HR2, NC-1-sensitive conformation (NSC) formation, Env recruitment, lipid mixing and content mixing. In the present study, we have resolved the temporal order of hemifusion (lipid mixing) and Env recruitment. To recapitulate the present and previous kinetic studies, an updated model of the temporal sequence of HIV-1 Env conformational changes is depicted in Figure 7. The model could also be applied to the mechanism of membrane fusion mediated by other class I fusion proteins.

Molecular cloning and protein production of gp41e in a bacterial system
Cloning and expression of the HIV-1 gp41 ectodomain (gp41e) were described in our previous study [12]. In brief, a 428-base-pair fragment was amplified from the plasmid pSVE7'-puro, encoding an ectodomain consisting of the 24th-154th amino acids of gp41 from the HIV-1 HXB2 strain, with eight histidines at the carboxyl terminus for purification of the protein by immobilized metal affinity chromatography (IMAC). Cloning was performed with the pETBlue system (Novagen, Merck Co., Madison, WI). The expressed gp41e protein, induced with 1 mM isopropyl β-D-1-thiogalactopyranoside, was extracted with the BugBuster protein extraction reagent (Novagen, Merck Co., Madison, WI) according to the manufacturer's protocol, and was then subjected to IMAC and purified by HPLC. The molecular mass of the protein was determined by electrospray ionization (ESI) LC-MS (Thermo-Finnigan LCQ series, San Jose, CA).

Images were recorded on a CCD digital camera. At least several hundred cells were observed. Image processing was performed using MetaMorph software (MetaMorph, Carl Zeiss Meditec, Göttingen, Germany).

Measurements of kinetics of lipid mixing and content mixing
For lipid mixing, HIV-1 Env-expressing (effector) HeLa cells and CD4-X4-expressing (target) NIH3T3 cells were incubated for 15 minutes with 20 μM DiO (green) and DiI (red), respectively, in RPMI-1640 medium without serum. The cells were then washed three times with medium or PBS and resuspended at 10⁶ cells/mL in RPMI-1640 medium without serum. DiI-labeled target cells were co-cultured with DiO-labeled effector cells at 37°C, and lipid dye mixing was monitored in real time. For content mixing, the effector and target cells were labeled with calcein AM (green) and CMTMR (red), respectively, at concentrations of 10 μM and 20 μM for 1 hour at 37°C. CMTMR-labeled target cells were co-cultured with calcein-labeled effector cells at 37°C, and dye redistribution was monitored in real time. All fluorescence images were acquired by fluorescence microscopy (MetaMorph, Carl Zeiss Meditec, Göttingen, Germany) coupled to a CCD camera. The dyes were purchased from Molecular Probes (Eugene, OR, USA). At least several hundred cells were observed.
Image processing was performed using MetaMorph software (MetaMorph, Carl Zeiss Meditec, Göttingen, Germany).

Fluorescence recovery after photobleaching (FRAP) assay
A total of 5 × 10⁵ HIV-1 Env-EGFP-expressing HeLa (effector) cells were adhered to glass coverslips and placed into an open cell chamber prior to analysis. Photobleaching experiments were performed using a Zeiss LSM 510 confocal laser-scanning microscope (Carl Zeiss Meditec, Göttingen, Germany). All experiments were done at 37°C. Pixel quantification was performed using the Zeiss LSM software. For the FRAP study, Env-EGFP-expressing HeLa cells were imaged simultaneously, with one region analyzed by FRAP and the other used as a fluorescence control. The second cell was used as a control and was still scanned throughout the time course. The fluorescence intensity was monitored simultaneously. Values for the bleached cells were normalized as a percentage of the fluorescence intensity calculated for the control cells.

Figure 7. A schematic illustration of an updated HIV-1 Env-mediated fusion model. The temporal sequence of Env conformational changes: (1) gp120 interacts with CD4; (2) conformational alteration in both molecules triggers binding to the cognate co-receptor; (3) FP exposure and membrane insertion, with pre-hairpin intermediate formation; (4) gradual shedding of gp120 from the gp41 anchor further facilitates refolding of HR1 and HR2 of gp41, leading to (5) formation of the NSC, an intermediate bundle structure of the gp41 core; (6) mixing of the lipid outer leaflets; (7) recruitment of Env that acts in concert to promote and stabilize the fusion pore; (8) full fusion (coalescence of both leaflets of the apposing membranes), leading to content mixing.
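The FRAP methods above state only that intensities of the bleached cells were normalized as a percentage of the control-region intensity. A minimal sketch of one common way to implement such a normalization is given below; the double-normalization against the control ROI, the function name, and the synthetic trace are our assumptions, not the authors' exact procedure.

```python
import numpy as np

def normalize_frap(bleach_trace, control_trace, n_prebleach=3):
    """Express a bleached-ROI trace as a percentage of its pre-bleach intensity,
    after dividing by an unbleached control ROI to correct for acquisition bleaching."""
    bleach = np.asarray(bleach_trace, dtype=float)
    control = np.asarray(control_trace, dtype=float)
    ratio = bleach / control
    baseline = ratio[:n_prebleach].mean()   # mean pre-bleach ratio
    return 100.0 * ratio / baseline

# Synthetic trace roughly mirroring Figure 5A: a drop to ~15% after the pulse and
# recovery toward ~65% of the original intensity within about a minute.
t = np.arange(0, 75, 5)                                    # seconds
control = np.full(t.shape, 1000.0)                         # steady control ROI
bleach = np.where(t < 15, 1000.0,
                  150.0 + 520.0 * (1.0 - np.exp(-(t - 15.0) / 25.0)))
print(np.round(normalize_frap(bleach, control), 1))
```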
2017-06-25T05:29:47.742Z
2009-03-02T00:00:00.000
{ "year": 2009, "sha1": "8019f04aa67395a89f918f5c0dadb2b743483c51", "oa_license": "CCBY", "oa_url": "https://retrovirology.biomedcentral.com/track/pdf/10.1186/1742-4690-6-20", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8019f04aa67395a89f918f5c0dadb2b743483c51", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
258271222
pes2o/s2orc
v3-fos-license
Disease severity and renal function among sickle cell anaemia patients in a tertiary hospital, South-south, Nigeria: a cross-sectional study

Background
Renal disease is a recognized complication of sickle cell anaemia (SCA), especially from the third decade of life, and is linked to disease severity. This study assessed the association between disease severity and renal function among SCA patients using routine and newer markers of renal function.

Methods
This cross-sectional study recruited 85 SCA patients. Disease severity was assessed using modified Adegoke criteria, which include the frequency of transfusion, painful crises, packed cell volume, and history of complications such as hypertension and chronic leg ulcers. Renal function was assessed using urea, creatinine, and beta-2-microglobulin (β2-M). The association between renal function and disease severity was determined using Pearson's correlation. A p-value < 0.05 was taken as significant.

Results
The mean age of participants was 27.2 ± 7.6 years, with 41 (48.2%) males and 44 (51.8%) females. The mean packed cell volume, serum urea, serum creatinine, and β2-M were 24.0 ± 4.1%, 17.6 ± 7.5 mg/dL, 0.7 ± 0.3 mg/dL, and 3.4 ± 1.2 mg/L, respectively. The majority (54.1%) of the patients had mild disease, while 35.3% and 10.6% had moderate and severe disease, respectively. Forty of the SCA patients had a urine specific gravity below 1.010. The mean values of systolic blood pressure (p=0.001), diastolic blood pressure (p=0.001), serum creatinine (p=0.028) and β2-M (p=0.019) significantly increased with disease severity. There was a significant positive correlation between SCA disease severity and serum urea (r=0.229; p=0.035), and serum β2-microglobulin (r=0.270; p=0.012).

Conclusion
Sickle cell anaemia severity is associated with a decline in renal function using both traditional and novel renal markers. Serum β2-M may serve as a useful marker of renal function and disease severity in SCA.

Background
Sickle cell anaemia (SCA) is a chronic sickling disease characterized by clinical events called crises 1. These clinical events are modified by factors such as hydration, de-oxygenation, temperature, pH, viscosity, levels of haemoglobin F, and co-existing haemoglobinopathies such as thalassemia and glucose-6-phosphate dehydrogenase deficiency 2. Individuals with SCA are at risk of developing renal disease due to the chronic sickling underlying the disease, which results in hemolysis-induced renal injury 3. Chronic kidney disease (CKD) is a recognized complication of SCA associated with risk factors such as hypertension, low haemoglobin concentration, hemolysis, prior vaso-occlusive crisis, and βS gene haplotype 4-9. Renal involvement contributes substantially to reduced life expectancy in patients with SCA, accounting for 16-18% of mortality 10.
Manifestations of the renal complications of SCA include asymptomatic or symptomatic albuminuria or proteinuria. Sklar et al 11 reported renal insufficiency in 4.6% of 116 SCA patients, which was significantly associated with proteinuria and increased age. In Nigeria, 50% of 72 patients with SCA were reported to have albuminuria 12, while proteinuria was found in 41% of 73 SCA patients in Saudi Arabia, where about 27% of SCA patients were within the first three decades of life 13-14. However, in the USA, less than 1% of the total population of 375,152 SCA patients studied had renal failure 15. Powars et al 4 reported that 4.2% of 725 SCA patients had irreversible renal damage progressing to end-stage renal failure. Chronic kidney disease has been identified as the major risk factor for early mortality in adult patients with SCA 16. Previous studies have reported the significant burden of renal complications frequently encountered among patients with SCA 11-15. However, local reports are limited despite the large number of sickle cell disease patients in Nigeria. The rising prevalence of renal impairment in SCA may be due to increased survival, co-morbidities such as uncontrolled hypertension and diabetes mellitus 16, and analgesic abuse for the chronic pain that characterizes the disease. The kidney in individuals with SCA is affected by both the hemodynamic changes of chronic anaemia and the consequences of recurrent vaso-occlusion, leading to structural and functional changes and progression to CKD 17. Despite the use of the disease severity scoring system (DSSS) to categorize patients and routine screening for renal impairment using creatinine, there are limited studies establishing the relationship between disease severity and early renal disease. This may be due to the insensitivity of serum creatinine to early decline in renal function, given the influence of body mass index (BMI) and the hyperfiltration present in patients with SCA. It is, therefore, necessary to assess any relationship between disease severity and renal function using other renal markers such as β2-microglobulin. This study assessed the disease severity of SCA patients and determined its association with renal function using both traditional and new markers of renal function in a tertiary hospital in Southern Nigeria.

Study Design
This was a cross-sectional study carried out in the Departments of Haematology and Chemical Pathology of the University of Benin Teaching Hospital (UBTH), Benin City, Edo State, over a six-month period. This tertiary hospital, located in Southern Nigeria, serves as a major referral center for the neighbouring states of Ondo, Kogi and Delta. The sickle cell clinic is run by consultant haematologists, and an average of 40 SCA patients are seen weekly in the consultant outpatient clinic.

Study Sample Size
The sample size was calculated using the formula for a cross-sectional study 18: N = Z²P(1 − P)/d², where N = minimum sample size, Z = standard normal deviate at the 95% confidence level = 1.96, P = proportion of SCA patients with sickle cell nephropathy = 5% from a previous study 15, and d = degree of precision = 5%. This formula gave a minimum sample size of 80 after including a 10% attrition rate.
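As a quick check, the sample-size formula quoted above can be evaluated directly. The short sketch below only illustrates that arithmetic; the function name and the rounding convention are ours.

```python
def minimum_sample_size(p: float, d: float, z: float = 1.96, attrition: float = 0.10) -> int:
    """N = Z^2 * P * (1 - P) / d^2, inflated for an anticipated attrition fraction."""
    n = (z ** 2) * p * (1.0 - p) / (d ** 2)      # ~73 before attrition
    return round(n * (1.0 + attrition))

# P = 5% prevalence of sickle cell nephropathy, d = 5% precision, 10% attrition
print(minimum_sample_size(p=0.05, d=0.05))       # -> 80, the minimum reported in the text
```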
Study Participants
The study population included 85 adult HbSS patients, diagnosed by haemoglobin electrophoresis, who were attending the haematology clinic of UBTH. Inclusion criteria were consenting HbSS patients aged ≥18 years, in steady state and without established chronic kidney disease. HbSS patients on cimetidine or trimethoprim, and those with haematological malignancies, ongoing infection or diabetes mellitus, were excluded from the study.

Data Collection Tool
A self-administered questionnaire was used to obtain information on demography, medical history, history of previous transfusion, and chronic complications such as hypertension and chronic leg ulcer in the last 12 months. The height and weight of each participant were measured using a stadiometer and weighing scale, in metres and kilograms (kg) respectively, to derive the BMI in kg/m². To measure weight, participants were asked to remove shoes and any heavy clothing before mounting the scale and to keep the head in an upright position, after which the reading in kilograms (kg) was taken by the investigator. Two readings were taken for each participant and the average was recorded. Height was taken using a standard measuring rule attached to the scale; participants were asked to remove head ties and caps before the measuring rule was applied close to the crown of the head. Blood pressure was measured using an Accouson sphygmomanometer. Two readings at 30-minute intervals were taken for each participant, and the mean value was recorded in mmHg. Ten (10) mL of venous blood was collected from the antecubital fossa of each patient; 3 mL was dispensed into an EDTA bottle for hematocrit, while 7 mL was dispensed into plain tubes and allowed to clot. Serum samples were harvested following centrifugation at 1,500 g for 15 minutes for biochemical assays to assess renal function. Urea and creatinine concentrations in serum were determined using the urease and Jaffe kinetic methods (Randox kits), respectively, while β2-M was determined using an ELISA (Quantikine kit) method. The absorbances for urea and creatinine were read on a Spectrumlab PC 22 spectrophotometer, and β2-M on a microplate reader. Thereafter, 5 mL of urine was collected for specific gravity measurement using a dipstick. Known concentrations of quality control sera were assayed for the quantitative parameters, while a qualitative control was used for the urine dipstick. Hyposthenuria is defined as a urine specific gravity less than 1.010 19. A modification of the disease severity scoring system (MDSSS) by Adegoke et al 20 was used to stratify the subjects into mild, moderate and severe disease groups. Parameters used for scoring were stable packed cell volume (PCV), frequency of painful crises, and transfusion history within the previous 12 months, as well as the presence or absence of complications such as hypertension, chronic leg ulcer, cerebrovascular accident (CVA), and acute chest syndrome. The minimum score was 3 while the maximum score was 12. SCA patients were then categorized into mild, moderate and severe disease groups using the parameters proposed by Adegoke et al 20.
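Two of the derived quantities described above are simple enough to express directly: BMI from the averaged weight and height readings, and the hyposthenuria flag from the 1.010 specific gravity cut-off. The sketch below is illustrative only; the example weight and height values are hypothetical, and the MDSSS category cut-offs are not given in this excerpt, so only the stated 3-12 score range is checked.

```python
def bmi_kg_m2(weight_kg: float, height_m: float) -> float:
    """Body mass index from the averaged scale and stadiometer readings."""
    return weight_kg / height_m ** 2

def is_hyposthenuric(urine_specific_gravity: float) -> bool:
    """Hyposthenuria: urine specific gravity below 1.010, as defined above."""
    return urine_specific_gravity < 1.010

def validate_mdsss_total(score: int) -> int:
    """The modified disease severity score described here totals between 3 and 12."""
    if not 3 <= score <= 12:
        raise ValueError("MDSSS total must lie between 3 and 12")
    return score

print(round(bmi_kg_m2(50.0, 1.70), 1))   # 17.3 kg/m^2, below the 18.0 kg/m^2 grouping used in the Results
print(is_hyposthenuric(1.007))           # True; 1.007 was the cohort's mean specific gravity
print(validate_mdsss_total(7))           # mid-range total; category cut-offs are not given here
```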
Statistical Analysis
Data obtained were entered and analyzed using the Statistical Package for the Social Sciences (SPSS) software, version 20. Descriptive statistics were presented in tables. Categorical variables were compared using the chi-square test. Comparison between two group means was done using Student's t-test, and comparison of mean values among three groups was done using ANOVA, with Tukey's HSD (honest significant difference) test used for post hoc analysis. A p-value < 0.05 was taken as the cut-off level for significance. Pearson's correlation test was used to assess associations between continuous variables (a code sketch of this workflow is given after the table list below).

Ethical Approval and Consideration
Ethical approval was obtained from the Human Research and Ethical Committee of UBTH. The reference number of the approved protocol was ADM/E 22/A/VOL.VII/1038. Informed consent was obtained from all participants in the study. The study did not involve any therapeutic trials. Confidentiality of the provided information was ensured throughout the study.

Results
A total of 85 SCA patients with a mean age of 27.2 ± 7.6 years were studied. They comprised 41 (48.2%) males and 44 (51.8%) females. The majority (58.8%) of the study participants had BMI values less than 18.0 kg/m², while 32 (37.6%) had a BMI of 18.0-24.9 kg/m². The mean systolic and diastolic blood pressures were 109.3 ± 13.9 and 67.1 ± 10.7 mmHg, respectively. The mean packed cell volume, serum urea, serum creatinine, and β2-microglobulin were 24.0 ± 4.1%, 17.6 ± 7.5 mg/dL, 0.7 ± 0.3 mg/dL, and 3.4 ± 1.2 mg/L, respectively. Forty (47.1%) of the study participants had a urine specific gravity below 1.010 (Table 1). Based on the proposed MDSSS, the majority (54.1%) of the SCA patients had mild disease, while 35.3% and 10.6% of subjects had moderate and severe disease, respectively. Seventy-one (83.5%) of the SCA patients had a positive history of previous blood transfusion, while 76.5% had a history of vaso-occlusive crises in the preceding 12 months. A history of complications such as chronic leg ulcer and hypertension was present in 18.8% and 4.7% of patients, respectively (Table 2). The mean values of systolic blood pressure (p=0.001), diastolic blood pressure (p=0.001), serum creatinine (p=0.028), and β2-M (p=0.019) significantly increased with disease severity (Table 3). Post hoc analysis showed that the mean serum β2-microglobulin was significantly higher in SCA patients with severe disease compared with those with moderate disease (p=0.025) and mild disease (p=0.005). It also showed that the mean serum creatinine was significantly higher in those with severe disease compared with those with mild disease (p=0.011) and moderate disease (p=0.011) (Table 4). There was no significant difference in the mean specific gravity of urine across the disease severity groups (Table 4). There was a significant positive correlation between SCA disease severity and serum urea (r=0.229; p=0.035), and serum β2-microglobulin (r=0.270; p=0.012) (Table 5).

Discussion
The participants in this study were mainly young adults, with a mean age of about 27 years. This is similar to the 24 years reported as the mean age of the adult SCA population in a previous study from Nigeria 11. This may reflect the lower life expectancy of SCA patients compared with individuals who do not have SCA. Severe disease was found in 10.6% of our study participants, similar to the 10.4% reported by Adegoke et al 20. However, this finding is higher than the 4.5%, 5.8% and 7.8% reported in Saudi Arabia, Yemen and Dakar, respectively 21-23. The differences in the prevalence of severe disease in these various geographical
locations could be explained by the role of genetic factors such as β-globin haplotype polymorphisms. The predominant haplotype in Saudi Arabia and Yemen is the Arab-Indian haplotype, which is associated with mild disease, whereas the Benin haplotype that predominates in Southern Nigeria has a severe clinical presentation 20,24-26. The use of certain medications such as hydroxyurea by some SCA patients in the various studies, and other genetic factors such as glucose-6-phosphate dehydrogenase deficiency, thalassemia, and fetal haemoglobin levels, may also modify the clinical presentation of SCA 2. In addition, differences in methodology, such as the age of study participants and the parameters used in the assessment of disease severity, may also partly explain the variation in disease severity across these studies. The mean values of serum urea and creatinine in our study participants are close to the lower limit of the normal reference values for Nigerians 27,28. These traditional markers of kidney function may be affected by non-renal factors, which may limit their use 26. For example, muscle mass affects serum creatinine, while the amount of protein intake and the body's ability to catabolize urea may affect serum urea 29. SCA patients usually have reduced muscle mass, which is supported in our study, where about 60% were underweight. This has implications for clinical practice because serum creatinine may still be within the normal reference values in SCA patients even in the presence of significant kidney damage 29. This underscores the need to use other markers of kidney function that are not affected by reduced muscle mass. The mean serum β2-microglobulin level in our study participants is higher than the upper limit of the normal reference value for Nigerians 30. The finding in this study is similar to the report by De Jong et al 31. β2-microglobulin is a more favourable marker that could be used to assess renal function in SCA patients 32. It is a low-molecular-weight protein and a component of the major histocompatibility complex class I 33-35. It measures about 12,000 daltons and is produced by nucleated cells 33-35. It is filtered at the glomerulus and almost completely reabsorbed and destroyed by the proximal convoluted tubules 33-35; hence only a minimal amount is seen in the urine of healthy individuals. Measurement of β2-microglobulin can therefore give information on both glomerular and tubular function. Autoimmune disorders and certain malignancies may lead to increased serum β2-microglobulin; hence these conditions must be excluded when using it to assess renal function. Tubular injury occurs early in sickle cell nephropathy, before glomerular function becomes impaired 36; therefore, markers of tubular injury will be highly valuable in the detection of early renal dysfunction even when serum creatinine is within normal limits. Although a detailed assessment of tubular function was not done, urine specific gravity, which also assesses the concentrating ability of the renal tubules, was measured. The result showed impaired concentrating ability of the renal tubules in about half of the SCA patients. The mean specific gravity of 1.007 found in this study is similar to the 1.006 reported in a study done in Cameroon 37. One of the advantages of using β2-microglobulin in SCA patients over traditional renal markers such as urea and creatinine is that it assesses both glomerular and tubular function, which are commonly affected in sickle cell nephropathy. The limitation of this
study was that we did not include a reference exogenous renal marker, such as inulin, for comparison with β2-microglobulin in the assessment of renal function among SCA patients, because the methodology involved is very cumbersome. Also, a detailed assessment of tubular function was not done in the study due to limited funds.

Conclusion
The majority of patients with sickle cell anaemia had mild to moderate disease using the MDSSS. Disease severity was significantly associated with declining renal function using both traditional and novel renal markers; however, serum β2-microglobulin had a better association with disease severity than creatinine. Serum β2-microglobulin may serve as a useful marker of renal function and disease severity in patients with SCA, especially when serum creatinine is within normal limits.

Table 1: Demographic Information and Clinical Parameters of Study Participants
Table 2: Comparison of Haematocrit and Biochemical Parameters of Study Participants
Table 3: Frequency of Disease Severity Categorization and Some Severity Indices
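The statistical workflow listed under Statistical Analysis (chi-square, Student's t-test, one-way ANOVA with Tukey's HSD post hoc, and Pearson's correlation at a 0.05 significance level) was run in SPSS. The sketch below shows an equivalent pipeline in Python for the two tests reported in Tables 4 and 5; the data frame, column names, and values are synthetic placeholders, not the study data.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in for the study data: 85 patients, severity group, MDSSS total,
# and serum beta-2-microglobulin (mg/L). Column names are placeholders.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "severity": rng.choice(["mild", "moderate", "severe"], size=85, p=[0.54, 0.35, 0.11]),
    "severity_score": rng.integers(3, 13, size=85),
    "b2m_mg_per_l": rng.normal(3.4, 1.2, size=85),
})

# One-way ANOVA across the three severity groups, with Tukey HSD post hoc if significant.
groups = [g["b2m_mg_per_l"].to_numpy() for _, g in df.groupby("severity")]
f_stat, p_anova = stats.f_oneway(*groups)
if p_anova < 0.05:
    print(pairwise_tukeyhsd(df["b2m_mg_per_l"], df["severity"]))

# Pearson correlation between the disease severity score and beta-2-microglobulin.
r, p_corr = stats.pearsonr(df["severity_score"], df["b2m_mg_per_l"])
print(f"ANOVA p = {p_anova:.3f}; Pearson r = {r:.3f} (p = {p_corr:.3f})")
```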
2023-04-22T15:09:07.926Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "027e32c45d8386f59cb7b7d47e11a69a0e4299e5", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.4314/mmj.v35i1.3", "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "9692ee4897dc11e4198d833e0ef2fbc45ab070ff", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }